
Commit fd3af66

nerpaula authored and Simran-B committed
finalize content, remove unnecessary screenshots, apply to all versions
1 parent ecbb174 commit fd3af66

9 files changed: +16 −25 lines

site/content/ai-suite/graphrag/web-interface.md

Lines changed: 16 additions & 25 deletions
@@ -3,7 +3,7 @@ title: How to use GraphRAG in the Arango Data Platform web interface
 menuTitle: Web Interface
 weight: 20
 description: >-
-  Learn how to create, configure, and run a full GraphRAG workflow in just a few steps using the Platform web interface
+  Learn how to create, configure, and run a full GraphRAG workflow in just a few steps
 ---
 {{< tip >}}
 The Arango Data Platform & AI Suite are available as a pre-release. To get
@@ -59,8 +59,6 @@ configure and start a new importer service job. Follow the steps below.
    the service is using **O4 Mini**.
 3. Enter your **OpenAI API Key**.
 4. Click the **Start importer service** button.
-
-   ![Configure Importer service using OpenAI](../../images/graphrag-ui-configure-importer-openai.png)
 {{< /tab >}}
 
 {{< tab "OpenRouter" >}}
@@ -75,8 +73,6 @@ configure and start a new importer service job. Follow the steps below.
    When using OpenRouter, you need both API keys because the LLM responses are served
    via OpenRouter while OpenAI is used for the embedding model.
    {{< /info >}}
-
-   ![Configure Importer service using OpenRouter](../../images/graphrag-ui-configure-importer-openrouter.png)
 {{< /tab >}}
 
 {{< tab "Triton LLM Host" >}}
@@ -88,13 +84,11 @@ via OpenRouter while OpenAI is used for the embedding model.
    Note that you must first register your model in MLflow. The [Triton LLM Host](../reference/triton-inference-server.md)
    service automatically downloads and loads models from the MLflow registry.
    {{< /info >}}
-
-   ![Configure Importer service using Triton](../../images/graphrag-ui-configure-importer-triton.png)
 {{< /tab >}}
 
 {{< /tabs >}}
 
-See also the [GraphRAG Importer](../reference/importer.md) service documentation.
+See also the [Importer](../reference/importer.md) service documentation.
 
 ## Add data source
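The Triton LLM Host tab above assumes the model already exists in the MLflow registry. As a rough illustration of that prerequisite only, here is a minimal sketch of registering model files with a generic MLflow client; the tracking URI, artifact layout, and model name are placeholders, not values from this commit, and the exact registration format the Triton LLM Host service expects may differ:

```python
import mlflow

# Placeholder tracking server; point this at your MLflow deployment.
mlflow.set_tracking_uri("http://mlflow.example.com:5000")

with mlflow.start_run() as run:
    # Log the model files as run artifacts. The local directory and
    # artifact path here are illustrative only.
    mlflow.log_artifacts("path/to/my-llm", artifact_path="model")
    model_uri = f"runs:/{run.info.run_id}/model"

# Register the logged artifacts under a name in the MLflow model registry,
# which is where the Triton LLM Host service is described as loading models from.
mlflow.register_model(model_uri=model_uri, name="my-llm")
```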

@@ -152,8 +146,6 @@ the generated Knowledge Graph. To configure the retriever service, open the
    the service uses **O4 Mini**.
 3. Enter your **OpenAI API Key**.
 4. Click the **Start retriever service** button.
-
-   ![Configure Retriever Service using OpenAI](../../images/graphrag-ui-configure-retriever-openai.png)
 {{< /tab >}}
 
 {{< tab "OpenRouter" >}}
@@ -167,8 +159,6 @@ the generated Knowledge Graph. To configure the retriever service, open the
    When using OpenRouter, the LLM responses are served via OpenRouter while OpenAI
    is used for the embedding model.
    {{< /info >}}
-
-   ![Configure Retriever Service using OpenRouter](../../images/graphrag-ui-configure-retriever-openrouter.png)
 {{< /tab >}}
 
 {{< tab "Triton LLM Host" >}}
@@ -180,27 +170,28 @@ is used for the embedding model.
    Note that you must first register your model in MLflow. The [Triton LLM Host](../reference/triton-inference-server.md)
    service automatically downloads and loads models from the MLflow registry.
    {{< /info >}}
-
-   ![Configure Retriever Service using Triton](../../images/graphrag-ui-configure-retriever-triton.png)
 {{< /tab >}}
 
 {{< /tabs >}}
 
-See also the [GraphRAG Retriever](../reference/retriever.md) documentation.
+See also the [Retriever](../reference/retriever.md) documentation.
 
 ## Chat with your Knowledge Graph
 
-The Retriever service provides two search methods:
-- [Local search](../reference/retriever.md#local-search): Local queries let you
-  explore specific nodes and their direct connections.
-- [Global search](../reference/retriever.md#global-search): Global queries uncover
-  broader patters and relationships across the entire Knowledge Graph.
-
-![Chat with your Knowledge Graph](../../images/graphrag-ui-chat.png)
+The chat interface provides two search methods:
+- **Instant search**: Instant queries provide fast responses.
+- **Deep search**: This option takes longer to return a response.
 
 In addition to querying the Knowledge Graph, the chat service allows you to do the following:
-- Switch the search method from **Local Query** to **Global Query** and vice-versa
+- Switch the search method from **Instant search** to **Deep search** and vice versa
   directly in the chat
-- Change the retriever service
+- Change the retriever service or create a new one
 - Clear the chat
-- Integrate the Knowledge Graph chat service into your own applications
+
+## Integrate the Knowledge Graph chat service into your application
+
+To integrate any service into your own applications,
+go to **Project Settings** and use the copy button next to each service to
+copy its integration endpoint. You can make `POST` requests to the endpoints
+with your queries; the services accept `JSON` payloads and return structured
+responses for building custom interfaces.
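As a rough sketch of what such an integration call could look like: the endpoint URL, payload field name, and authorization header below are assumptions for illustration, not values documented in this commit. Use the endpoint copied from Project Settings and check the service documentation for the actual request schema.

```python
import json
import urllib.request

# Hypothetical values: replace with the endpoint copied from Project Settings
# and whatever authentication your deployment requires.
ENDPOINT = "https://your-platform-host/graphrag/retriever/query"
TOKEN = "your-api-token"

# JSON payload carrying the query; the field name "query" is an assumption.
payload = json.dumps({"query": "Summarize the key topics in my documents."}).encode()

request = urllib.request.Request(
    ENDPOINT,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",
    },
    method="POST",
)

# The services are described as returning structured JSON responses,
# which you can feed into your own chat or search interface.
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))
```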
8 binary image files (the removed screenshots) deleted; binary contents not shown.
