**docs/examples_notebooks/api_overview.ipynb** (+9 −8)

```diff
@@ -16,7 +16,7 @@
    "source": [
     "## API Overview\n",
     "\n",
-    "This notebook provides a demonstration of how to interact with graphrag as a library using the API as opposed to the CLI. Note that graphrag's CLI actually connects to the library through this API for all operations."
+    "This notebook provides a demonstration of how to interact with graphrag as a library using the API as opposed to the CLI. Note that graphrag's CLI actually connects to the library through this API for all operations.\n"
    ]
   },
   {
@@ -48,16 +48,17 @@
    "metadata": {},
    "source": [
     "## Prerequisite\n",
+    "\n",
     "As a prerequisite to all API operations, a `GraphRagConfig` object is required. It is the primary means to control the behavior of graphrag and can be instantiated from a `settings.yaml` configuration file.\n",
     "\n",
-    "Please refer to the [CLI docs](https://microsoft.github.io/graphrag/cli/#init) for more detailed information on how to generate the `settings.yaml` file."
+    "Please refer to the [CLI docs](https://microsoft.github.io/graphrag/cli/#init) for more detailed information on how to generate the `settings.yaml` file.\n"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Generate a `GraphRagConfig` object"
+    "### Generate a `GraphRagConfig` object\n"
    ]
   },
   {
```
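For context on what this cell does, here is a minimal sketch of instantiating the config from a project directory containing `settings.yaml`. The `load_config` import path and the `./project` directory are assumptions based on recent graphrag releases; check them against your installed version.

```python
from pathlib import Path

from graphrag.config.load_config import load_config  # import path may vary by release

# Assumes ./project holds the settings.yaml produced by `graphrag init`.
PROJECT_DIRECTORY = Path("./project")
graphrag_config = load_config(PROJECT_DIRECTORY)
```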
```diff
@@ -77,14 +78,14 @@
    "source": [
     "## Indexing API\n",
     "\n",
-    "*Indexing* is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats."
+    "_Indexing_ is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats.\n"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Build an index"
+    "## Build an index\n"
    ]
   },
   {
```
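A sketch of the indexing call behind this cell, assuming the async `api.build_index` entry point and the result fields (`workflow`, `errors`) used in recent releases; signatures have shifted between versions:

```python
import asyncio
from pathlib import Path

import graphrag.api as api
from graphrag.config.load_config import load_config

async def main() -> None:
    config = load_config(Path("./project"))  # assumed project directory
    # Runs the full indexing pipeline; returns one result per workflow.
    results = await api.build_index(config=config)
    for result in results:
        status = "error" if result.errors else "success"
        print(f"{result.workflow}: {status}")

asyncio.run(main())
```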
```diff
@@ -107,7 +108,7 @@
    "source": [
     "## Query an index\n",
     "\n",
-    "To query an index, several index files must first be read into memory and passed to the query API."
+    "To query an index, several index files must first be read into memory and passed to the query API.\n"
    ]
   },
   {
@@ -138,7 +139,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The response object is the official reponse from graphrag while the context object holds various metadata regarding the querying process used to obtain the final response."
+    "The response object is the official reponse from graphrag while the context object holds various metadata regarding the querying process used to obtain the final response.\n"
    ]
   },
   {
@@ -154,7 +155,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Digging into the context a bit more provides users with extremely granular information such as what sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM model)."
+    "Digging into the context a bit more provides users with extremely granular information such as what sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM model).\n"
```
**docs/examples_notebooks/input_documents.ipynb** (+7 −7)

```diff
@@ -18,7 +18,7 @@
     "\n",
     "Newer versions of GraphRAG let you submit a dataframe directly instead of running through the input processing step. This notebook demonstrates with regular or update runs.\n",
     "\n",
-    "If performing an update, the assumption is that your dataframe contains only the new documents to add to the index."
+    "If performing an update, the assumption is that your dataframe contains only the new documents to add to the index.\n"
    ]
   },
   {
@@ -54,7 +54,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Generate a `GraphRagConfig` object"
+    "### Generate a `GraphRagConfig` object\n"
    ]
   },
   {
@@ -72,14 +72,14 @@
    "source": [
     "## Indexing API\n",
     "\n",
-    "*Indexing* is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats."
+    "_Indexing_ is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats.\n"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Build an index"
+    "## Build an index\n"
    ]
   },
   {
@@ -109,7 +109,7 @@
    "source": [
     "## Query an index\n",
     "\n",
-    "To query an index, several index files must first be read into memory and passed to the query API."
+    "To query an index, several index files must first be read into memory and passed to the query API.\n"
    ]
   },
   {
@@ -140,7 +140,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The response object is the official reponse from graphrag while the context object holds various metadata regarding the querying process used to obtain the final response."
+    "The response object is the official reponse from graphrag while the context object holds various metadata regarding the querying process used to obtain the final response.\n"
    ]
   },
   {
@@ -156,7 +156,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Digging into the context a bit more provides users with extremely granular information such as what sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM model)."
+    "Digging into the context a bit more provides users with extremely granular information such as what sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM model).\n"
```
````diff
 To initialize your workspace, first run the `graphrag init` command.
-Since we have already configured a directory named `./christmas` in the previous step, run the following command:
 
 ```sh
-graphrag init --root ./christmas
+graphrag init
 ```
 
-This will create two files:`.env` and `settings.yaml` in the `./christmas` directory.
+This will create two files, `.env` and `settings.yaml`, and a directory `input`, in the current directory.
 
+- `input` Location of text files to process with `graphrag`.
 - `.env` contains the environment variables required to run the GraphRAG pipeline. If you inspect the file, you'll see a single environment variable defined,
 `GRAPHRAG_API_KEY=<API_KEY>`. Replace `<API_KEY>` with your own OpenAI or Azure API key.
 - `settings.yaml` contains the settings for the pipeline. You can modify this file to change the settings for the pipeline.
-<br/>
+
+### Download Sample Text
+
+Get a copy of A Christmas Carol by Charles Dickens from a trusted source:
````
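The guide fetches the book with `curl`; here is an equivalent Python sketch, assuming the Project Gutenberg URL the guide has pointed at (verify the link before relying on it):

```python
from pathlib import Path
from urllib.request import urlretrieve

# `graphrag init` created ./input; the indexer reads .txt files from there.
input_dir = Path("./input")
input_dir.mkdir(exist_ok=True)

# Assumed source URL: the Project Gutenberg text of A Christmas Carol.
urlretrieve(
    "https://www.gutenberg.org/cache/epub/24022/pg24022.txt",
    input_dir / "book.txt",
)
```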
````diff
@@ -56,13 +75,14 @@ In addition to setting your API key, Azure OpenAI users should set the variables
 ```yaml
 type: chat
 model_provider: azure
+model: gpt-4.1
+deployment_name: <AZURE_DEPLOYMENT_NAME>
 api_base: https://<instance>.openai.azure.com
 api_version: 2024-02-15-preview # You can customize this for other versions
 ```
 
-Most people tend to name their deployments the same as their model - if yours are different, add the `deployment_name` as well.
-
 #### Using Managed Auth on Azure
+
 To use managed auth, edit the auth_type in your model config and *remove* the api_key line:
 
 ```yaml
@@ -71,38 +91,34 @@ auth_type: azure_managed_identity # Default auth_type is is api_key
 
 You will also need to login with [az login](https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli) and select the subscription with your endpoint.
 
-## Running the Indexing pipeline
+## Index
 
-Now we're ready to run the pipeline!
+Now we're ready to index!
 
 ```sh
-graphrag index --root ./christmas
+graphrag index
 ```
 
 
-This process will usually take a few minutes to run. Once the pipeline is complete, you should see a new folder called `./christmas/output` with a series of parquet files.
+This process will usually take a few minutes to run. Once the pipeline is complete, you should see a new folder called `./output` with a series of parquet files.
 
-#Using the Query Engine
+# Query
 
 Now let's ask some questions using this dataset.
 
 Here is an example using Global search to ask a high-level question:
 
 ```sh
-graphrag query \
---root ./christmas \
---method global \
---query "What are the top themes in this story?"
+graphrag query "What are the top themes in this story?"
 ```
 
 Here is an example using Local search to ask a more specific question about a particular character:
 
 ```sh
 graphrag query \
---root ./christmas \
---method local \
---query "Who is Scrooge and what are his main relationships?"
+"Who is Scrooge and what are his main relationships?" \
+--method local
 ```
 
 Please refer to [Query Engine](query/overview.md) docs for detailed information about how to leverage our Local and Global search mechanisms for extracting meaningful insights from data after the Indexer has wrapped up execution.
````
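The same local search can also be issued through the library API rather than the CLI. A hedged sketch, assuming the async `api.local_search` entry point and the dataframe keywords used in recent releases:

```python
import asyncio
from pathlib import Path

import pandas as pd

import graphrag.api as api
from graphrag.config.load_config import load_config

async def main() -> None:
    root = Path(".")  # the project root, i.e. what `--root` defaults to
    config = load_config(root)
    output = root / "output"
    response, context = await api.local_search(
        config=config,
        entities=pd.read_parquet(output / "entities.parquet"),
        communities=pd.read_parquet(output / "communities.parquet"),
        community_reports=pd.read_parquet(output / "community_reports.parquet"),
        text_units=pd.read_parquet(output / "text_units.parquet"),
        relationships=pd.read_parquet(output / "relationships.parquet"),
        covariates=None,  # claim extraction is optional and off by default
        community_level=2,
        response_type="Multiple Paragraphs",
        query="Who is Scrooge and what are his main relationships?",
    )
    print(response)

asyncio.run(main())
```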
```diff
-- `--config` (required): The path to the configuration file. This is required to load the data and model settings.
-
-- `--root` (optional): The data project root directory, including the config files (YML, JSON, or .env). Defaults to the current directory.
+- `--root` (optional): Path to the project directory that contains the config file (settings.yaml). Defaults to the current directory.
 
 - `--domain` (optional): The domain related to your input data, such as 'space science', 'microbiology', or 'environmental news'. If left empty, the domain will be inferred from the input data.
```
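For reference, how `--root` maps onto the library: the directory given to the flag is what `load_config` receives, and `settings.yaml` (plus `.env`) is resolved inside it. The optional second argument shown for a non-default config location is an assumption based on recent releases.

```python
from pathlib import Path

from graphrag.config.load_config import load_config

# Equivalent of `--root ./my_project`: settings.yaml and .env are
# resolved relative to this directory.
config = load_config(Path("./my_project"))

# Assumed optional parameter for a config file stored elsewhere:
config = load_config(Path("./my_project"), Path("./configs/settings.yaml"))
```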