Commit e0cce31

Graphrag config (#2119)
* Add load_config to graphrag-common package.
Parent: ae1f5e1

File tree: 65 files changed (+786, −736 lines)


docs/examples_notebooks/api_overview.ipynb

Lines changed: 9 additions & 8 deletions
@@ -16,7 +16,7 @@
    "source": [
     "## API Overview\n",
     "\n",
-    "This notebook provides a demonstration of how to interact with graphrag as a library using the API as opposed to the CLI. Note that graphrag's CLI actually connects to the library through this API for all operations. "
+    "This notebook provides a demonstration of how to interact with graphrag as a library using the API as opposed to the CLI. Note that graphrag's CLI actually connects to the library through this API for all operations.\n"
    ]
   },
   {
@@ -48,16 +48,17 @@
    "metadata": {},
    "source": [
     "## Prerequisite\n",
+    "\n",
     "As a prerequisite to all API operations, a `GraphRagConfig` object is required. It is the primary means to control the behavior of graphrag and can be instantiated from a `settings.yaml` configuration file.\n",
     "\n",
-    "Please refer to the [CLI docs](https://microsoft.github.io/graphrag/cli/#init) for more detailed information on how to generate the `settings.yaml` file."
+    "Please refer to the [CLI docs](https://microsoft.github.io/graphrag/cli/#init) for more detailed information on how to generate the `settings.yaml` file.\n"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Generate a `GraphRagConfig` object"
+    "### Generate a `GraphRagConfig` object\n"
    ]
   },
   {
@@ -77,14 +78,14 @@
    "source": [
     "## Indexing API\n",
     "\n",
-    "*Indexing* is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats."
+    "_Indexing_ is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats.\n"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Build an index"
+    "## Build an index\n"
    ]
   },
   {
@@ -107,7 +108,7 @@
    "source": [
     "## Query an index\n",
     "\n",
-    "To query an index, several index files must first be read into memory and passed to the query API. "
+    "To query an index, several index files must first be read into memory and passed to the query API.\n"
    ]
   },
   {
@@ -138,7 +139,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The response object is the official reponse from graphrag while the context object holds various metadata regarding the querying process used to obtain the final response."
+    "The response object is the official reponse from graphrag while the context object holds various metadata regarding the querying process used to obtain the final response.\n"
    ]
   },
   {
@@ -154,7 +155,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Digging into the context a bit more provides users with extremely granular information such as what sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM model)."
+    "Digging into the context a bit more provides users with extremely granular information such as what sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM model).\n"
    ]
   },
   {

docs/examples_notebooks/input_documents.ipynb

Lines changed: 7 additions & 7 deletions
@@ -18,7 +18,7 @@
     "\n",
     "Newer versions of GraphRAG let you submit a dataframe directly instead of running through the input processing step. This notebook demonstrates with regular or update runs.\n",
     "\n",
-    "If performing an update, the assumption is that your dataframe contains only the new documents to add to the index."
+    "If performing an update, the assumption is that your dataframe contains only the new documents to add to the index.\n"
    ]
   },
   {
@@ -54,7 +54,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Generate a `GraphRagConfig` object"
+    "### Generate a `GraphRagConfig` object\n"
    ]
   },
   {
@@ -72,14 +72,14 @@
    "source": [
     "## Indexing API\n",
     "\n",
-    "*Indexing* is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats."
+    "_Indexing_ is the process of ingesting raw text data and constructing a knowledge graph. GraphRAG currently supports plaintext (`.txt`) and `.csv` file formats.\n"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Build an index"
+    "## Build an index\n"
    ]
   },
   {
@@ -109,7 +109,7 @@
    "source": [
     "## Query an index\n",
     "\n",
-    "To query an index, several index files must first be read into memory and passed to the query API. "
+    "To query an index, several index files must first be read into memory and passed to the query API.\n"
    ]
   },
   {
@@ -140,7 +140,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The response object is the official reponse from graphrag while the context object holds various metadata regarding the querying process used to obtain the final response."
+    "The response object is the official reponse from graphrag while the context object holds various metadata regarding the querying process used to obtain the final response.\n"
    ]
   },
   {
@@ -156,7 +156,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Digging into the context a bit more provides users with extremely granular information such as what sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM model)."
+    "Digging into the context a bit more provides users with extremely granular information such as what sources of data (down to the level of text chunks) were ultimately retrieved and used as part of the context sent to the LLM model).\n"
    ]
   },
   {

docs/get_started.md

Lines changed: 44 additions & 28 deletions
@@ -10,40 +10,59 @@ The following is a simple end-to-end example for using GraphRAG on the command l
 
 It shows how to use the system to index some text, and then use the indexed data to answer questions about the documents.
 
-# Install GraphRAG
+## Install GraphRAG
+
+To get started, create a project space and python virtual environment to install `graphrag`.
+
+### Create Project Space
 
 ```bash
-pip install graphrag
+mkdir graphrag_quickstart
+cd graphrag_quickstart
+python -m venv .venv
 ```
+### Activate Python Virtual Environment - Unix/MacOS
 
-# Running the Indexer
-We need to set up a data project and some initial configuration. First let's get a sample dataset ready:
+```bash
+source .venv/bin/activate
+```
 
-```sh
-mkdir -p ./christmas/input
+### Activate Python Virtual Environment - Windows
+
+```bash
+.venv\Scripts\activate
 ```
 
-Get a copy of A Christmas Carol by Charles Dickens from a trusted source:
+### Install GraphRAG
 
-```sh
-curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt -o ./christmas/input/book.txt
+```bash
+python -m pip install graphrag
 ```
 
-## Set Up Your Workspace Variables
+### Initialize GraphRAG
 
 To initialize your workspace, first run the `graphrag init` command.
-Since we have already configured a directory named `./christmas` in the previous step, run the following command:
 
 ```sh
-graphrag init --root ./christmas
+graphrag init
 ```
 
-This will create two files: `.env` and `settings.yaml` in the `./christmas` directory.
+This will create two files, `.env` and `settings.yaml`, and a directory `input`, in the current directory.
 
+- `input` Location of text files to process with `graphrag`.
 - `.env` contains the environment variables required to run the GraphRAG pipeline. If you inspect the file, you'll see a single environment variable defined,
 `GRAPHRAG_API_KEY=<API_KEY>`. Replace `<API_KEY>` with your own OpenAI or Azure API key.
 - `settings.yaml` contains the settings for the pipeline. You can modify this file to change the settings for the pipeline.
-<br/>
+
+### Download Sample Text
+
+Get a copy of A Christmas Carol by Charles Dickens from a trusted source:
+
+```sh
+curl https://www.gutenberg.org/cache/epub/24022/pg24022.txt -o ./input/book.txt
+```
+
+## Set Up Workspace Variables
 
 ### Using OpenAI
 
@@ -56,13 +75,14 @@ In addition to setting your API key, Azure OpenAI users should set the variables
 ```yaml
 type: chat
 model_provider: azure
+model: gpt-4.1
+deployment_name: <AZURE_DEPLOYMENT_NAME>
 api_base: https://<instance>.openai.azure.com
 api_version: 2024-02-15-preview # You can customize this for other versions
 ```
 
-Most people tend to name their deployments the same as their model - if yours are different, add the `deployment_name` as well.
-
 #### Using Managed Auth on Azure
+
 To use managed auth, edit the auth_type in your model config and *remove* the api_key line:
 
 ```yaml
@@ -71,38 +91,34 @@ auth_type: azure_managed_identity # Default auth_type is is api_key
 
 You will also need to login with [az login](https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli) and select the subscription with your endpoint.
 
-## Running the Indexing pipeline
+## Index
 
-Now we're ready to run the pipeline!
+Now we're ready to index!
 
 ```sh
-graphrag index --root ./christmas
+graphrag index
 ```
 
 ![pipeline executing from the CLI](img/pipeline-running.png)
 
-This process will usually take a few minutes to run. Once the pipeline is complete, you should see a new folder called `./christmas/output` with a series of parquet files.
+This process will usually take a few minutes to run. Once the pipeline is complete, you should see a new folder called `./output` with a series of parquet files.
 
-# Using the Query Engine
+# Query
 
 Now let's ask some questions using this dataset.
 
 Here is an example using Global search to ask a high-level question:
 
 ```sh
-graphrag query \
-  --root ./christmas \
-  --method global \
-  --query "What are the top themes in this story?"
+graphrag query "What are the top themes in this story?"
 ```
 
 Here is an example using Local search to ask a more specific question about a particular character:
 
 ```sh
 graphrag query \
-  --root ./christmas \
-  --method local \
-  --query "Who is Scrooge and what are his main relationships?"
+  "Who is Scrooge and what are his main relationships?" \
+  --method local
 ```
 
 Please refer to [Query Engine](query/overview.md) docs for detailed information about how to leverage our Local and Global search mechanisms for extracting meaningful insights from data after the Indexer has wrapped up execution.
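The quickstart above pairs a `.env` file with `settings.yaml`: the YAML references values such as `GRAPHRAG_API_KEY` via `${VAR}` placeholders that are filled in from the environment at load time. As a rough sketch of how that kind of interpolation works in general (an illustration only, not graphrag's actual loader — the function name `interpolate_env` is hypothetical):

```python
import os
import re


def interpolate_env(text: str) -> str:
    """Replace ${VAR} placeholders with values from the environment."""

    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name!r} is not set")
        return os.environ[name]

    # Match ${NAME} where NAME is a word-character identifier.
    return re.sub(r"\$\{(\w+)\}", replace, text)


# Simulate what loading .env would do, then interpolate a settings snippet.
os.environ["GRAPHRAG_API_KEY"] = "sk-example"
snippet = "api_key: ${GRAPHRAG_API_KEY}"
print(interpolate_env(snippet))  # api_key: sk-example
```

Raising on a missing variable (rather than substituting an empty string) surfaces a forgotten `.env` entry early instead of sending an empty API key to the model provider.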

docs/index/byog.md

Lines changed: 1 addition & 1 deletion
@@ -65,4 +65,4 @@ Putting it all together:
 
 - `output`: Create an output folder and put your entities and relationships (and optionally text_units) parquet files in it.
 - Update your config as noted above to only run the workflows subset you need.
-- Run `graphrag index --root <your project root>`
+- Run `graphrag index --root <your_project_root>`

docs/prompt_tuning/auto_prompt_tuning.md

Lines changed: 4 additions & 6 deletions
@@ -20,16 +20,14 @@ Before running auto tuning, ensure you have already initialized your workspace w
 You can run the main script from the command line with various options:
 
 ```bash
-graphrag prompt-tune [--root ROOT] [--config CONFIG] [--domain DOMAIN] [--selection-method METHOD] [--limit LIMIT] [--language LANGUAGE] \
+graphrag prompt-tune [--root ROOT] [--domain DOMAIN] [--selection-method METHOD] [--limit LIMIT] [--language LANGUAGE] \
 [--max-tokens MAX_TOKENS] [--chunk-size CHUNK_SIZE] [--n-subset-max N_SUBSET_MAX] [--k K] \
 [--min-examples-required MIN_EXAMPLES_REQUIRED] [--discover-entity-types] [--output OUTPUT]
 ```
 
 ## Command-Line Options
 
-- `--config` (required): The path to the configuration file. This is required to load the data and model settings.
-
-- `--root` (optional): The data project root directory, including the config files (YML, JSON, or .env). Defaults to the current directory.
+- `--root` (optional): Path to the project directory that contains the config file (settings.yaml). Defaults to the current directory.
 
 - `--domain` (optional): The domain related to your input data, such as 'space science', 'microbiology', or 'environmental news'. If left empty, the domain will be inferred from the input data.
 
@@ -56,15 +54,15 @@ graphrag prompt-tune [--root ROOT] [--config CONFIG] [--domain DOMAIN] [--selec
 ## Example Usage
 
 ```bash
-python -m graphrag prompt-tune --root /path/to/project --config /path/to/settings.yaml --domain "environmental news" \
+python -m graphrag prompt-tune --root /path/to/project --domain "environmental news" \
 --selection-method random --limit 10 --language English --max-tokens 2048 --chunk-size 256 --min-examples-required 3 \
 --no-discover-entity-types --output /path/to/output
 ```
 
 or, with minimal configuration (suggested):
 
 ```bash
-python -m graphrag prompt-tune --root /path/to/project --config /path/to/settings.yaml --no-discover-entity-types
+python -m graphrag prompt-tune --root /path/to/project --no-discover-entity-types
 ```
 
 ## Document Selection Methods

packages/graphrag-common/README.md

Lines changed: 68 additions & 0 deletions
@@ -48,4 +48,72 @@ single2 = factory.create("some_other_strategy", {"value": "ignored"})
 assert single1 is single2
 assert single1.get_value() == "singleton"
 assert single2.get_value() == "singleton"
+```
+
+## Config module
+
+```python
+from pydantic import BaseModel, Field
+from graphrag_common.config import load_config
+
+from pathlib import Path
+
+class Logging(BaseModel):
+    """Test nested model."""
+
+    directory: str = Field(default="output/logs")
+    filename: str = Field(default="logs.txt")
+
+class Config(BaseModel):
+    """Test configuration model."""
+
+    name: str = Field(description="Name field.")
+    logging: Logging = Field(description="Nested model field.")
+
+# Basic - by default:
+# - searches for Path.cwd() / settings.[yaml|yml|json]
+# - sets the CWD to the directory containing the config file.
+#   so if no custom config path is provided than CWD remains unchanged.
+# - loads config_directory/.env file
+# - parses ${env} in the config file
+config = load_config(Config)
+
+# Custom file location
+config = load_config(Config, "path_to_config_filename_or_directory_containing_settings.[yaml|yml|json]")
+
+# Using a custom file extension with
+# custom config parser (str) -> dict[str, Any]
+config = load_config(
+    config_initializer=Config,
+    config_path="config.toml",
+    config_parser=lambda contents: toml.loads(contents)  # Needs toml pypi package
+)
+
+# With overrides - provided values override whats in the config file
+# Only overrides what is specified - recursively merges settings.
+config = load_config(
+    config_initializer=Config,
+    overrides={
+        "name": "some name",
+        "logging": {
+            "filename": "my_logs.txt"
+        }
+    },
+)
+
+# By default, sets CWD to directory containing config file
+# So custom config paths will change the CWD.
+config = load_config(
+    config_initializer=Config,
+    config_path="some/path/to/config.yaml",
+    set_cwd=True  # default
+)
+
+# now cwd == some/path/to
+assert Path.cwd() == "some/path/to"
+
+# And now throughout the codebase resolving relative paths in config
+# will resolve relative to the config directory
+Path(config.logging.directory) == "some/path/to/output/logs"
+
 ```
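The `overrides` behavior documented in the new README ("Only overrides what is specified - recursively merges settings") amounts to a recursive dictionary merge before model validation. A minimal stdlib sketch of that merge semantics (an illustration of the idea, not the graphrag-common implementation):

```python
from typing import Any


def deep_merge(base: dict[str, Any], overrides: dict[str, Any]) -> dict[str, Any]:
    """Recursively merge overrides into base, returning a new dict.

    Nested dicts are merged key by key; any other value in overrides
    replaces the corresponding value in base. Neither input is mutated.
    """
    merged = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


# Mirrors the README example: only "logging.filename" is overridden,
# while "logging.directory" keeps its value from the config file.
file_settings = {
    "name": "from file",
    "logging": {"directory": "output/logs", "filename": "logs.txt"},
}
merged = deep_merge(
    file_settings,
    {"name": "some name", "logging": {"filename": "my_logs.txt"}},
)
print(merged["logging"])  # {'directory': 'output/logs', 'filename': 'my_logs.txt'}
```

A plain `dict.update` would replace the whole `logging` sub-dict and silently drop `directory`; the recursive merge is what lets an override specify a single nested field.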
