Commit 0b6df36

Add models page
1 parent b7b2b56 commit 0b6df36

4 files changed: +105 −7 lines

docs/config/init.md

Lines changed: 1 addition & 1 deletion
@@ -29,4 +29,4 @@ The `init` command will create the following files in the specified directory:
## Next Steps

- After initializing your workspace, you can either run the [Prompt Tuning](../prompt_tuning/auto_prompt_tuning.md) command to adapt the prompts to your data or even start running the [Indexing Pipeline](../index/overview.md) to index your data. For more information on configuring GraphRAG, see the [Configuration](overview.md) documentation.
+ After initializing your workspace, you can either run the [Prompt Tuning](../prompt_tuning/auto_prompt_tuning.md) command to adapt the prompts to your data or even start running the [Indexing Pipeline](../index/overview.md) to index your data. For more information on configuration options available, see the [yaml details page](yaml.md).

docs/config/models.md

Lines changed: 98 additions & 0 deletions
@@ -0,0 +1,98 @@
# Language Model Selection and Overriding
This page contains information on selecting a model to use and options to supply your own model for GraphRAG. Note that this is not a guide to finding the right model for your use case.
## Default Model Support
GraphRAG was built and tested using OpenAI models, so this is the default model set we support. This is not a limitation or a statement of quality or fitness for your use case; it is simply the set we are most familiar with for prompting, tuning, and debugging.

GraphRAG also utilizes fnllm, a language model wrapper library used by several projects within our team. fnllm provides two important functions for GraphRAG: rate limiting configuration to help us maximize throughput for large indexing jobs, and robust caching of API calls to minimize consumption on repeated indexes for testing, experimentation, or incremental ingest. fnllm uses the OpenAI Python SDK under the covers, so OpenAI-compliant endpoints are a base requirement out-of-the-box.
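For example, here is a minimal sketch of the throttling and cache settings in settings.yaml; the values are illustrative placeholder assumptions, and the exact field set for your version is documented on the [yaml details page](yaml.md):

```yaml
models:
  default_chat_model:
    type: openai_chat
    model: gpt-4o
    # fnllm rate limiting: tune these placeholder values to your endpoint's quota
    tokens_per_minute: 150000
    requests_per_minute: 50
    concurrent_requests: 25

# cache LLM responses so repeated indexing runs minimize API consumption
cache:
  type: file
  base_dir: "cache"
```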
## Model Selection Considerations

GraphRAG has been most thoroughly tested with the gpt-4 series of models from OpenAI, including gpt-4, gpt-4-turbo, gpt-4o, and gpt-4o-mini. Our [arXiv paper](https://arxiv.org/abs/2404.16130), for example, performed quality evaluation using gpt-4-turbo.

Versions of GraphRAG before 2.2.0 made extensive use of `max_tokens` and `logit_bias` to control generated response length or content. The introduction of the o-series models added new, incompatible parameters, because these models include a reasoning component with different consumption patterns and response generation attributes than non-reasoning models. GraphRAG 2.2.0 now supports these models, but there are important differences to understand before you switch.
- Previously, GraphRAG used `max_tokens` to limit responses in a few locations. This was done so that we could have predictable content sizes when building downstream context windows for summarization. We have now switched from `max_tokens` to a prompted approach, which is working well in our tests. We suggest using `max_tokens` in your language model config only for budgetary reasons if you want to limit consumption, not for expected response length control. We also now support the o-series equivalent `max_completion_tokens`, but keep in mind that there may be some unknown fixed amount of reasoning consumption in addition to the response tokens, so it is not a good technique for response control either (see the sketch after this list).
- Previously, GraphRAG used a combination of `max_tokens` and `logit_bias` to strictly control a binary yes/no question during gleanings. This is not possible with reasoning models, so again we have switched to a prompted approach. Our tests with gpt-4o, gpt-4o-mini, and o1 show that this works consistently, but it could have issues with an older or smaller model.
- The o-series models are much slower and more expensive. It may be useful to take an asymmetric approach to model use in your config: you can define as many models as you like in the `models` block of your settings.yaml and reference them by key for every workflow that requires a language model. You could use gpt-4o for indexing and o1 for query, for example. Experiment to find the right balance of cost, speed, and quality for your use case.
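A minimal sketch of the budget-style limits described in the first bullet above; the numbers are placeholder assumptions, not recommendations:

```yaml
models:
  default_chat_model:
    type: openai_chat
    model: gpt-4o
    max_tokens: 4000 # budget cap only; shape expected response length with prompts instead
  reasoning_chat_model:
    type: openai_chat
    model: o1
    max_completion_tokens: 8000 # also absorbs hidden reasoning tokens, so not a reliable response cap
```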
Example config with asymmetric model use:
```yaml
models:
  default_chat_model:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_chat
    auth_type: api_key
    model: gpt-4o
    model_supports_json: true
  query_chat_model:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_chat
    auth_type: api_key
    model: o1
    model_supports_json: true

...

extract_graph:
  model_id: default_chat_model
  prompt: "prompts/extract_graph.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1

...

global_search:
  chat_model_id: query_chat_model
  map_prompt: "prompts/global_search_map_system_prompt.txt"
  reduce_prompt: "prompts/global_search_reduce_system_prompt.txt"
  knowledge_prompt: "prompts/global_search_knowledge_system_prompt.txt"
```
Another option would be to avoid using a language model at all for the graph extraction, instead using the `fast` [indexing method](../index/methods.md) that uses NLP for portions of the indexing phase in lieu of LLM APIs.
## Using Non-OpenAI Models
As noted above, our primary experience and focus has been on OpenAI models, so this is what is supported out-of-the-box. Many users have requested support for additional model types, but handling the many models available today is beyond the scope of our research. There are two approaches you can use to connect to a non-OpenAI model:
### Proxy APIs
Many users have used platforms such as [ollama](https://ollama.com/) to proxy the underlying model HTTP calls to a different model provider. This seems to work reasonably well, but we frequently see issues with malformed responses (especially JSON), so if you do this, please understand that your model needs to reliably return the specific response formats that GraphRAG expects. If you're having trouble with a model, you may need to adjust your prompts to coax the format, or intercept the response within your proxy to handle malformed output.
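For illustration, a hedged sketch of pointing a model entry at a local OpenAI-compatible proxy; the endpoint URL and model name are assumptions for an ollama-style setup, so adjust them to your provider:

```yaml
models:
  default_chat_model:
    type: openai_chat
    api_base: http://localhost:11434/v1 # assumed ollama-style OpenAI-compatible endpoint
    api_key: unused-local-key # many local proxies ignore the key, but one may still be required
    model: llama3 # assumed locally hosted model name
    model_supports_json: true # only set if your model reliably emits valid JSON
```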
### Model Protocol
As of GraphRAG 2.0.0, we support model injection through the use of a standard chat and embedding Protocol and an accompanying ModelFactory that you can use to register your model implementation. This is not supported with the CLI, so you'll need to use GraphRAG as a library.
- Our Protocol is [defined here](https://github.com/microsoft/graphrag/blob/main/graphrag/language_model/protocol/base.py)
- Our base implementation, which wraps fnllm, [is here](https://github.com/microsoft/graphrag/blob/main/graphrag/language_model/providers/fnllm/models.py)
- We have a simple mock implementation in our tests that you can [reference here](https://github.com/microsoft/graphrag/blob/main/tests/mock_provider.py)

Once you have a model implementation, you need to register it with our ModelFactory:
```python
# Import path is an assumption based on the repo layout linked above; check your installed version.
from graphrag.language_model.factory import ModelFactory

class MyCustomModel:
    ...
    # implement the chat Protocol methods defined in protocol/base.py

# elsewhere...
ModelFactory.register_chat("my-custom-chat-model", lambda **kwargs: MyCustomModel(**kwargs))
```
Then in your config you can reference the type name you used:
```yaml
models:
  default_chat_model:
    type: my-custom-chat-model

extract_graph:
  model_id: default_chat_model
  prompt: "prompts/extract_graph.txt"
  entity_types: [organization,person,geo,event]
  max_gleanings: 1
```

docs/config/overview.md

Lines changed: 3 additions & 3 deletions
@@ -4,8 +4,8 @@ The GraphRAG system is highly configurable. This page provides an overview of th
## Default Configuration Mode

- The default configuration mode is the simplest way to get started with the GraphRAG system. It is designed to work out-of-the-box with minimal configuration. The primary configuration sections for the Indexing Engine pipelines are described below. The main ways to set up GraphRAG in Default Configuration mode are via:
+ The default configuration mode is the simplest way to get started with the GraphRAG system. It is designed to work out-of-the-box with minimal configuration. The main ways to set up GraphRAG in Default Configuration mode are via:

- - [Init command](init.md) (recommended)
- - [Using YAML for deeper control](yaml.md)
+ - [Init command](init.md) (recommended first step)
+ - [Edit settings.yaml for deeper control](yaml.md)
  - [Purely using environment variables](env_vars.md) (not recommended)

mkdocs.yaml

Lines changed: 3 additions & 3 deletions
@@ -27,8 +27,8 @@ nav:
  - Development Guide: developing.md
  - Indexing:
      - Overview: "index/overview.md"
-     - Architecture: "index/architecture.md"
      - Dataflow: "index/default_dataflow.md"
+     - Methods: "index/methods.md"
      - Inputs: "index/inputs.md"
      - Outputs: "index/outputs.md"
  - Prompt Tuning:
@@ -49,8 +49,8 @@ nav:
  - Configuration:
      - Overview: "config/overview.md"
      - Init Command: "config/init.md"
-     - Using YAML: "config/yaml.md"
-     - Using Env Vars: "config/env_vars.md"
+     - Detailed Configuration: "config/yaml.md"
+     - Language Model Selection: "config/models.md"
  - CLI: "cli.md"
  - Extras:
      - Microsoft Research Blog: "blog_posts.md"
