Several different iterations are presented and code provided for:
- **Iteration 2:** A brief description of the available entities is injected into the prompt. This limits the number of tokens used and avoids filling the prompt with confusing schema information.
- **Iteration 3:** Indexing the entity definitions in a vector database, such as AI Search, and querying it to retrieve the most relevant entities for the key terms from the query.
- **Iteration 4:** Keeping an index of commonly asked questions and which schema / SQL query they resolve to; this index is generated by the LLM when it encounters a question that has not been previously asked. Additionally, indexing the entity definitions in a vector database, such as AI Search _(same as Iteration 3)_. This question index is queried first to see if a similar SQL query can be obtained _(if there is a high probability of an exact SQL query match, the results can be pre-fetched)_. If not, the system falls back to the schema index and queries it to retrieve the most relevant entities for the key terms from the query.
- **Iteration 5:** Moves the Iteration 4 approach into a multi-agent system for improved reasoning and query generation. With separation into agents, each agent can focus on a single task, providing a better overall flow and response quality. See more details below.
All approaches limit the number of tokens used and avoid filling the prompt with confusing schema information.
For the query cache enabled approach, AI Search is used as a vector based cache, but any other cache that supports vector queries, such as Redis, could be used.
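To make the cache-first flow concrete, here is a minimal sketch of the lookup-then-fallback logic. The callables, the `CacheHit` type and the 0.95 threshold are illustrative assumptions, not the repository's implementation:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CacheHit:
    sql_query: str
    score: float

def answer_question(
    question: str,
    cache_lookup: Callable[[str], Optional[CacheHit]],  # vector search over past questions
    schema_lookup: Callable[[str], list],               # vector search over entity definitions
    generate_sql: Callable[[str, list], str],           # LLM-based SQL generation
    run_sql: Callable[[str], str],
    cache_store: Callable[[str, str], None],
    threshold: float = 0.95,
) -> str:
    hit = cache_lookup(question)
    if hit is not None and hit.score >= threshold:
        # High probability of an exact match: pre-fetch the results directly.
        return run_sql(hit.sql_query)

    # Fall back to the schema index and generate a fresh query.
    schemas = schema_lookup(question)
    sql_query = generate_sql(question, schemas)
    results = run_sql(sql_query)

    # Update the cache so the next similar question takes the fast path.
    cache_store(question, sql_query)
    return results
```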
### Full Logical Flow for Agentic Vector Based Approach
The following diagram shows the logical flow within the multi-agent system. In an ideal scenario, the questions will follow the **Pre-Fetched Cache Results Path**, which leads to the quickest answer generation. In cases where the question is not known, the group chat selector will fall back to the other agents accordingly and generate the SQL query using the LLMs. The cache is then updated with the newly generated query and schemas.
Unlike the previous approaches, **gpt4o-mini** can be used, as each agent's prompt is small and focused on a single, simple task.
As the query cache is shared between users (no data is stored in the cache), a new user can benefit from the pre-mapped question and schema resolution in the index.
**Database results were deliberately not stored within the cache. Storing them would have removed one of the key benefits of the Text2SQL plugin: the ability to get near-real time information inside a RAG application. Instead, the query is stored so that the most recent results can be obtained quickly. Additionally, this retains the ability to apply Row or Column Level Security.**

### Caching Strategy
The cache strategy implementation is a simple way to prove that the system works.
- **Always update:** Always add all questions into the cache when they are asked. The sample code in the repository currently implements this approach, but this could lead to poor SQL queries reaching the cache. One of the other caching strategies would be a better choice for a production version.
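As a rough illustration of the always update strategy, the sketch below writes every asked question straight into an AI Search cache index. The index name, document fields and key generation are hypothetical; the real index schema lives in the repository's deployment assets:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Hypothetical cache index; replace endpoint, index name and key with real values.
search_client = SearchClient(
    endpoint="https://<search-service>.search.windows.net",
    index_name="text-2-sql-query-cache-index",
    credential=AzureKeyCredential("<api-key>"),
)

def add_to_query_cache(question: str, sql_query: str) -> None:
    # "Always update": every question is cached, even if the generated SQL
    # was poor, which is why this strategy is not recommended for production.
    search_client.upload_documents(documents=[{
        "Id": str(abs(hash(question))),  # illustrative key generation
        "Question": question,
        "SqlQuery": sql_query,
    }])
```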
### Comparison of Iterations

|| Common Text2SQL Approach | Prompt Based Multi-Shot Text2SQL Approach | Vector Based Multi-Shot Text2SQL Approach | Vector Based Multi-Shot Text2SQL Approach With Query Cache | Agentic Vector Based Multi-Shot Text2SQL Approach With Query Cache |
|-|-|-|-|-|-|
|**Advantages** | Fast for a limited number of entities. | Significant reduction in token usage. | Significant reduction in token usage. | Significant reduction in token usage. | Significant reduction in token usage. |
|||| Scales well to multiple entities. | Scales well to multiple entities. | Scales well to multiple entities with small agents. |
|||| Uses a vector approach to detect the best fitting entity, which is faster than using an LLM. Matching is offloaded to AI Search. | Uses a vector approach to detect the best fitting entity, which is faster than using an LLM. Matching is offloaded to AI Search. | Uses a vector approach to detect the best fitting entity, which is faster than using an LLM. Matching is offloaded to AI Search. |
||||| Significantly faster to answer similar questions, as best fitting entity detection is skipped. Observed tests resulted in almost half the time for final output compared to the previous iteration. | Significantly faster to answer similar questions, as best fitting entity detection is skipped. Observed tests resulted in almost half the time for final output compared to the previous iteration. |
||||| Significantly faster execution time for known questions. Total execution time can be reduced by skipping the query generation step. | Significantly faster execution time for known questions. Total execution time can be reduced by skipping the query generation step. |
|||||| Instruction following and accuracy is improved by decomposing the task into smaller tasks. |
|||||| Handles query decomposition for complex questions. |
|**Disadvantages**| Slows down significantly as the number of entities increases. | Uses an LLM to detect the best fitting entity, which is slow compared to a vector approach. | AI Search adds additional cost to the solution. | Slower than other approaches the first time a question with no similar cached questions is asked. | Slower than other approaches the first time a question with no similar cached questions is asked. |
|| Consumes a significant number of tokens as the number of entities increases. | As the number of entities increases, token usage will grow, but at a lesser rate than Iteration 1. || AI Search adds additional cost to the solution. | AI Search and multiple agents add additional cost to the solution. |
|| The LLM struggled to differentiate which table to choose with the large amount of information passed. |||||
### Complete Execution Time Comparison for Approaches
The following environmental variables control the behaviour of the Vector Based approaches:

- **Text2Sql__UseQueryCache** - controls whether the query cache index is checked before using the standard schema index.
- **Text2Sql__PreRunQueryCache** - controls whether the top result from the query cache index (if enabled) is pre-fetched against the data source to include the results in the prompt.
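A minimal sketch of how these two flags could be read and combined; the parsing helper is illustrative, and only the variable names come from this README:

```python
import os

def _env_flag(name: str, default: str = "False") -> bool:
    # Treat "true"/"1"/"yes" (any casing) as enabled.
    return os.environ.get(name, default).strip().lower() in ("true", "1", "yes")

USE_QUERY_CACHE = _env_flag("Text2Sql__UseQueryCache")
PRE_RUN_QUERY_CACHE = _env_flag("Text2Sql__PreRunQueryCache")

if USE_QUERY_CACHE:
    # Check the query cache index before the standard schema index...
    if PRE_RUN_QUERY_CACHE:
        # ...and pre-fetch the top cached query's results into the prompt.
        pass
```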
## Agentic Vector Based Approach (Iteration 5)
This approach builds on the Vector Based SQL Plugin approach, but adds an agentic approach to the solution.
This agentic system contains the following agents:

- **Query Cache Agent:** Responsible for checking the cache for previously asked questions.
- **Query Decomposition Agent:** Responsible for decomposing complex questions into sub-questions that can be answered with SQL.
- **Schema Selection Agent:** Responsible for extracting key terms from the question and checking the index store for the most relevant schemas.
- **SQL Query Generation Agent:** Responsible for using the previously extracted schemas and generated SQL queries to answer the question. This agent can request more schemas if needed, and will run the query.
- **SQL Query Verification Agent:** Responsible for verifying that the SQL query and its results will answer the question.
- **Answer Generation Agent:** Responsible for taking the database results and generating the final answer for the user.
The combination of these agents allows the system to answer complex questions, whilst staying under the token limits when including the database schemas. The query cache ensures that previously asked questions can be answered quickly, to avoid degrading the user experience.
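The ideal-path flow described above can be summarised as a simple transition table. This is purely illustrative; the actual orchestration is handled by AutoGen's Selector Group Chat, described in the `agentic_text_2_sql.py` section below:

```python
# Illustrative agent-to-agent transitions; "cache_hit" takes the
# Pre-Fetched Cache Results Path straight to answer generation.
NEXT_AGENT = {
    "start": "query_cache_agent",
    "cache_hit": "answer_generation_agent",
    "cache_miss": "query_decomposition_agent",
    "query_decomposition_agent": "schema_selection_agent",
    "schema_selection_agent": "sql_query_generation_agent",
    "sql_query_generation_agent": "sql_query_verification_agent",
    "sql_query_verification_agent": "answer_generation_agent",
}
```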
## Code Availability
Very much still work in progress, more documentation coming soon.
The implementation is written for [AutoGen](https://github.com/microsoft/autogen) in Python, although it can easily be adapted for C#.
**Still work in progress, expect a lot of updates shortly**
**The provided AutoGen code only implements Iteration 5 (Agentic Approach)**
## Provided Notebooks & Scripts
- `./agentic_text_2_sql.ipynb` provides an example of how to utilise the Agentic Vector Based Text2SQL approach to query the database. The query cache plugin will be enabled or disabled depending on the environmental parameters.
## Agents
All agents can be found in `/agents/`. The agents and their responsibilities are described in the Agentic Vector Based Approach (Iteration 5) section above.
## agentic_text_2_sql.py
This is the main entry point for the agentic system. Here, the `Selector Group Chat` is configured with the termination conditions to orchestrate the agents within the system.
A custom transition selector is used to automatically transition between agents, depending on the last agent that ran. In some cases, this choice is delegated to an LLM to decide on the most appropriate action. This mixed approach allows for speed when needed (e.g. always calling the Query Cache Agent first), whilst still allowing the system to react dynamically to events.
## Utils
### ai-search.py
This util file contains helper functions for interacting with AI Search.
### llm_agent_creator.py
This util file creates the agents in the AutoGen framework based on the configuration files.
### models.py
This util file creates the model connections to Azure OpenAI for the agents.
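A minimal sketch of such a model connection, assuming the `autogen_ext` Azure OpenAI client from recent AutoGen releases; the deployment, endpoint, API version and key values are placeholders:

```python
from autogen_ext.models.openai import AzureOpenAIChatCompletionClient

def create_model_client() -> AzureOpenAIChatCompletionClient:
    # gpt4o-mini is sufficient here, as each agent's prompt is small
    # and focused on a single task.
    return AzureOpenAIChatCompletionClient(
        model="gpt-4o-mini",
        azure_deployment="<deployment-name>",
        azure_endpoint="https://<resource>.openai.azure.com",
        api_version="2024-06-01",
        api_key="<api-key>",
    )
```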
### sql.py
#### get_entity_schema()
This method is called by the AutoGen framework automatically, when instructed to do so by the LLM, to search the AI Search instance with the given text. The LLM is able to pass the key terms from the user query, and retrieve a ranked list of the most suitable entities to answer the question.
The search text passed is vectorised against the entity level **Description** columns. A hybrid Semantic Reranking search is applied against the **EntityName**, **Entity**, **Columns/Name** fields.
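A sketch of what that hybrid search could look like with the `azure-search-documents` SDK. The index name, embedding field and semantic configuration name are assumptions; only the searched fields (**EntityName**, **Entity**, **Columns/Name**) come from this README:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizableTextQuery

search_client = SearchClient(
    endpoint="https://<search-service>.search.windows.net",
    index_name="text-2-sql-schema-index",  # hypothetical index name
    credential=AzureKeyCredential("<api-key>"),
)

def get_entity_schema(text: str, top: int = 3) -> list[dict]:
    # Hybrid search: keyword matching plus a vector query against the
    # Description embedding, followed by semantic reranking.
    results = search_client.search(
        search_text=text,
        vector_queries=[
            VectorizableTextQuery(
                text=text,
                k_nearest_neighbors=top,
                fields="DescriptionEmbedding",  # hypothetical vector field
            )
        ],
        query_type="semantic",
        semantic_configuration_name="schema-semantic-config",  # hypothetical
        search_fields=["EntityName", "Entity", "Columns/Name"],
        top=top,
    )
    return [dict(result) for result in results]
```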
#### fetch_queries_from_cache()
The vector based approach with query cache uses the `fetch_queries_from_cache()` method to fetch the most relevant previous query and inject it into the prompt before the initial LLM call. Auto-Function Calling is deliberately avoided here to reduce the response time, as the cache index will always be used first.
If the score of the top result is higher than the defined threshold, the query will be executed against the target data source and the results included in the prompt. This allows us to prompt the LLM to evaluate whether it can use these results to answer the question, **without further SQL Query generation**, to speed up the process.
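The threshold logic might look roughly like the sketch below. The `@search.reranker_score` key is AI Search's semantic reranking score on each result; the threshold value, field names and injected `run_sql` callable are illustrative:

```python
from typing import Callable

def build_cache_prompt_snippet(
    results: list[dict],
    run_sql: Callable[[str], str],
    threshold: float = 2.5,  # illustrative reranker score threshold
) -> str:
    if not results:
        return "No similar cached queries were found."

    top = results[0]
    if top["@search.reranker_score"] >= threshold:
        # High-confidence match: pre-run the cached query so the LLM can
        # answer from the results without generating any new SQL.
        rows = run_sql(top["SqlQuery"])
        return f"Pre-fetched results for a matching cached query:\n{rows}"

    # Lower-confidence matches are injected only as worked examples.
    examples = "\n".join(result["SqlQuery"] for result in results[:3])
    return f"Previously generated queries for similar questions:\n{examples}"
```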
#### run_sql_query()
This method is called by the AutoGen framework automatically, when instructed to do so by the LLM, to run a SQL query against the given database. It returns a JSON string containing a row-wise dump of the results. These results are then interpreted to answer the question.
Additionally, if any of the cache functionality is enabled, this method will update the query cache index based on the SQL query run, and the schemas used in execution.
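A synchronous sketch of the query execution and row-wise JSON dump, using `pyodbc` (an appropriate ODBC driver is assumed to be installed); the connection string environment variable name is also an assumption:

```python
import json
import os

import pyodbc

def run_sql_query(sql_query: str) -> str:
    # Connection string sourced from the environment (variable name assumed).
    connection = pyodbc.connect(os.environ["Text2Sql__DatabaseConnectionString"])
    try:
        cursor = connection.cursor()
        cursor.execute(sql_query)
        columns = [column[0] for column in cursor.description]
        # Row-wise dump: one dictionary per returned row.
        rows = [dict(zip(columns, row)) for row in cursor.fetchall()]
    finally:
        connection.close()
    return json.dumps(rows, default=str)
```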