AIService__AzureSearchOptions__Identity__FQName=<fully qualified name of the identity if using user assigned identity>
StorageAccount__FQEndpoint=<fully qualified endpoint in the form ResourceId=resourceId if using identity based connections>
StorageAccount__ConnectionString=<connectionString if using non managed identity. In format: DefaultEndpointsProtocol=https;AccountName=<STG NAME>;AccountKey=<ACCOUNT KEY>;EndpointSuffix=core.windows.net>
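As a rough illustration of how these settings might be consumed, the sketch below reads the storage settings from the environment and falls back to a managed identity when no connection string is set. It assumes the `azure-identity` and `azure-storage-blob` packages; the account URL is a placeholder, not a value defined in this repository.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Prefer the key based connection string when it is provided; otherwise fall
# back to an identity based connection (system or user assigned identity).
connection_string = os.environ.get("StorageAccount__ConnectionString")

if connection_string:
    blob_service = BlobServiceClient.from_connection_string(connection_string)
else:
    # Placeholder endpoint; derive it from your storage account name.
    account_url = "https://<STG NAME>.blob.core.windows.net"
    blob_service = BlobServiceClient(account_url, credential=DefaultAzureCredential())
```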

`deploy_ai_search/README.md` (3 additions, 3 deletions)

This portion of the repository contains pre-built scripts to deploy the skillset with Azure Document Intelligence.

## Steps for Rag Documents Index Deployment (For Unstructured RAG)

1. Update the `.env` file with the associated values. Not all values are required; this depends on whether you are using System / User Assigned Identities or key based authentication.
2. Adjust `rag_documents.py` with any changes to the index / indexer. The `get_skills()` method implements the skills pipeline. Make any adjustments here to the skills needed to enrich the data source; a sketch of one such skill definition is shown after the argument list below.

- `rebuild`. Whether to delete and rebuild the index.
- `suffix`. Optional parameter that will apply a suffix onto the deployed index and indexer. This is useful if you want to deploy a test version before overwriting the main version.
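The following is a hedged sketch of the kind of skill that `get_skills()` could add to the pipeline, using the `azure-search-documents` index models. The field mappings and skill parameters are illustrative assumptions rather than the repository's actual configuration.

```python
from azure.search.documents.indexes.models import (
    InputFieldMappingEntry,
    OutputFieldMappingEntry,
    SplitSkill,
)

# Example text split skill that chunks the extracted document content into
# pages before enrichment; the source/target paths are assumptions.
text_split_skill = SplitSkill(
    description="Split the extracted content into chunks for embedding",
    context="/document",
    text_split_mode="pages",
    maximum_page_length=2000,
    inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
    outputs=[OutputFieldMappingEntry(name="textItems", target_name="pages")],
)

# A skill like this would be appended to the list returned by get_skills().
```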

## Steps for Text2SQL Index Deployment (For Structured RAG)

### Schema Store Index

### Query Cache Index

1. Update the `.env` file with the associated values. Not all values are required; this depends on whether you are using System / User Assigned Identities or key based authentication.
2. Adjust `text_2_sql_query_cache.py` with any changes to the index. **There is no provided indexer or skillset for this cache; it is expected that application code will write directly to it (a sketch of such a write follows the steps below). See the Text2SQL README for the different cache strategies.**
3. Run `deploy.py` with the following args:
   - `index_type text_2_sql_query_cache`. This selects the `Text2SQLQueryCacheAISearch` subclass.
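The following is a hedged sketch of how application code might write an entry directly to the deployed query cache index. The index name, field names (`Question`, `SqlQuery`, `QuestionEmbedding`), and endpoint are illustrative assumptions, not the actual schema deployed by `text_2_sql_query_cache.py`.

```python
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient

# Connect to the query cache index (names are placeholders).
cache_client = SearchClient(
    endpoint="https://<search-service>.search.windows.net",
    index_name="text-2-sql-query-cache-index",
    credential=DefaultAzureCredential(),
)

cache_entry = {
    "Id": "question-0001",
    "Question": "How many orders were placed in 2008?",
    "SqlQuery": "SELECT COUNT(*) FROM SalesLT.SalesOrderHeader WHERE YEAR(OrderDate) = 2008;",
    # In practice this would be the embedding vector for the question text.
    "QuestionEmbedding": [0.0] * 1536,
}

cache_client.upload_documents(documents=[cache_entry])
```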

`text_2_sql/README.md` (64 additions, 13 deletions)

- More tables / views significantly increase the number of tokens used within the prompt and the cost of inference.
- More schema information can cause confusion with the LLM. In our original use case, when exceeding 5 complex tables / views, we found that the LLM could get confused about which columns belonged to which entity and, as such, would generate invalid SQL queries.
- Entity relationships between different tables are challenging for the LLM to understand.

To solve these issues, a Multi-Shot approach was developed. Below are the iterations of development on the Text2SQL query component.
Using Auto-Function calling capabilities, the LLM is able to retrieve from the plugin the full schema information for the views / tables that it considers useful for answering the question. Once retrieved, the full SQL query can then be generated. The schemas for multiple views / tables can be retrieved to allow the LLM to perform joins and other complex queries.

To improve the scalability and accuracy of SQL query generation, the entity relationships within the database are stored within the vector store. This allows the LLM to use the **entity relationship graph** to navigate complex joins across the system. See `./data_dictionary` for more details.
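As a rough sketch of what such a plugin function could look like (assuming a Semantic Kernel style plugin; the class, method, and index details are illustrative assumptions rather than the repository's actual implementation):

```python
from typing import Annotated

from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient
from semantic_kernel.functions import kernel_function


class SchemaStorePlugin:
    """Plugin the LLM can call, via auto-function calling, to fetch entity schemas."""

    def __init__(self, endpoint: str, index_name: str):
        self._client = SearchClient(
            endpoint=endpoint,
            index_name=index_name,
            credential=DefaultAzureCredential(),
        )

    @kernel_function(description="Get the schemas of entities relevant to the user's question.")
    def get_entity_schemas(
        self, question: Annotated[str, "The user's natural language question."]
    ) -> str:
        # A plain text search is used here for brevity; the schema store index
        # could equally be queried with a vector or semantic query.
        results = self._client.search(search_text=question, top=3)
        return "\n\n".join(str(dict(result)) for result in results)
```
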
For the query cache enabled approach, AI Search is used as a vector based cache, but any other cache that supports vector queries could be used, such as Redis.

### Full Logical Flow for Vector Based Approach

### Caching Strategy

The cache strategy implementation is a simple way to prove that the system works. You can adopt several different strategies for cache population; below are some of the strategies that could be used, followed by a minimal lookup-and-populate sketch:

- **Pre-population:** Run an offline pipeline to generate SQL queries for the known questions that you expect from the user, to prevent a 'cold start' problem.
- **Chat History Management Pipeline:** Run a real-time pipeline that logs the chat history to a database. Within this pipeline, analyse questions that are Text2SQL and generate the cache entry.
- **Positive Indication System:** Only update the cache when a user positively reacts to a question, e.g. a thumbs up from the UI, or doesn't ask a follow up question.
- **Always update:** Always add all questions into the cache when they are asked. The sample code in the repository currently implements this approach, but this could lead to poor SQL queries reaching the cache. One of the other caching strategies would make a better production version.
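The sketch below illustrates the 'always update' flow under stated assumptions: the cache index fields (`Question`, `SqlQuery`, `QuestionEmbedding`), the similarity threshold, and the `embed` / `generate_sql` helpers are all hypothetical, not the repository's actual implementation.

```python
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery


def answer_with_cache(question: str, cache_client: SearchClient, embed, generate_sql) -> str:
    """Return a SQL query for the question, consulting the vector cache first."""
    embedding = embed(question)  # hypothetical embedding helper

    vector_query = VectorizedQuery(
        vector=embedding, k_nearest_neighbors=1, fields="QuestionEmbedding"
    )
    results = list(
        cache_client.search(search_text=None, vector_queries=[vector_query], top=1)
    )

    # Reuse the cached SQL on a sufficiently close match (threshold is an assumption).
    if results and results[0]["@search.score"] >= 0.95:
        return results[0]["SqlQuery"]

    # Cache miss: generate the SQL with the LLM and always write it back.
    sql_query = generate_sql(question)  # hypothetical LLM generation helper
    cache_client.merge_or_upload_documents(
        documents=[
            {
                "Id": str(abs(hash(question))),
                "Question": question,
                "SqlQuery": sql_query,
                "QuestionEmbedding": embedding,
            }
        ]
    )
    return sql_query
```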

### Comparison of Iterations

|| Common Text2SQL Approach | Prompt Based Multi-Shot Text2SQL Approach | Vector Based Multi-Shot Text2SQL Approach | Vector Based Multi-Shot Text2SQL Approach With Query Cache |
|-|-|-|-|-|

Below is a sample entry for a view / table that we wish to expose to the LLM.

```json
{
    "Entity": "SalesLT.SalesOrderDetail",
    "Definition": "The SalesLT.SalesOrderDetail entity contains detailed information about individual items within sales orders. This entity includes data on the sales order ID, the specific details of each order item such as quantity, product ID, unit price, and any discounts applied. It also includes calculated fields such as the line total for each order item. This entity can be used to answer questions related to the specifics of sales transactions, such as which products were purchased in each order, the quantity of each product ordered, and the total price of each order item.",
    "EntityName": "Sales Line Items Information",
    "Database": "AdventureWorksLT",
    "Warehouse": null,
    "EntityRelationships": [
        {
            "ForeignEntity": "SalesLT.Product",
            "ForeignKeys": [
                {
                    "Column": "ProductID",
                    "ForeignColumn": "ProductID"
                }
            ]
        }
    ],
    "Columns": [
        {
            "Definition": "The SalesOrderID column in the SalesLT.SalesOrderDetail entity contains unique numerical identifiers for each sales order. Each value represents a specific sales order, ensuring that each order can be individually referenced and tracked. The values are in a sequential numeric format, indicating the progression and uniqueness of each sales transaction within the database.",
            "AllowedValues": null,
            "SampleValues": [
                71938,
                71784,
                71935,
                71923,
                71946
            ]
        },
        {
            "Name": "SalesOrderDetailID",
            "DataType": "int",
            "Definition": "The SalesOrderDetailID column in the SalesLT.SalesOrderDetail entity contains unique identifier values for each sales order detail record. The values are numeric and are used to distinguish each order detail entry within the database. These identifiers are essential for maintaining data integrity and enabling efficient querying and data manipulation within the sales order system.",
            "AllowedValues": null,
            "SampleValues": [
                110735,
                113231,
                110686,
                113257,
                113307
            ]
        }
    ]
}
```

`text_2_sql/data_dictionary/README.md` (56 additions, 15 deletions)

Below is a sample entry for a view / table that we wish to expose to the LLM.

```json
{
    "Entity": "SalesLT.SalesOrderDetail",
    "Definition": "The SalesLT.SalesOrderDetail entity contains detailed information about individual items within sales orders. This entity includes data on the sales order ID, the specific details of each order item such as quantity, product ID, unit price, and any discounts applied. It also includes calculated fields such as the line total for each order item. This entity can be used to answer questions related to the specifics of sales transactions, such as which products were purchased in each order, the quantity of each product ordered, and the total price of each order item.",
    "EntityName": "Sales Line Items Information",
    "Database": "AdventureWorksLT",
    "Warehouse": null,
    "EntityRelationships": [
        {
            "ForeignEntity": "SalesLT.Product",
            "ForeignKeys": [
                {
                    "Column": "ProductID",
                    "ForeignColumn": "ProductID"
                }
            ]
        }
    ],
    "Columns": [
        {
            "Definition": "The SalesOrderID column in the SalesLT.SalesOrderDetail entity contains unique numerical identifiers for each sales order. Each value represents a specific sales order, ensuring that each order can be individually referenced and tracked. The values are in a sequential numeric format, indicating the progression and uniqueness of each sales transaction within the database.",
            "AllowedValues": null,
            "SampleValues": [
                71938,
                71784,
                71935,
                71923,
                71946
            ]
        },
        {
            "Name": "SalesOrderDetailID",
            "DataType": "int",
            "Definition": "The SalesOrderDetailID column in the SalesLT.SalesOrderDetail entity contains unique identifier values for each sales order detail record. The values are numeric and are used to distinguish each order detail entry within the database. These identifiers are essential for maintaining data integrity and enabling efficient querying and data manipulation within the sales order system.",
            "AllowedValues": null,
            "SampleValues": [
                110735,
                113231,
                110686,
                113257,
                113307
            ]
        }
    ]
}
```

## Property Definitions

- **EntityName** is a human readable name for the entity.
- **Entity** is the actual name for the entity that is used in the SQL query.
- **Definition** provides a comprehensive description of what information the entity contains.
- **Columns** contains a list of the columns exposed for querying. Each column contains:
  - **Definition** is a short definition of what information the column contains. Here you can add extra metadata to **prompt engineer** the LLM to select the right columns or interpret the data in the column correctly.
  - **Name** is the actual column name.
  - **DataType** is the datatype for the column.
  - **SampleValues (optional)** is a list of sample values that are in the column. This is useful for instructing the LLM of what format the data may be in.
  - **AllowedValues (optional)** is a list of absolute allowed values for the column. This instructs the LLM only to use these values if filtering against this column.
- **EntityRelationships** contains a mapping of the immediate relationships to this entity, including details of the foreign keys to join against.
- **CompleteEntityRelationshipsGraph** contains a directed graph of how this entity relates to all others in the database. The LLM can use this to work out the joins to make.
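As a minimal sketch of working with such an entry in application code, the dataclasses below mirror the properties described above and load a single entry from a JSON file. The file path and the exact set of optional fields are illustrative assumptions.

```python
import json
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class ColumnDefinition:
    Name: str
    DataType: str
    Definition: str
    SampleValues: Optional[list[Any]] = None
    AllowedValues: Optional[list[Any]] = None


@dataclass
class EntityDefinition:
    Entity: str
    EntityName: str
    Definition: str
    Database: Optional[str] = None
    Warehouse: Optional[str] = None
    Columns: list[ColumnDefinition] = field(default_factory=list)
    EntityRelationships: list[dict] = field(default_factory=list)
    CompleteEntityRelationshipsGraph: Optional[dict] = None


def load_entity(path: str) -> EntityDefinition:
    """Load one data dictionary entry from a JSON file into typed objects."""
    with open(path, encoding="utf-8") as handle:
        raw = json.load(handle)
    columns = [ColumnDefinition(**column) for column in raw.pop("Columns", [])]
    known = {k: v for k, v in raw.items() if k in EntityDefinition.__dataclass_fields__}
    return EntityDefinition(Columns=columns, **known)
```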

A full data dictionary must be built for all the views / tables you wish to expose to the LLM. The metadata provided directly influences the accuracy of the Text2SQL component.