Commit 1505d8c

committed: incremental indexing
1 parent 2ea0ed3 commit 1505d8c
2 files changed, +158 -10 lines changed
---
title: Introduction to Incremental Indexing (preview) - Azure Search
description: Configure your AI enrichment pipeline to drive your data to eventual consistency and handle any updates to skills, skillsets, indexers, or data sources
manager: nitinme
author: Vkurpad
services: search
ms.service: search
ms.subservice: cognitive-search
ms.topic: overview
ms.date: 09/28/2019
ms.author: vikurpad
---
# What is incremental indexing in Azure Search?

> [!Note]
> Incremental indexing is in preview and not intended for production use. The [REST API version 2019-05-06-Preview](search-api-preview.md) provides this feature. There is no .NET SDK support at this time.

Incremental indexing is a feature of Azure Search that brings a declarative approach to indexing your data. Indexers in Azure Search add documents to your search index from a data source. They track updates to the documents in the data source and update the index with new or changed content, and they can run on a recurring schedule so that the data source and the index are eventually consistent. Incremental indexing extends change tracking from the data source alone to every aspect of the enrichment pipeline. With incremental indexing, the indexer drives your documents to eventual consistency with your data source, the current version of your skillset, and the indexer itself.
Indexers have a few key characteristics:

1. They are data source specific.
2. They are state aware.
3. They can be configured to drive eventual consistency between your data source and index.

With skillsets for AI-based enrichments in cognitive search, you were responsible for versioning your skillset and determining the best course of action when a skill was updated, added, or deleted. Adding or updating skills required you to either rerun all the skills on the entire corpus, essentially a reset on your indexer, or tolerate version drift, where different documents in your index were enriched with different versions of your skillset.

With the latest update to the preview release of the API (version 2019-05-06-Preview), indexer state management expands from only the data source and indexer field mappings to also include the skillset, output field mappings, and projections. Incremental indexing vastly improves the efficiency of your enrichment pipeline. It eliminates the choice between accepting the potentially large cost of re-enriching the entire corpus of documents when a skill is added or updated, and dealing with version drift, where documents created or updated with different versions of the skillset differ in the shape and/or quality of their enrichments.

Indexers now track and respond to changes across your enrichment pipeline by determining which skills have changed and selectively executing only the updated skills, and any downstream or dependent skills, when invoked.

By configuring incremental indexing, you ensure that all documents in your index are always processed with the most current version of your enrichment pipeline, while performing the least amount of work when responding to changes. Incremental indexing also gives you granular controls for scenarios where you want full control over how a change is handled.

## Indexer cache

Incremental indexing is made possible by the addition of an indexer cache to the enrichment pipeline. The indexer caches the results from document cracking and the output of each skill for every document. When a data source needs to be re-indexed due to a skillset update (a new or updated skill), each of the previously enriched documents is read from the cache, and only the affected skills, those changed or downstream of the change, are re-run. The updated results are written to the cache, and the document is updated in the index and in the knowledge store.

Physically, the cache is a storage account. All indexers within a search service may share the same storage account for the indexer cache. Each indexer is assigned a unique cache ID that is immutable.

### Cache configuration

You will need to set the cache property on the indexer to start realizing the benefits of incremental indexing. Setting this property for the first time also requires you to reset your indexer, which results in all documents in your data source being processed again. The goal of incremental indexing is to make the documents in your index consistent with your data source as defined by the current version of your skillset. To validate this consistency, a reset ensures that there are no documents in your index from previous versions of the skillset; the indexer needs to start from a consistent baseline.

### Cache lifecycle

The lifecycle of the cache is managed by the indexer. If the cache property on the indexer is set to null, or the connection string is changed, the existing cache is deleted. The cache lifecycle is also tied to the indexer lifecycle: if an indexer is deleted, the associated cache is deleted as well.

```json
{
    "name": "myIndexerName",
    "targetIndexName": "myIndex",
    "dataSourceName": "myDatasource",
    "skillsetName": "mySkillset",
    "cache": {
        "storageConnectionString": "Your storage account connection string",
        "enableReprocessing": true,
        "id": "Auto generated Id you do not need to set"
    },
    "fieldMappings": [],
    "outputFieldMappings": [],
    "parameters": {}
}
```

## Indexer cache mode

The indexer cache can operate in two modes: data is only written to the cache, or data is written to the cache and used to re-enrich documents. You can temporarily suspend incremental enrichment by setting the `enableReprocessing` property in the cache to false, and later resume incremental enrichment and drive toward eventual consistency by setting it to true. This is particularly useful when you want to prioritize indexing new documents over ensuring consistency across your existing corpus of documents.
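For example, to suspend re-enrichment while continuing to populate the cache, you might update the indexer's cache object as follows. This is a minimal sketch: only the cache property is shown, the connection string is a placeholder, and the rest of the indexer definition is elided.

```json
{
    "cache": {
        "storageConnectionString": "Your storage account connection string",
        "enableReprocessing": false
    }
}
```

Setting `enableReprocessing` back to true resumes incremental enrichment and drives the corpus toward eventual consistency.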

## Change detection override

Incremental indexing gives you granular control over all aspects of the enrichment pipeline. This allows you to deal with situations where a change might have unintended consequences. For example, editing a skillset and updating the URL for a custom skill will result in the indexer invalidating the cached results for that skill. If you are only moving the endpoint to a different VM, or redeploying your skill with a new access key, you really don't want any existing documents reprocessed.

To ensure that the indexer performs only the enrichments you explicitly require, updates to the skillset can optionally set the `disableCacheReprocessingChangeDetection` query string parameter to `true`. When set, this parameter ensures that only the updates to the skillset are committed and that the change is not evaluated for effects on the existing corpus.
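The override is passed as a query string parameter on the skillset update (PUT) request. The following is a hypothetical sketch in Python; the helper name, service name, and skillset name are illustrative, and only the parameter name and API version come from this article.

```python
# Hedged sketch: building a skillset update URL that carries the
# disableCacheReprocessingChangeDetection override, so the update is
# committed without re-evaluating the existing corpus.
from urllib.parse import urlencode

def build_skillset_update_url(service, skillset, disable_change_detection=False):
    """Return the PUT URL for updating a skillset, optionally suppressing
    cache reprocessing change detection."""
    params = {"api-version": "2019-05-06-Preview"}
    if disable_change_detection:
        params["disableCacheReprocessingChangeDetection"] = "true"
    return (f"https://{service}.search.windows.net/skillsets/{skillset}"
            f"?{urlencode(params)}")

url = build_skillset_update_url("myservice", "mySkillset",
                                disable_change_detection=True)
print(url)
```

The PUT body would be the updated skillset definition itself; only the URL construction is shown here.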

## Cache invalidation

The converse of that scenario is one where you deploy a new version of a custom skill. Nothing within the enrichment pipeline changes, but you need a specific skill invalidated and all affected documents reprocessed to reflect the benefits of the updated model. In such instances, you can call the reset skills operation on the skillset. The reset skills API accepts a POST request with the list of skill outputs in the cache that should be invalidated. For more information on the reset skills API, see the documentation <link>
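A hypothetical reset skills request body might look like the following. The `skillNames` property name is an assumption based on the ResetSkills description later in this article, and the skill names are placeholders.

```json
{
    "skillNames": [
        "myOcrSkill",
        "myCustomEntitySkill"
    ]
}
```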

## Change detection

Now that indexers can move forward to process new documents and move backward to drive previously processed documents to consistency, it is important to understand how changes to the components of your enrichment pipeline result in the indexer performing work on your behalf. The indexer queues work to be done when it identifies a change that is either invalidating or inconsistent.

### Invalidating changes

Invalidating changes are rare but have a significant effect on the state of your enrichment pipeline. An invalidating change is one where the entire cache is no longer valid; an example is an update to your data source. For scenarios where you know that the change should not invalidate the cache, like rotating the key on the storage account, set the `ignoreResetRequirement` query string parameter to true on the update operation of the specific resource to ensure that the operation is not rejected.

Here is the complete list of changes that would invalidate your cache:

1. Change to your data source type
2. Change to the data source container
3. Data source credentials
4. Data source change detection policy
5. Data source delete detection policy
6. Indexer field mappings
7. Indexer parameters
    1. Parsing mode
    2. Excluded file name extensions
    3. Indexed file name extensions
    4. Index storage metadata only for oversized documents
    5. Delimited text headers
    6. Delimited text delimiter
    7. Document root
    8. Image action (changes to how images are extracted)
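As with the skillset override, `ignoreResetRequirement` travels as a query string parameter on the resource update. A hypothetical example for a data source update follows; the service and data source names are placeholders, and only the parameter name and API version come from this article.

```python
# Hedged sketch: a data source update URL carrying ignoreResetRequirement=true,
# so a benign change (for example, a rotated storage account key) is not
# treated as invalidating and the update is not rejected.
from urllib.parse import urlencode

params = urlencode({
    "api-version": "2019-05-06-Preview",
    "ignoreResetRequirement": "true",
})
url = f"https://myservice.search.windows.net/datasources/myDatasource?{params}"
print(url)
```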
### Inconsistent changes

An example of an inconsistent change is an update to your skillset that modifies a skill. A portion of the cache is now inconsistent, and the indexer has identified work to perform to make things consistent again.

The complete list of changes resulting in cache inconsistency:

1. A skill in the skillset has a different type; the OData type of the skill is updated
2. Skill-specific parameters are updated, for example the URL, defaults, or other parameters
3. Skill outputs change; the skill returns additional or different outputs
4. Skill updates result in a different ancestry; skill chaining has changed, that is, skill inputs
5. Any upstream skill invalidation; if a skill that provides an input to this skill is updated
6. Updates to the knowledge store projection location, resulting in re-projecting documents
7. Changes to the knowledge store projections, resulting in re-projecting documents
8. Output field mappings changed on an indexer, resulting in re-projecting documents to the index

## Updates to existing APIs

Introducing incremental indexing and enrichment results in an update to some existing APIs.

### Indexers

Indexers now expose a new property:

1. Cache
    1. StorageAccountConnectionString: The connection string to the storage account that will be used to cache the intermediate results.
    2. CacheId: The cacheId is the identifier of the container within the annotationCache storage account that will be used as the cache for this indexer. This cache is unique to this indexer, and if the indexer is deleted and recreated with the same name, the cacheId will be regenerated. The cacheId cannot be set; it is always generated by the service.
    3. EnableReprocessing: Set to true by default. When set to false, documents continue to be written to the cache, but no existing documents are reprocessed based on the cache data.

### Skillset

Skillsets will support a new operation:

1. ResetSkills: The reset skills API accepts a POST request with a payload containing the list of skill names that need to be invalidated.

## Best practices

The recommended approach to incremental indexing is to set the cache property when defining a new indexer, or to reset an existing indexer and then set the cache property.

Use `ignoreResetRequirement` sparingly, as it could lead to unintended inconsistencies in your data that will not be detected easily.

## Takeaways

Incremental indexing is a powerful feature that allows you to declaratively ensure that the data from your data source is always consistent with the data in your search index or knowledge store. As your skills, skillsets, or enrichments evolve, the enrichment pipeline ensures that the least possible work is performed to drive your documents to eventual consistency.

## Next steps

Get started with incremental indexing by adding a cache to an existing indexer, or add the cache when defining a new indexer.

> [!div class="nextstepaction"]
> [Reference: Create Indexer](cognitive-search-quickstart-blob.md)

articles/search/knowledge-store-projection-overview.md

@@ -18,16 +18,18 @@ ms.subservice: cognitive-search

Azure Search enables content enrichment through AI cognitive skills and custom skills as part of indexing. Enrichments add structure to your documents and make searching more effective. In many instances, the enriched documents are useful for scenarios other than search, such as for knowledge mining.

Projections, a component of [knowledge store](knowledge-store-concept-intro.md), are views of enriched documents that can be saved to physical storage for knowledge mining purposes. A projection lets you "project" your data into a shape that aligns with your needs, preserving relationships so that tools like Power BI can read the data with no additional effort.

Projections can be tabular, with data stored in rows and columns in Azure Table storage, or JSON objects stored in Azure Blob storage. You can define multiple projections of your data as it is being enriched. This is useful when you want the same data shaped differently for individual use cases.

The knowledge store supports three types of projections:

+ **Tables**: For data that is best represented as rows and columns, table projections allow you to define a schematized shape or projection in Table storage.

+ **Objects**: When you need a JSON representation of your data and enrichments, object projections are saved as blobs.

+ **Files**: In scenarios where you need to save the images extracted from the documents, file projections allow you to save the normalized images.

To see projections defined in context, step through [How to get started with knowledge store](knowledge-store-howto.md).

## Projection groups
@@ -36,14 +38,12 @@ In some cases, you will need to project your enriched data in different shapes t

### Mutual exclusivity

All content projected into a single group is independent of data projected into other projection groups. This implies that you can have the same data shaped differently, yet repeated in each projection group.

### Relatedness

All content projected within a single projection group preserves relationships within the data across projection types. Within tables, relationships are based on a generated key, and each child node retains a reference to the parent node. Across types (tables, objects, and files), relationships are preserved when a single node is projected across different types. For example, consider a scenario where you have a document containing images and text. You could project the text to tables or objects and the images to files, where the tables or objects have a property containing the file URL.

## Input shaping

Getting your data in the right shape or structure is key to effective use, be it tables or objects. The ability to shape or structure your data based on how you plan to access and use it is a key capability exposed as the **Shaper** skill within the skillset.
@@ -62,7 +62,7 @@ You can project a single document in your index into multiple tables, preserving

When defining a table projection within the `knowledgeStore` element of your skillset, start by mapping a node on the enrichment tree to the table source. Typically this node is the output of a **Shaper** skill that you added to the list of skills to produce a specific shape that you need to project into tables. The node you choose to project can be sliced to project into multiple tables. The tables definition is a list of tables that you want to project.

#### Projection slicing

When defining a table projection group, a single node in the enrichment tree can be sliced into multiple related tables. Adding a table with a source path that is a child of an existing table projection results in the child node being sliced out of the parent node and projected into the new, related table. This allows you to define a single node in a Shaper skill that can be the source for all of your table projections.

Each table requires three properties: