refactor(targets): rename storage to target (#625)
* docs(targets): rename `storage` to `target`
* refactor(targets): update Python to use `targets` instead of `storages`
* refactor(targets): rename `storages` to `targets` for remaining code
* examples: revert upgrades for now - will bring back after release
docs/docs/core/basics.md (7 additions, 7 deletions)
@@ -9,15 +9,15 @@ An **index** is a collection of data stored in a way that is easy for retrieval.
CocoIndex is an ETL framework for building indexes from specified data sources, a.k.a. **indexing**. It also offers utilities for users to retrieve data from the indexes.
-An **indexing flow** extracts data from specified data sources, upon specified transformations, and puts the transformed data into specified storage for later retrieval.
+An **indexing flow** extracts data from specified data sources, upon specified transformations, and puts the transformed data into specified target for later retrieval.
## Indexing flow elements
An indexing flow has two aspects: data and operations on data.
### Data
-An indexing flow involves source data and transformed data (either as an intermediate result or the final result to be put into storage). All data within the indexing flow has **schema** determined at flow definition time.
+An indexing flow involves source data and transformed data (either as an intermediate result or the final result to be put into targets). All data within the indexing flow has **schema** determined at flow definition time.
Each piece of data has a **data type**, falling into one of the following categories:
@@ -36,8 +36,8 @@ An **operation** in an indexing flow defines a step in the flow. An operation is
* **Action**, which defines the behavior of the operation, e.g. *import*, *transform*, *for each*, *collect* and *export*.
See [Flow Definition](flow_def) for more details for each action.
-* Some actions (i.e. "import", "transform" and "export") require an **Operation Spec**, which describes the specific behavior of the operation, e.g. a source to import from, a function describing the transformation behavior, a target storage to export to (as an index).
-* Each operation spec has a **operation type**, e.g. `LocalFile` (data source), `SplitRecursively` (function), `SentenceTransformerEmbed` (function), `Postgres` (storage).
+* Some actions (i.e. "import", "transform" and "export") require an **Operation Spec**, which describes the specific behavior of the operation, e.g. a source to import from, a function describing the transformation behavior, a target to export to (as an index).
+* Each operation spec has an **operation type**, e.g. `LocalFile` (data source), `SplitRecursively` (function), `SentenceTransformerEmbed` (function), `Postgres` (target).
* CocoIndex framework maintains a set of supported operation types. Users can also implement their own.
"import" and "transform" operations produce output data, whose data type is determined based on the operation spec and data types of input data (for "transform" operation only).
@@ -62,11 +62,11 @@ This shows schema and example data for the indexing flow:
## Life cycle of an indexing flow
-An indexing flow, once set up, maintains a long-lived relationship between data source and data in target storage. This means:
+An indexing flow, once set up, maintains a long-lived relationship between data source and target. This means:

-1. The target storage created by the flow remain available for querying at any time
+1. The target created by the flow remains available for querying at any time

-2. As source data changes (new data added, existing data updated or deleted), data in the target storage are updated to reflect those changes,
+2. As source data changes (new data added, existing data updated or deleted), data in the target are updated to reflect those changes,
at a certain pace, according to the update mode:
* **One time update**: Once triggered, CocoIndex updates the target data to reflect the version of source data up to the current moment.
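To make the renamed concepts above concrete, here is a minimal sketch of an indexing flow using the post-rename `cocoindex.targets` module. It is illustrative only: the source path, field names, chunking parameters and embedding model are assumptions in the style of the CocoIndex quickstart, not part of this PR, and exact signatures may differ.

```python
import cocoindex

@cocoindex.flow_def(name="TextEmbedding")
def text_embedding_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
    # Import: bring rows in from a data source (the path is a placeholder).
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.LocalFile(path="markdown_files"))

    doc_embeddings = data_scope.add_collector()

    # For each row: transform document content into chunks, then embed each chunk.
    with data_scope["documents"].row() as doc:
        doc["chunks"] = doc["content"].transform(
            cocoindex.functions.SplitRecursively(),
            language="markdown", chunk_size=300, chunk_overlap=100)
        with doc["chunks"].row() as chunk:
            chunk["embedding"] = chunk["text"].transform(
                cocoindex.functions.SentenceTransformerEmbed(
                    model="sentence-transformers/all-MiniLM-L6-v2"))
            # Collect: gather the fields to be exported.
            doc_embeddings.collect(
                filename=doc["filename"], location=chunk["location"],
                text=chunk["text"], embedding=chunk["embedding"])

    # Export: write the collected rows into a target (formerly a "storage").
    doc_embeddings.export(
        "doc_embeddings",
        cocoindex.targets.Postgres(),
        primary_key_fields=["filename", "location"])
```

After `cocoindex setup`, a one-time `cocoindex update` (or a live updater) brings the `doc_embeddings` target in line with the current source data, matching the life-cycle description above.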
docs/docs/core/cli.mdx (1 addition, 1 deletion)
@@ -61,7 +61,7 @@ The following subcommands are available:
| ---------- | ----------- |
|`ls`| List all flows present in the given file/module. Or list all persisted flows under the current app namespace if no file/module specified. |
|`show`| Show the spec and schema for a specific flow. |
-|`setup`| Check and apply backend setup changes for flows, including the internal and target storage (to export). |
+|`setup`| Check and apply backend setup changes for flows, including the internal storage and target (to export). |
|`drop`| Drop the backend setup for specified flows. |
|`update`| Update the index defined by the flow. |
|`evaluate`| Evaluate the flow and dump flow outputs to files. Instead of updating the index, it dumps what should be indexed to files. Mainly used for evaluation purposes. |
docs/docs/core/data_types.mdx (3 additions, 3 deletions)
@@ -13,7 +13,7 @@ This makes schema of data processed by CocoIndex clear, and easily determine the
You don't need to spell out data types in CocoIndex, when you define the flow using existing operations (source, function, etc).
15
15
These operations decide data types of fields produced by them based on the spec and input data types.
-All you need to do is to make sure the data passed to functions and storage targets are accepted by them.
+All you need to do is to make sure the data passed to functions and targets are accepted by them.
When you define [custom functions](/docs/core/custom_function), you need to specify the data types of arguments and return values.
@@ -40,7 +40,7 @@ This is the list of all basic types supported by CocoIndex:
| Vector[*T*, *Dim*?]|*T* can be a basic type or a numeric type. *Dim* is a positive integer and optional. |`cocoindex.Vector[T]` or `cocoindex.Vector[T, Dim]`|`numpy.typing.NDArray[T]` or `list[T]`|
Values of all data types can be represented by values in Python's native types (as described under the Native Python Type column).
-However, the underlying execution engine and some storage system (like Postgres) has finer distinctions for some types, specifically:
+However, the underlying execution engine has finer distinctions for some types, specifically:

* *Float32* and *Float64* for `float`, with different precision.
* *LocalDateTime* and *OffsetDateTime* for `datetime.datetime`, with different timezone awareness.
@@ -50,7 +50,7 @@ However, the underlying execution engine and some storage system (like Postgres)
The native Python type is always more permissive and can represent a superset of possible values.
* Only when you annotate the return type of a custom function, you should use the specific type,
-so that CocoIndex will have information about the precise type to be used in the execution engine and storage system.
+so that CocoIndex will have information about the precise type to be used in the execution engine and target.
* For all other purposes, e.g. to provide annotation for argument types of a custom function, or used internally in your custom function,
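As a hedged illustration of the two bullets above, a custom function might annotate its return value with the specific type while keeping permissive native types for arguments. The `@cocoindex.op.function()` decorator and the `cocoindex.Float32` annotation below are assumptions based on the custom-function and data-type docs, not something this PR changes.

```python
import cocoindex

@cocoindex.op.function()
def length_score(text: str) -> cocoindex.Float32:
    # The argument uses the permissive native type (str); the return annotation
    # uses the specific Float32 type, so CocoIndex knows the precise type to use
    # in the execution engine and the target.
    return len(text) / 1000.0
```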
docs/docs/core/flow_def.mdx (18 additions, 18 deletions)
@@ -1,14 +1,14 @@
---
title: Flow Definition
-description: Define a CocoIndex flow, by specifying source, transformations and storages, and connect input/output data of them.
+description: Define a CocoIndex flow, by specifying source, transformations and targets, and connect input/output data of them.
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
# CocoIndex Flow Definition
-In CocoIndex, to define an indexing flow, you provide a function to import source, transform data and put them into target storage (sinks).
+In CocoIndex, to define an indexing flow, you provide a function to import source, transform data and put them into targets.
You connect input/output of these operations with fields of data scopes.
## Entry Point
@@ -246,14 +246,14 @@ and generates a `id` field with UUID and remains stable when `filename` and `sum
### Export
-The `export()` method exports the collected data to an external storage.
+The `export()` method exports the collected data to an external target.

-A *storage spec* needs to be provided for any export operation, to describe the storage and parameters related to the storage.
+A *target spec* needs to be provided for any export operation, to describe the target and parameters related to the target.
Export must happen at the top level of a flow, i.e. not within any child scopes created by "for each row". It takes the following arguments:
* `name`: the name to identify the export target.
-* `target_spec`: the storage spec as the export target.
+* `target_spec`: the target spec as the export target.
* `setup_by_user` (optional):
whether the export target is set up by the user.
By default, CocoIndex is managing the target setup (surfaced by the `cocoindex setup` CLI subcommand), e.g. create related tables/collections/etc. with compatible schema, and update them upon change.
-The target storage is managed by CocoIndex, i.e. it'll be created by [CocoIndex CLI](/docs/core/cli) when you run `cocoindex setup`, and the data will be automatically updated (including stale data removal) when updating the index.
-The `name` for the same storage should remain stable across different runs.
-If it changes, CocoIndex will treat it as an old storage removed and a new one created, and perform setup changes and reindexing accordingly.
+The target is managed by CocoIndex, i.e. it'll be created by [CocoIndex CLI](/docs/core/cli) when you run `cocoindex setup`, and the data will be automatically updated (including stale data removal) when updating the index.
+The `name` for the same target should remain stable across different runs.
+If it changes, CocoIndex will treat it as an old target removed and a new one created, and perform setup changes and reindexing accordingly.
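For instance, an `export()` call after this rename could look like the sketch below. The collector, field names and index options are placeholders; `cocoindex.targets.Postgres`, `VectorIndexDef` and `VectorSimilarityMetric` follow names used elsewhere in the CocoIndex docs, but treat the exact signature as an assumption.

```python
doc_embeddings.export(
    "doc_embeddings",               # `name`: keep this stable across runs
    cocoindex.targets.Postgres(),   # `target_spec` (previously under cocoindex.storages)
    primary_key_fields=["filename", "location"],
    vector_indexes=[
        cocoindex.VectorIndexDef(
            field_name="embedding",
            metric=cocoindex.VectorSimilarityMetric.COSINE_SIMILARITY)],
)
```

Renaming `"doc_embeddings"` later would be treated as dropping the old target and creating a new one, per the note above.
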
## Storage Indexes
-Many storage supports indexes, to boost efficiency in retrieving data.
-CocoIndex provides a common way to configure indexes for various storages.
+Many targets are storage systems supporting indexes, to boost efficiency in retrieving data.
+CocoIndex provides a common way to configure indexes for various targets.
@@ -345,8 +345,8 @@ It will use `Staging__doc_embeddings` as the collection name if the current app
### Target Declarations
-Most time a target storage is created by calling `export()` method on a collector, and this `export()` call comes with configurations needed for the target storage, e.g. options for storage indexes.
-Occasionally, you may need to specify some configurations for target storage out of the context of any specific data collector.
+Most of the time, a target is created by calling the `export()` method on a collector, and this `export()` call comes with configurations needed for the target, e.g. options for storage indexes.
+Occasionally, you may need to specify some configurations for the target out of the context of any specific data collector.
For example, for graph database targets like `Neo4j` and `Kuzu`, you may have a data collector to export data to relationships, which will create nodes referenced by various relationships in turn.
These nodes don't directly come from any specific data collector (consider relationships from different data collectors may share the same nodes).
@@ -359,7 +359,7 @@ To specify configurations for these nodes, you can *declare* spec for related no
```python
flow_builder.declare(
-    cocoindex.storages.Neo4jDeclarations(...)
+    cocoindex.targets.Neo4jDeclarations(...)
)
364
364
```
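The `...` above is elided in the diff. Purely as an illustration of what a declaration might carry, a filled-in call could look like the sketch below; the field names (`connection`, `nodes_label`, `primary_key_fields`) are hypothetical placeholders, not confirmed by this PR, and `my_graph_conn` refers to the auth entry defined below via `cocoindex.add_auth_entry()`.

```python
# Hypothetical fields, for illustration only.
flow_builder.declare(
    cocoindex.targets.Neo4jDeclarations(
        connection=my_graph_conn,      # an auth entry reference, defined below
        nodes_label="Place",
        primary_key_fields=["name"],
    )
)
```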
@@ -389,7 +389,7 @@ You can add an auth entry by `cocoindex.add_auth_entry()` function, which return
```python
my_graph_conn = cocoindex.add_auth_entry(
    "my_graph_conn",
-    cocoindex.storages.Neo4jConnectionSpec(
+    cocoindex.targets.Neo4jConnectionSpec(
        uri="bolt://localhost:7687",
        user="neo4j",
        password="cocoindex",
@@ -403,7 +403,7 @@ Then reference it when building a spec that takes an auth entry: