diff --git a/.vscode/settings.json b/.vscode/settings.json
new file mode 100644
index 000000000..795857a39
--- /dev/null
+++ b/.vscode/settings.json
@@ -0,0 +1,7 @@
+{
+ "cSpell.words": [
+ "cocoindex",
+ "reindexing",
+ "timedelta"
+ ]
+}
\ No newline at end of file
diff --git a/docs/docs/core/flow_def.mdx b/docs/docs/core/flow_def.mdx
index 7bacfc1ff..15c07f6cd 100644
--- a/docs/docs/core/flow_def.mdx
+++ b/docs/docs/core/flow_def.mdx
@@ -49,12 +49,55 @@ See [Flow Running](/docs/core/flow_methods) for more details on it.
-## Flow Builder
+## Data Scope
+
+A **data scope** represents the data for a certain unit, e.g. the top-level scope (covering all data for a flow), the scope for a document, or the scope for a chunk.
+A data scope has a set of fields and collectors, and users can add new fields and collectors to it.
+
+### Get or Add a Field
+
+You can get or add a field of a data scope; each field of a data scope is a data slice.
+
+:::note
-The `FlowBuilder` object is the starting point to construct a flow.
+You cannot override an existing field.
+
+:::
+
+Getting and setting a field of a data scope is done by the `[]` operator with a field name:
+
+```python
+@cocoindex.flow_def(name="DemoFlow")
+def demo_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
+
+ # Add "documents" to the top-level data scope.
+ data_scope["documents"] = flow_builder.add_source(DemoSourceSpec(...))
+
+ # Each row of "documents" is a child scope.
+ with data_scope["documents"].row() as document:
+
+ # Get "content" from the document scope, transform, and add "summary" to scope.
+        document["summary"] = document["content"].transform(DemoFunctionSpec(...))
+```
+
+### Add a Collector
+
+See [Data Collector](#data-collector) below for more details.
+
+## Data Slice
+
+A **data slice** references a subset of data belonging to a data scope, e.g. a specific field from a data scope.
+A data slice has a certain data type, and it's the input for most operations.
### Import from source
+To get the initial data slice, we need to start by importing data from a source.
`FlowBuilder` provides a `add_source()` method to import data from external sources.
A *source spec* needs to be provided for any import operation, to describe the source and parameters related to the source.
Import must happen at the top level, and the field created by import must be in the top-level struct.
@@ -72,10 +115,6 @@ def demo_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataSco
-`add_source()` returns a `DataSlice`. Once external data sources are imported, you can further transform them using methods exposed by these data objects, as discussed in the following sections.
-
-We'll describe different data objects in next few sections.
-
:::note
The actual value of data is not available at the time when we define the flow: it's only available at runtime.
@@ -111,51 +150,6 @@ and only perform transformations on changed source keys.
:::
-## Data Scope
-
-A **data scope** represents data for a certain unit, e.g. the top level scope (involving all data for a flow), for a document, or for a chunk.
-A data scope has a bunch of fields and collectors, and users can add new fields and collectors to it.
-
-### Get or Add a Field
-
-You can get or add a field of a data scope (which is a data slice).
-
-:::note
-
-You cannot override an existing field.
-
-:::
-
-
-
-
-Getting and setting a field of a data scope is done by the `[]` operator with a field name:
-
-```python
-@cocoindex.flow_def(name="DemoFlow")
-def demo_flow(flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope):
-
- # Add "documents" to the top-level data scope.
- data_scope["documents"] = flow_builder.add_source(DemoSourceSpec(...))
-
- # Each row of "documents" is a child scope.
- with data_scope["documents"].row() as document:
-
- # Get "content" from the document scope, transform, and add "summary" to scope.
- document["summary"] = field1_row["content"].transform(DemoFunctionSpec(...))
-```
-
-
-
-
-### Add a collector
-
-See [Data Collector](#data-collector) below for more details.
-
-## Data Slice
-
-A **data slice** references a subset of data belonging to a data scope, e.g. a specific field from a data scope.
-A data slice has a certain data type, and it's the input for most operations.
### Transform
@@ -164,7 +158,7 @@ A *function spec* needs to be provided for any transform operation, to describe
The function takes one or multiple data arguments.
The first argument is the data slice to be transformed, and the `transform()` method is applied from it.
-Other arguments can be passed in as positional arguments or keyword arguments, aftert the function spec.
+Other arguments can be passed in as positional arguments or keyword arguments, after the function spec.
@@ -300,6 +294,29 @@ CocoIndex provides a common way to configure indexes for various storages.
## Miscellaneous
+### Target Declarations
+
+Most of the time, a target storage is created by calling the `export()` method on a collector, and this `export()` call carries the configurations needed for the target storage, e.g. options for storage indexes.
+Occasionally, you may need to specify some configurations for a target storage outside the context of any specific data collector.
+
+For example, for graph database targets like `Neo4j`, you may have a data collector that exports data to Neo4j relationships, which in turn creates the nodes referenced by those relationships.
+These nodes don't directly come from any specific data collector (relationships from different data collectors may share the same nodes).
+To specify configurations for these nodes, you can *declare* a spec for the related node labels.
+
+`FlowBuilder` provides a `declare()` method for this purpose, which takes the spec to declare, as provided by various target types.
+
+```python
+flow_builder.declare(
+ cocoindex.storages.Neo4jDeclarations(...)
+)
+```
+
### Auth Registry
CocoIndex manages an auth registry. It's an in-memory key-value store, mainly to store authentication information for a backend.
@@ -310,11 +327,10 @@ Operation spec is the default way to configure a backend. But it has the followi
* Once an operation is removed after flow definition code change, the spec is also gone.
But we still need to be able to drop the backend (e.g. a table) by `cocoindex setup` or `cocoindex drop`.
-
Auth registry is introduced to solve the problems above. It works as follows:
* You can create new **auth entry** by a key and a value.
-* You can references the entry by the key, and pass it as part of spec for certain operations. e.g. `Neo4jRelationship` takes `connection` field in the form of auth entry reference.
+* You can reference the entry by the key, and pass it as part of the spec for certain operations, e.g. `Neo4j` takes a `connection` field in the form of an auth entry reference.
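+
+For example, a sketch of creating an entry and obtaining a reference to it (treat the function name `add_auth_entry` and the connection parameters as illustrative, based on the description above):
+
+```python
+# Create an auth entry under a key, holding the connection information.
+my_graph_conn = cocoindex.add_auth_entry(
+    "my_graph_conn",
+    cocoindex.storages.Neo4jConnectionSpec(
+        uri="bolt://localhost:7687",
+        user="neo4j",
+        password="cocoindex",
+    ),
+)
+```
+
+The returned reference can then be passed as the `connection` field of a spec that accepts an auth entry reference.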
diff --git a/docs/docs/core/flow_methods.mdx b/docs/docs/core/flow_methods.mdx
index c03e970d4..b37c1921b 100644
--- a/docs/docs/core/flow_methods.mdx
+++ b/docs/docs/core/flow_methods.mdx
@@ -62,7 +62,7 @@ This action has two modes:
:::info
For both modes, CocoIndex is performing *incremental processing*,
-i.e. we only performs computations and storage mutations on source data that are changed, or the flow has changed.
+i.e. we only perform computations and storage mutations on source data that are changed, or the flow has changed.
This is to achieve best efficiency.
:::
diff --git a/docs/docs/ops/storages.md b/docs/docs/ops/storages.md
index 5bb571b62..54f5ce7ee 100644
--- a/docs/docs/ops/storages.md
+++ b/docs/docs/ops/storages.md
@@ -1,24 +1,61 @@
---
title: Storages
description: CocoIndex Built-in Storages
+toc_max_heading_level: 4
---
# CocoIndex Built-in Storages
-## Postgres
+For each target storage, data is exported from a data collector, which contains multiple rows, each with multiple fields.
+The way data is mapped from a data collector to a target storage depends on the data model of the target storage.
+
+## Entry-Oriented Targets
+
+An entry-oriented storage organizes data into independent entries, such as rows, key-value pairs, or documents.
+Each entry is self-contained and does not explicitly link to others.
+There is usually a straightforward mapping from data collector rows to entries.
+
+### Postgres
Exports data to Postgres database (with pgvector extension).
+#### Data Mapping
+
+Here's how CocoIndex data elements map to Postgres elements during export:
+
+| CocoIndex Element | Postgres Element |
+|-------------------|------------------|
+| an export target | a unique table |
+| a collected row | a row |
+| a field | a column |
+
+For example, if you have a data collector that collects rows with fields `id`, `title`, and `embedding`, they will be exported to a Postgres table with corresponding columns.
+The table should be unique to this export target: no other export target should export to the same table.
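+
+A minimal sketch of such an export call (the collector and field names are illustrative; see the spec below for the exact options):
+
+```python
+# Each collected row becomes a row in the "documents" table.
+doc_collector.export(
+    "documents",
+    cocoindex.storages.Postgres(table_name="documents"),
+    primary_key_fields=["id"],
+)
+```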
+
+#### Spec
+
The spec takes the following fields:
* `database_url` (type: `str`, optional): The URL of the Postgres database to use as the internal storage, e.g. `postgres://cocoindex:cocoindex@localhost/cocoindex`. If unspecified, will use the same database as the [internal storage](/docs/core/basics#internal-storage).
* `table_name` (type: `str`, optional): The name of the table to store to. If unspecified, will generate a new automatically. We recommend specifying a name explicitly if you want to directly query the table. It can be omitted if you want to use CocoIndex's query handlers to query the table.
-## Qdrant
+### Qdrant
Exports data to a [Qdrant](https://qdrant.tech/) collection.
+#### Data Mapping
+
+Here's how CocoIndex data elements map to Qdrant elements during export:
+
+| CocoIndex Element | Qdrant Element |
+|-------------------|------------------|
+| an export target | a unique collection |
+| a collected row | a point |
+| a field | a named vector (for fields with vector type); a field within payload (otherwise) |
+
+#### Spec
+
The spec takes the following fields:
* `collection_name` (type: `str`, required): The name of the collection to export the data to.
@@ -46,9 +83,97 @@ doc_embeddings.export(
You can find an end-to-end example [here](https://github.com/cocoindex-io/cocoindex/tree/main/examples/text_embedding_qdrant).
-## Neo4j
+## Property Graph Targets
+
+Property graph is a graph data model where both nodes and relationships can have properties.
+
+### Data Mapping
+
+In CocoIndex, you can export data to property graph databases.
+This usually involves more than one collector, and you export them to different types of graph elements (nodes and relationships).
+In particular:
+
+1. You can export rows from some collectors to nodes in the graph.
+2. You can export rows from some other collectors to relationships in the graph.
+3. Some nodes referenced by relationships exported in 2 may not exist as nodes exported in 1.
+ CocoIndex will automatically create and keep these nodes, as long as they're still referenced by at least one relationship.
+ This guarantees that all relationships exported in 2 are valid.
+
+We provide the common types `NodeMapping`, `RelationshipMapping`, and `ReferencedNode` to configure each of these cases.
+They're agnostic to specific graph databases.
+
+#### Nodes
+
+Here's how CocoIndex data elements map to nodes in the graph:
+
+| CocoIndex Element | Graph Element |
+|-------------------|------------------|
+| an export target | nodes with a unique label |
+| a collected row | a node |
+| a field | a property of node |
+
+Note that the label used in different `NodeMapping`s should be unique.
+
+`cocoindex.storages.NodeMapping` describes the mapping to nodes. It has the following fields:
+
+* `label` (type: `str`): The label of the node.
+
+For example, if you have a data collector that collects rows with fields `id`, `name`, and `gender`, it can be exported to nodes with label `Person` and properties `id`, `name`, and `gender`.
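+
+For instance, a sketch of exporting such a collector to `Person` nodes (the names are illustrative, and `conn` is assumed to be an auth entry reference for the graph connection):
+
+```python
+# Each collected row becomes a Person node; id/name/gender become node properties.
+person_collector.export(
+    "people",
+    cocoindex.storages.Neo4j(
+        connection=conn,
+        mapping=cocoindex.storages.NodeMapping(label="Person"),
+    ),
+    primary_key_fields=["id"],
+)
+```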
-If you don't have a Postgres database, you can start a Postgres SQL database for cocoindex using our docker compose config:
+#### Relationships
+
+Here's how CocoIndex data elements map to relationships in the graph:
+
+| CocoIndex Element | Graph Element |
+|-------------------|------------------|
+| an export target | relationships with a unique type |
+| a collected row | a relationship |
+| a field | a property of relationship, or a property of source/target node, based on configuration |
+
+Note that the type used in different `RelationshipMapping`s should be unique.
+
+`cocoindex.storages.RelationshipMapping` describes the mapping to relationships. It has the following fields:
+
+* `rel_type` (type: `str`): The type of the relationship.
+* `source`/`target` (type: `cocoindex.storages.NodeReferenceMapping`): Specify how to extract source/target node information from the collected row. It has the following fields:
+ * `label` (type: `str`): The label of the node.
+ * `fields` (type: `Sequence[cocoindex.storages.TargetFieldMapping]`): Specify field mappings from the collected rows to node properties, with the following fields:
+ * `source` (type: `str`): The name of the field in the collected row.
+ * `target` (type: `str`, optional): The name of the field to use as the node field. If unspecified, will use the same as `source`.
+
+ :::note Map necessary fields for nodes of relationships
+
+ You need to map the following fields for nodes of each relationship:
+
+ * Make sure all primary key fields for the label are mapped.
+ * Optionally, you can also map non-key fields. If you do so, please make sure all value fields are mapped.
+
+ :::
+
+All fields in the collector that are not used in mappings for source or target node fields will be mapped to relationship properties.
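+
+Putting this together, a sketch of a relationship export (the field and label names are illustrative, and `conn` is assumed to be an auth entry reference for the graph connection):
+
+```python
+# Each collected row becomes a KNOWS relationship between two Person nodes.
+friendship_collector.export(
+    "friendships",
+    cocoindex.storages.Neo4j(
+        connection=conn,
+        mapping=cocoindex.storages.RelationshipMapping(
+            rel_type="KNOWS",
+            source=cocoindex.storages.NodeReferenceMapping(
+                label="Person",
+                fields=[cocoindex.storages.TargetFieldMapping(source="person_id", target="id")],
+            ),
+            target=cocoindex.storages.NodeReferenceMapping(
+                label="Person",
+                fields=[cocoindex.storages.TargetFieldMapping(source="friend_id", target="id")],
+            ),
+        ),
+    ),
+    primary_key_fields=["person_id", "friend_id"],
+)
+```
+
+Any remaining collected fields (e.g. a `since` field) are mapped to properties of the `KNOWS` relationship.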
+
+#### Nodes only referenced by relationships
+
+If a node appears as the source or target of a relationship but is not exported via a `NodeMapping`, CocoIndex will automatically create it and keep it until it's no longer referenced by any relationship.
+
+:::note Merge of node values
+
+If the same node (as identified by its primary key values) appears multiple times (e.g. it's referenced by different relationships),
+CocoIndex uses the value fields provided by an arbitrary one of them.
+The best practice is to make the value fields consistent across different appearances of the same node, to avoid non-determinism in the exported graph.
+
+:::
+
+If a node's label specified in `NodeReferenceMapping` doesn't exist in any `NodeMapping`, you need to [declare](../core/flow_def#target-declarations) a `ReferencedNode` to configure [storage indexes](../core/flow_def#storage-indexes) for nodes with this label.
+The following options are supported:
+
+* `primary_key_fields` (required)
+* `vector_indexes` (optional)
+
+
+### Neo4j
+
+If you don't have a Neo4j database, you can start a Neo4j database using our docker compose config:
```bash
docker compose -f <(curl -L https://raw.githubusercontent.com/cocoindex-io/cocoindex/refs/heads/main/dev/neo4j.yaml) up -d
@@ -69,25 +194,10 @@ The `Neo4j` storage exports each row as a relationship to Neo4j Knowledge Graph.
* `user` (type: `str`): Username for the Neo4j database.
* `password` (type: `str`): Password for the Neo4j database.
* `db` (type: `str`, optional): The name of the Neo4j database to use as the internal storage, e.g. `neo4j`.
-* `mapping`: The mapping from collected row to nodes or relationships of the graph. 2 variations are supported:
- * `cocoindex.storages.NodeMapping`: Each collected row is mapped to a node in the graph. It has the following fields:
- * `label`: The label of the node.
- * `cocoindex.storages.RelationshipMapping`: Each collected row is mapped to a relationship in the graph,
- With the following fields:
-
- * `rel_type` (type: `str`): The type of the relationship.
- * `source`/`target` (type: `cocoindex.storages.NodeReferenceMapping`): The source/target node of the relationship, with the following fields:
- * `label` (type: `str`): The label of the node.
- * `fields` (type: `Sequence[cocoindex.storages.TargetFieldMapping]`): Map fields from the collector to nodes in Neo4j, with the following fields:
- * `source` (type: `str`): The name of the field in the collected row.
- * `target` (type: `str`, optional): The name of the field to use as the node field. If unspecified, will use the same as `source`.
-
- :::info
+* `mapping` (type: `NodeMapping | RelationshipMapping`): The mapping from a collected row to nodes or relationships of the graph.
- All fields specified in `fields.source` will be mapped to properties of source/target nodes. All remaining fields will be mapped to relationship properties by default.
+Neo4j also provides a declaration spec `Neo4jDeclaration`, to configure indexing options for nodes that are only referenced by relationships. It has the following fields:
- :::
+* `connection` (type: auth reference to `Neo4jConnectionSpec`)
+* `relationships` (type: `Sequence[ReferencedNode]`)
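+
+For example, a sketch of such a declaration (the label and key fields are illustrative, the field names follow the list above, and `conn` is assumed to be an auth entry reference for the graph connection):
+
+```python
+flow_builder.declare(
+    cocoindex.storages.Neo4jDeclaration(
+        connection=conn,
+        relationships=[
+            cocoindex.storages.ReferencedNode(
+                label="Person",
+                primary_key_fields=["id"],
+            ),
+        ],
+    )
+)
+```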
- * `nodes_storage_spec` (type: `dict[str, cocoindex.storages.NodeStorageSpec]`): This configures indexes for different node labels. Key is the node label. The value type `NodeStorageSpec` has the following fields to configure [storage indexes](../core/flow_def#storage-indexes) for the node.
- * `primary_key_fields` is required.
- * `vector_indexes` is also supported and optional.
\ No newline at end of file