Commit f59ae83

[doc] clean up sources subsections from the sources/index (#1184)
1 parent aaef433 commit f59ae83

File tree: 1 file changed

docs/docs/sources/index.md

Lines changed: 0 additions & 343 deletions
@@ -20,346 +20,3 @@ Related:
- [Life cycle of an indexing flow](/docs/core/basics#life-cycle-of-an-indexing-flow)
- [Live Update Tutorial](/docs/tutorials/live_updates) for change capture mechanisms.
## LocalFile

The `LocalFile` source imports files from a local file system.

### Spec

The spec takes the following fields:

* `path` (`str`): full path of the root directory to import files from
* `binary` (`bool`, optional): whether to read files as binary (instead of text)
* `included_patterns` (`list[str]`, optional): a list of glob patterns to include files, e.g. `["*.txt", "docs/**/*.md"]`.
  If not specified, all files will be included.
* `excluded_patterns` (`list[str]`, optional): a list of glob patterns to exclude files, e.g. `["tmp", "**/node_modules"]`.
  Any file or directory matching these patterns will be excluded even if they match `included_patterns`.
  If not specified, no files will be excluded.

:::info

`included_patterns` and `excluded_patterns` use Unix-style glob syntax. See [globset syntax](https://docs.rs/globset/latest/globset/index.html#syntax) for details.

:::

### Schema

The output is a [*KTable*](/docs/core/data_types#ktable) with the following sub fields:

* `filename` (*Str*, key): the filename of the file, including the path, relative to the root directory, e.g. `"dir1/file1.md"`
* `content` (*Str* if `binary` is `False`, *Bytes* otherwise): the content of the file
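For example, here's a minimal sketch of a flow using this source; the flow name, directory, and patterns are illustrative, not prescriptive:

```python
import cocoindex

@cocoindex.flow_def(name="LocalFileDemo")
def local_file_demo_flow(
    flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope
) -> None:
    # Import text files under ./documents, skipping anything in node_modules.
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.LocalFile(
            path="documents",
            included_patterns=["*.md", "*.txt"],
            excluded_patterns=["**/node_modules"],
        )
    )
    # Each row of `documents` exposes the `filename` (key) and `content`
    # fields described in the Schema section above.
```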
## AmazonS3

The `AmazonS3` source imports files from an Amazon S3 bucket.
### Setup for Amazon S3

#### Set up AWS accounts

You need an AWS account to own and access Amazon S3. In particular,

* Create an AWS account from the [AWS homepage](https://aws.amazon.com/) or log in with an existing account.
* AWS recommends that all programmatic access to AWS be done using [IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) instead of the root account. You can create an IAM user at the [AWS IAM Console](https://console.aws.amazon.com/iam/home).
* Make sure your IAM user has at least the following permissions in the IAM console:
  * Attach the permission policy `AmazonS3ReadOnlyAccess` for read-only access to Amazon S3.
  * (optional) Attach the permission policy `AmazonSQSFullAccess` to receive notifications from Amazon SQS, if you want to enable change event notifications.
    Note that `AmazonSQSReadOnlyAccess` is not enough, as we need to be able to delete messages from the queue after they're processed.
#### Set up credentials for AWS SDK

The AWS SDK needs credentials to access Amazon S3.
The easiest way to set up credentials is to run:

```sh
aws configure
```

It will create a credentials file at `~/.aws/credentials` and a config file at `~/.aws/config`.

See the following documents if you need more control:

* [`aws configure`](https://docs.aws.amazon.com/cli/v1/userguide/cli-configure-files.html)
* [Globally configuring AWS SDKs and tools](https://docs.aws.amazon.com/sdkref/latest/guide/creds-config-files.html)
#### Create Amazon S3 buckets

You can create an Amazon S3 bucket in the [Amazon S3 Console](https://s3.console.aws.amazon.com/s3/home) and upload your files to it.

You can also do this with the AWS CLI, using `aws s3 mb` (to create buckets) and `aws s3 cp` (to upload files).
When doing so, make sure your current user also has the permission policy `AmazonS3FullAccess`.
#### (Optional) Set up an SQS queue for event notifications

You can set up an Amazon Simple Queue Service (Amazon SQS) queue to receive change event notifications from Amazon S3.
It provides a change capture mechanism for your AmazonS3 data source, triggering reprocessing of your AWS S3 files on any creation, update or deletion. Please use a dedicated SQS queue for each of your S3 data sources.

To set it up:

* Create an SQS queue with a proper access policy.
  * In the [Amazon SQS Console](https://console.aws.amazon.com/sqs/home), create a queue.
  * Add access policy statements, to make sure Amazon S3 can send messages to the queue.

    ```json
    {
        ...
        "Statement": [
            ...
            {
                "Sid": "__publish_statement",
                "Effect": "Allow",
                "Principal": {
                    "Service": "s3.amazonaws.com"
                },
                "Resource": "${SQS_QUEUE_ARN}",
                "Action": "SQS:SendMessage",
                "Condition": {
                    "ArnLike": {
                        "aws:SourceArn": "${S3_BUCKET_ARN}"
                    }
                }
            }
        ]
    }
    ```

    Here, you need to replace `${SQS_QUEUE_ARN}` and `${S3_BUCKET_ARN}` with the actual ARNs of your SQS queue and S3 bucket.
    You can find the ARN of your SQS queue in the existing policy statement (it starts with `arn:aws:sqs:`), and the ARN of your S3 bucket in the S3 console (it starts with `arn:aws:s3:`).

* In the [Amazon S3 Console](https://s3.console.aws.amazon.com/s3/home), open your S3 bucket. Under the *Properties* tab, click *Create event notification*.
  * Fill in an arbitrary event name, e.g. `S3ChangeNotifications`.
  * If you want your AmazonS3 data source to expose a subset of files sharing a prefix, set the same prefix here. Otherwise, leave it empty.
  * Select the following event types: *All object create events*, *All object removal events*.
  * Select *SQS queue* as the destination, and specify the SQS queue you created above.

AWS's [Guide of Configuring a Bucket for Notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html#step1-create-sqs-queue-for-notification) provides more details.
### Spec

The spec takes the following fields:

* `bucket_name` (`str`): Amazon S3 bucket name.
* `prefix` (`str`, optional): if provided, only files with path starting with this prefix will be imported.
* `binary` (`bool`, optional): whether to read files as binary (instead of text).
* `included_patterns` (`list[str]`, optional): a list of glob patterns to include files, e.g. `["*.txt", "docs/**/*.md"]`.
  If not specified, all files will be included.
* `excluded_patterns` (`list[str]`, optional): a list of glob patterns to exclude files, e.g. `["*.tmp", "**/*.log"]`.
  Any file or directory matching these patterns will be excluded even if they match `included_patterns`.
  If not specified, no files will be excluded.

  :::info

  `included_patterns` and `excluded_patterns` use Unix-style glob syntax. See [globset syntax](https://docs.rs/globset/latest/globset/index.html#syntax) for details.

  :::

* `sqs_queue_url` (`str`, optional): if provided, the source will receive change event notifications from Amazon S3 via this SQS queue.

  :::info

  We will delete messages from the queue after they're processed.
  If there are unrelated messages in the queue (e.g. test messages that SQS sends automatically on queue creation, messages for a different bucket, or messages for non-included files), we will delete them upon receipt, to avoid repeatedly receiving irrelevant messages after they're redelivered.

  :::
### Schema

The output is a [*KTable*](/docs/core/data_types#ktable) with the following sub fields:

* `filename` (*Str*, key): the filename of the file, including the path, relative to the root directory, e.g. `"dir1/file1.md"`.
* `content` (*Str* if `binary` is `False`, otherwise *Bytes*): the content of the file.
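As a sketch only, with placeholder bucket name, prefix, and queue URL, an `AmazonS3` source with SQS-based change capture could be declared like this inside a flow definition:

```python
import cocoindex

@cocoindex.flow_def(name="S3Demo")
def s3_demo_flow(
    flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope
) -> None:
    # Import markdown files under the "docs/" prefix of the bucket as text,
    # and receive S3 change event notifications via a dedicated SQS queue.
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.AmazonS3(
            bucket_name="my-bucket",    # placeholder
            prefix="docs/",             # placeholder
            binary=False,
            included_patterns=["*.md"],
            # placeholder queue URL
            sqs_queue_url="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
        )
    )
```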
## AzureBlob

The `AzureBlob` source imports files from Azure Blob Storage.

### Setup for Azure Blob Storage

#### Get Started

If you don't have experience with Azure Blob Storage, you can refer to the [quickstart](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal).
These are the actions you need to take:

* Create a storage account in the [Azure Portal](https://portal.azure.com/).
* Create a container in the storage account.
* Upload your files to the container.
* Grant the user / identity / service principal (depending on your authentication method, see below) access to the storage account. At minimum, a **Storage Blob Data Reader** role is needed. See [this doc](https://learn.microsoft.com/en-us/azure/storage/blobs/authorize-data-operations-portal) for reference.
#### Authentication

We support the following authentication methods:

* Shared access signature (SAS) tokens.
  You can generate one from the Azure Portal in the settings for a specific container.
  You need to provide at least *List* and *Read* permissions when generating the SAS token.
  It's a query string in the form of
  `sp=rl&st=2025-07-20T09:33:00Z&se=2025-07-19T09:48:53Z&sv=2024-11-04&sr=c&sig=i3FDjsadfklj3%23adsfkk`.

* Storage account access key. You can find it in the Azure Portal in the settings for a specific storage account.

* Default credential. When none of the above is provided, it will use the default credential.

  This allows you to connect to Azure services without putting any secrets in the code or flow spec.
  It automatically chooses the best authentication method based on your environment:

  * On your local machine: uses your Azure CLI login (`az login`) or environment variables.

    ```sh
    az login
    # Optional: Set a default subscription if you have more than one
    az account set --subscription "<YOUR_SUBSCRIPTION_NAME_OR_ID>"
    ```
  * In Azure (VM, App Service, AKS, etc.): uses the resource's Managed Identity.
  * In automated environments: supports Service Principals via environment variables:
    * `AZURE_CLIENT_ID`
    * `AZURE_TENANT_ID`
    * `AZURE_CLIENT_SECRET`

You can refer to [this doc](https://learn.microsoft.com/en-us/azure/developer/python/sdk/authentication/overview) for more details.
### Spec

The spec takes the following fields:

* `account_name` (`str`): the name of the storage account.
* `container_name` (`str`): the name of the container.
* `prefix` (`str`, optional): if provided, only files with path starting with this prefix will be imported.
* `binary` (`bool`, optional): whether to read files as binary (instead of text).
* `included_patterns` (`list[str]`, optional): a list of glob patterns to include files, e.g. `["*.txt", "docs/**/*.md"]`.
  If not specified, all files will be included.
* `excluded_patterns` (`list[str]`, optional): a list of glob patterns to exclude files, e.g. `["*.tmp", "**/*.log"]`.
  Any file or directory matching these patterns will be excluded even if they match `included_patterns`.
  If not specified, no files will be excluded.
* `sas_token` (`cocoindex.TransientAuthEntryReference[str]`, optional): a SAS token for authentication.
* `account_access_key` (`cocoindex.TransientAuthEntryReference[str]`, optional): an account access key for authentication.

:::info

`included_patterns` and `excluded_patterns` use Unix-style glob syntax. See [globset syntax](https://docs.rs/globset/latest/globset/index.html#syntax) for details.

:::

### Schema

The output is a [*KTable*](/docs/core/data_types#ktable) with the following sub fields:

* `filename` (*Str*, key): the filename of the file, including the path, relative to the root directory, e.g. `"dir1/file1.md"`.
* `content` (*Str* if `binary` is `False`, otherwise *Bytes*): the content of the file.
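For illustration, here's a sketch of an `AzureBlob` source authenticated with a SAS token. The account, container, and environment variable names are placeholders, and we assume `cocoindex.add_transient_auth_entry` accepts a plain string here, mirroring its use with `DatabaseConnectionSpec` in the Postgres section below:

```python
import os

import cocoindex

@cocoindex.flow_def(name="AzureBlobDemo")
def azure_blob_demo_flow(
    flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope
) -> None:
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.AzureBlob(
            account_name="mystorageaccount",  # placeholder
            container_name="docs",            # placeholder
            # Keep the secret out of the flow spec by referencing it
            # through a transient auth entry (placeholder env var).
            sas_token=cocoindex.add_transient_auth_entry(
                os.environ["AZURE_BLOB_SAS_TOKEN"]
            ),
        )
    )
```

If neither `sas_token` nor `account_access_key` is provided, the default credential described above is used instead.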
## GoogleDrive

The `GoogleDrive` source imports files from Google Drive.

### Setup for Google Drive

To access files in Google Drive, the `GoogleDrive` source needs to authenticate with a service account.

1. Register / log in to **Google Cloud**.
2. In the [**Google Cloud Console**](https://console.cloud.google.com/), search for *Service Accounts* to enter the *IAM & Admin / Service Accounts* page.
    - **Create a new service account**: Click *+ Create Service Account*. Follow the instructions to finish service account creation.
    - **Add a key and download the credential**: Under "Actions" for this new service account, click *Manage keys* → *Add key* → *Create new key* → *JSON*.
      Download the key file to a safe place.
3. In the **Google Cloud Console**, search for *Google Drive API*. Enable this API.
4. In **Google Drive**, share the folders containing files that need to be imported through your source with the service account's email address.
   **Viewer permission** is sufficient.
    - The email address can be found on the *IAM & Admin / Service Accounts* page (from Step 2), in the format `{service-account-id}@{gcp-project-id}.iam.gserviceaccount.com`.
    - Copy the folder ID. The folder ID can be found in the last part of the folder's URL, e.g. `https://drive.google.com/drive/u/0/folders/{folder-id}` or `https://drive.google.com/drive/folders/{folder-id}?usp=drive_link`.
### Spec

The spec takes the following fields:

* `service_account_credential_path` (`str`): full path to the service account credential file in JSON format.
* `root_folder_ids` (`list[str]`): a list of Google Drive folder IDs to import files from.
* `binary` (`bool`, optional): whether to read files as binary (instead of text).
* `recent_changes_poll_interval` (`datetime.timedelta`, optional): when set, this source provides a change capture mechanism by periodically polling Google Drive for recently modified files.

:::info

Since polling only retrieves metadata for files modified since the previous poll,
it's typically cheaper than a full refresh via the [refresh interval](/docs/core/flow_def#refresh-interval), especially when the folder contains a large number of files.
So you can usually set it to a smaller value than the `refresh_interval`.

On the other hand, it only detects changes for files that still exist.
If a file is deleted (or the current account loses access to it), this change will not be detected by this change stream.

So when a `GoogleDrive` source has `recent_changes_poll_interval` enabled, it's still recommended to set a `refresh_interval` with a larger value.
That way, most changes are covered by polling recent changes (with low latency, e.g. 10 seconds), and the remaining changes (files no longer existing or accessible) are still covered (with a higher latency, e.g. 5 minutes, and larger if you have a huge number of files, like 1M).
In practice, configure them based on your requirement: how fresh does the target index need to be?

:::

### Schema

The output is a [*KTable*](/docs/core/data_types#ktable) with the following sub fields:

* `file_id` (*Str*, key): the ID of the file in Google Drive.
* `filename` (*Str*): the filename of the file, without the path, e.g. `"file1.md"`
* `mime_type` (*Str*): the MIME type of the file.
* `content` (*Str* if `binary` is `False`, otherwise *Bytes*): the content of the file.
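An illustrative sketch combining `recent_changes_poll_interval` with a larger `refresh_interval`: the credential path, folder ID, and intervals are placeholders, and we assume `refresh_interval` is passed to `add_source` as described in the [refresh interval](/docs/core/flow_def#refresh-interval) docs:

```python
import datetime

import cocoindex

@cocoindex.flow_def(name="GoogleDriveDemo")
def google_drive_demo_flow(
    flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope
) -> None:
    data_scope["documents"] = flow_builder.add_source(
        cocoindex.sources.GoogleDrive(
            service_account_credential_path="/path/to/service-account-key.json",  # placeholder
            root_folder_ids=["FOLDER_ID_1"],  # placeholder
            # Poll frequently for recently modified files (low latency).
            recent_changes_poll_interval=datetime.timedelta(seconds=10),
        ),
        # Keep a less frequent full refresh to catch deletions and lost access
        # (assumed `add_source` argument, per the refresh-interval docs).
        refresh_interval=datetime.timedelta(minutes=5),
    )
```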
## Postgres

The `Postgres` source imports rows from a PostgreSQL table.

### Setup for PostgreSQL

* Ensure the table exists and has a primary key. Tables without a primary key are not supported.
* Grant the connecting user read permissions on the target table (e.g. `SELECT`).
* Provide a database connection. You can:
  * Use CocoIndex's default database connection, or
  * Provide an explicit connection via a transient auth entry referencing a `DatabaseConnectionSpec` with a `url`, for example:

    ```python
    cocoindex.add_transient_auth_entry(
        cocoindex.sources.DatabaseConnectionSpec(
            url="postgres://user:password@host:5432/dbname?sslmode=require",
        )
    )
    ```
### Spec

The spec takes the following fields:

* `table_name` (`str`): the PostgreSQL table to read from.
* `database` (`cocoindex.TransientAuthEntryReference[DatabaseConnectionSpec]`, optional): database connection reference. If not provided, the default CocoIndex database is used.
* `included_columns` (`list[str]`, optional): non-primary-key columns to include. If not specified, all non-PK columns are included.
* `ordinal_column` (`str`, optional): a non-primary-key column used for change tracking and ordering, e.g. a modified timestamp or a monotonic version number. Supported types are integer-like (`bigint`/`integer`) and timestamps (`timestamp`, `timestamptz`).
  `ordinal_column` must not be a primary key column.
* `notification` (`cocoindex.sources.PostgresNotification`, optional): when present, enables change capture based on Postgres LISTEN/NOTIFY. It has the following fields:
  * `channel_name` (`str`, optional): the Postgres notification channel to listen on. CocoIndex will automatically create the channel with the given name. If omitted, CocoIndex uses `{flow_name}__{source_name}__cocoindex`.

:::info

If `notification` is provided, CocoIndex listens for row changes using Postgres LISTEN/NOTIFY and creates the required database objects on demand when the flow starts listening:

- Function to create the notification message: `{channel_name}_n`.
- Trigger to react to table changes: `{channel_name}_t` on the specified `table_name`.

Creation is automatic when listening begins.

Currently, CocoIndex doesn't automatically clean up these objects when the flow is dropped (unlike targets).
It's usually OK to leave them as they are, but if you want to clean them up, you can run the following SQL statements to manually drop them:

```sql
DROP TRIGGER IF EXISTS {channel_name}_t ON "{table_name}";
DROP FUNCTION IF EXISTS {channel_name}_n();
```

:::
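A minimal sketch of a `Postgres` source spec using LISTEN/NOTIFY change capture; the table name, column name, and channel name are placeholders, and the keyword arguments simply mirror the fields above:

```python
import cocoindex

@cocoindex.flow_def(name="PostgresDemo")
def postgres_demo_flow(
    flow_builder: cocoindex.FlowBuilder, data_scope: cocoindex.DataScope
) -> None:
    data_scope["products"] = flow_builder.add_source(
        cocoindex.sources.Postgres(
            table_name="products",  # placeholder
            # Omit `database` to use the default CocoIndex database, or pass a
            # transient auth entry as shown in the setup section above.
            ordinal_column="updated_at",  # assumed timestamptz column
            notification=cocoindex.sources.PostgresNotification(
                channel_name="products_cocoindex",  # optional; placeholder
            ),
        )
    )
```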
### Schema

The output is a [*KTable*](/docs/core/data_types#ktable) with a straightforward one-to-one mapping from Postgres table columns to CocoIndex table fields:

* Key fields: all primary key columns in the Postgres table are included automatically as key fields.
* Value fields: all non-primary-key columns in the Postgres table (those in `included_columns`, or all of them when not specified) appear as value fields.

### Example

You can find an end-to-end example using the Postgres source at:

* [examples/postgres_source](https://github.com/cocoindex-io/cocoindex/tree/main/examples/postgres_source)
