`src/content/docs/autorag/configuration/data-source/website.mdx`
Refer to [Onboard a domain](/fundamentals/manage-domains/add-site/) for more information.
:::
## How website crawling works
When you connect a domain, the crawler looks for your website’s sitemap to determine which pages to visit:
1. The crawler first checks `robots.txt` for listed sitemaps. If the file exists, the crawler reads every sitemap listed inside.
2. If no `robots.txt` is found, the crawler checks for a sitemap at `/sitemap.xml`.
3. If no sitemap is available, the domain cannot be crawled.
Pages are visited in order of the `<priority>` value set in your sitemap, if this field is defined.
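For reference, a `robots.txt` lists sitemaps with `Sitemap:` lines such as `Sitemap: https://example.com/sitemap.xml`. Below is a minimal sketch of a sitemap using the optional `<priority>` field; the URLs and values are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- <priority> is optional and ranges from 0.0 to 1.0 (default 0.5) -->
  <url>
    <loc>https://example.com/docs/getting-started</loc>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://example.com/docs/changelog</loc>
    <priority>0.5</priority>
  </url>
</urlset>
```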
## Parsing options
You can choose how pages are parsed during crawling:
## Storage
During setup, AutoRAG creates a dedicated R2 bucket in your account to store the pages that have been crawled and downloaded as HTML files. This bucket is automatically managed and is used only for content discovered by the crawler. Any files or objects that you add directly to this bucket will not be indexed.
We recommend not modifying the bucket, as doing so may disrupt the indexing flow and cause content to not be updated properly.
## Sync and updates
During scheduled or manual [sync jobs](/autorag/configuration/indexing/), the crawler checks for changes to the `<lastmod>` field in your sitemap. If the date is later than the last sync date, the page is recrawled, the updated version is stored in the R2 bucket, and it is automatically reindexed so that your search results always reflect the latest content.
If the `<lastmod>` field is not defined, AutoRAG automatically crawls each link defined in the sitemap once a day.
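As a minimal illustration (placeholder URL and dates): if the previous sync ran on 2025-01-10, the entry below would be recrawled and reindexed on the next sync, because its `<lastmod>` date is later than that sync:

```xml
<url>
  <loc>https://example.com/blog/launch-announcement</loc>
  <!-- Later than the last sync date (2025-01-10), so this page is recrawled -->
  <lastmod>2025-01-15</lastmod>
</url>
```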
## Limits
The regular AutoRAG [limits](/autorag/platform/limits-pricing/) apply when using the Website data source.