src/content/docs/d1/best-practices/import-export-data.mdx (1 addition & 1 deletion)

@@ -7,7 +7,7 @@ sidebar:
D1 allows you to import existing SQLite tables and their data directly, enabling you to migrate existing data into D1 quickly and easily. This can be useful when migrating applications to use Workers and D1, or when you want to prototype a schema locally before importing it to your D1 database(s).

- D1 also allows you to export a database. This can be useful for [local development](/d1/features/local-development/) or testing.
+ D1 also allows you to export a database. This can be useful for [local development](/d1/best-practices/local-development/) or testing.
@@ -160,13 +160,13 @@ async function resetTables(session: D1DatabaseSession) {
When using D1 without read replication, D1 routes all queries (both read and write) to a specific database instance in [one location in the world](/d1/configuration/data-location/), known as the primary database instance. D1 request latency is dependent on the physical closeness of a user to the primary database instance. Users located further away from the primary database instance experience longer request latency due to [network round-trip time](https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/).

- When using read replication, D1 introduces multiple “almost up-to-date” copies of the primary database instance which only serve read requests, called <GlossaryTooltip term="read replica">read replicas</GlossaryTooltip>. D1 creates the read replicas in multiple regions throughout the world [across the Cloudflare network](/d1/features/read-replication/#read-replica-locations).
+ When using read replication, D1 introduces multiple “almost up-to-date” copies of the primary database instance which only serve read requests, called <GlossaryTooltip term="read replica">read replicas</GlossaryTooltip>. D1 creates the read replicas in multiple regions throughout the world [across the Cloudflare network](/d1/best-practices/read-replication/#read-replica-locations).

A user may be located far away from the primary database instance but close to a read replica. When D1 routes read requests to the read replica instead of the primary database instance, the user enjoys shorter read request response times.

D1 asynchronously replicates changes from the primary database instance to all read replicas. This means that at any given time, a read replica may be arbitrarily out of date. The time it takes for the latest committed data in the primary database instance to be replicated to the read replica is known as the <GlossaryTooltip term="replica lag">replica lag</GlossaryTooltip>.

- It is replica lag and non-deterministic routing to individual replicas that can lead to application data consistency issues, which the Sessions API helps handle by ensuring sequential consistency. For more information, refer to [Replica lag and consistency model](/d1/features/read-replication/#replica-lag-and-consistency-model).
+ It is replica lag and non-deterministic routing to individual replicas that can lead to application data consistency issues, which the Sessions API helps handle by ensuring sequential consistency. For more information, refer to [Replica lag and consistency model](/d1/best-practices/read-replication/#replica-lag-and-consistency-model).

:::note
All write queries are still forwarded to the primary database instance. Read replication only improves the query response time for read requests.
@@ -379,7 +379,7 @@ const result = await session.run()
{/* #### Example of using `bookmark`

- This example follows from [Example of using `first-primary`](/d1/features/read-replication/#example-of-using-first-primary), but retrieves the `bookmark` from an HTTP cookie.
+ This example follows from [Example of using `first-primary`](/d1/best-practices/read-replication/#example-of-using-first-primary), but retrieves the `bookmark` from an HTTP cookie.

```ts collapse={1-10, 22-42, 61-86}
import { ListBillStatementsResult, GetBillStatementResult, Bill } from './types';
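The bookmark flow described above can be sketched roughly as follows in a Worker. This is a minimal illustration rather than the documentation's own sample; the `DB` binding name, the `d1-bookmark` cookie name, and the `Orders` table are assumptions made for this sketch.

```ts
// Minimal sketch: resume a D1 session from a bookmark stored in a cookie so that
// reads are at least as fresh as the data this client last observed.
export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    const cookie = request.headers.get("Cookie") ?? "";
    // Hypothetical cookie name used only for this sketch.
    const bookmark = cookie.match(/d1-bookmark=([^;]+)/)?.[1] ?? "first-unconstrained";

    // Start (or resume) a session; queries within it are sequentially consistent.
    const session = env.DB.withSession(bookmark);
    const { results } = await session
      .prepare("SELECT * FROM Orders WHERE customerId = ?")
      .bind("customer_01")
      .all();

    // Hand the latest bookmark back so the next request can resume from here.
    const headers = new Headers({ "Content-Type": "application/json" });
    headers.append("Set-Cookie", `d1-bookmark=${session.getBookmark() ?? ""}; Path=/`);
    return new Response(JSON.stringify(results), { headers });
  },
};
```

Reusing the returned bookmark on subsequent requests keeps a client's reads at least as up to date as its last observed state, regardless of which replica serves each query.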
src/content/docs/d1/configuration/data-location.mdx (1 addition & 1 deletion)

@@ -75,4 +75,4 @@ With read replication enabled, D1 creates and distributes read-only copies of th
When using D1 read replication, D1 automatically creates a read replica in [every available region](/d1/configuration/data-location#available-location-hints), including the region where the primary database instance is located.

- Refer to [D1 read replication](/d1/features/read-replication/) for more information.
+ Refer to [D1 read replication](/d1/best-practices/read-replication/) for more information.
src/content/docs/d1/index.mdx (1 addition & 1 deletion)

@@ -26,7 +26,7 @@ D1 is Cloudflare's managed, serverless database with SQLite's SQL semantics, bui
D1 is designed for horizontal scale out across multiple, smaller (10 GB) databases, such as per-user, per-tenant or per-entity databases. D1 allows you to build applications with thousands of databases at no extra cost for isolating data across multiple databases. D1 pricing is based only on query and storage costs.

- Create your first D1 database by [following the Get started guide](/d1/get-started/), learn how to [import data into a database](/d1/features/import-export-data/), and how to [interact with your database](/d1/worker-api/) directly from [Workers](/workers/) or [Pages](/pages/functions/bindings/#d1-databases).
+ Create your first D1 database by [following the Get started guide](/d1/get-started/), learn how to [import data into a database](/d1/best-practices/import-export-data/), and how to [interact with your database](/d1/worker-api/) directly from [Workers](/workers/) or [Pages](/pages/functions/bindings/#d1-databases).
src/content/docs/d1/observability/metrics-analytics.mdx (2 additions & 2 deletions)

@@ -32,7 +32,7 @@ Metrics can be queried (and are retained) for the past 31 days.
D1 returns the number of rows read, rows written (or both) in response to each individual query via [the Workers Binding API](/d1/worker-api/return-object/).

Row counts are a precise count of how many rows were read (scanned) or written by that query.

- Inspect row counts to understand the performance and cost of a given query, including whether you can reduce the rows read [using indexes](/d1/features/use-indexes/). Use query counts to understand the total volume of traffic against your databases and to discern which databases are actively in-use.
+ Inspect row counts to understand the performance and cost of a given query, including whether you can reduce the rows read [using indexes](/d1/best-practices/use-indexes/). Use query counts to understand the total volume of traffic against your databases and to discern which databases are actively in-use.

Refer to the [Pricing documentation](/d1/platform/pricing/) for more details on how rows are counted.

The quantity `queryEfficiency` measures how efficient your query was. It is calculated as: the number of rows returned divided by the number of rows read.

- Generally, you should try to get `queryEfficiency` as close to `1` as possible. Refer to [Use indexes](/d1/features/use-indexes/) for more information on efficient querying.
+ Generally, you should try to get `queryEfficiency` as close to `1` as possible. Refer to [Use indexes](/d1/best-practices/use-indexes/) for more information on efficient querying.
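As a rough illustration of the row counts and `queryEfficiency` described above, the per-query figures are exposed on the `meta` field of the Workers binding result. The `DB` binding, the `users` table, and the filter value below are placeholders.

```ts
// Sketch: compute query efficiency (rows returned / rows read) for one query.
export default {
  async fetch(_request: Request, env: { DB: D1Database }): Promise<Response> {
    // Placeholder table and filter; an index on users.email would reduce rows read.
    const { results, meta } = await env.DB
      .prepare("SELECT * FROM users WHERE email = ?")
      .bind("user@example.com")
      .all();

    // meta.rows_read and meta.rows_written are the same counts surfaced in metrics.
    const rowsRead = meta.rows_read ?? 0;
    const queryEfficiency = rowsRead > 0 ? results.length / rowsRead : 1;
    return Response.json({ rowsRead, rowsWritten: meta.rows_written, queryEfficiency });
  },
};
```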
src/content/docs/d1/platform/pricing.mdx (1 addition & 1 deletion)

@@ -64,7 +64,7 @@ Yes, any queries you run against your database, including inserting (`INSERT`) e
### Can I use an index to reduce the number of rows read by a query?

- Yes, you can use an index to reduce the number of rows read by a query. [Creating indexes](/d1/features/use-indexes/) for your most queried tables and filtered columns reduces how much data is scanned and improves query performance at the same time. If you have a read-heavy workload (most common), this can be particularly advantageous. Writing to columns referenced in an index will add at least one (1) additional row written to account for updating the index, but this is typically offset by the reduction in rows read due to the benefits of an index.
+ Yes, you can use an index to reduce the number of rows read by a query. [Creating indexes](/d1/best-practices/use-indexes/) for your most queried tables and filtered columns reduces how much data is scanned and improves query performance at the same time. If you have a read-heavy workload (most common), this can be particularly advantageous. Writing to columns referenced in an index will add at least one (1) additional row written to account for updating the index, but this is typically offset by the reduction in rows read due to the benefits of an index.

### Does a freshly created database, and/or an empty table with no rows, contribute to my storage?

- The database backup will be downloaded to the current working directory in native SQLite3 format. To import a local database, read [the documentation on importing data](/d1/features/import-export-data/) to D1.
+ The database backup will be downloaded to the current working directory in native SQLite3 format. To import a local database, read [the documentation on importing data](/d1/best-practices/import-export-data/) to D1.
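A hypothetical sketch of the index advice in the pricing FAQ above, assuming a Worker with a `DB` binding and an `orders` table that is frequently filtered by `customer_id`:

```ts
// Sketch: ensure an index exists (normally done once in a migration), then run
// the filtered query and compare rows scanned against rows returned.
async function ordersForCustomer(db: D1Database, customerId: string) {
  await db.exec("CREATE INDEX IF NOT EXISTS idx_orders_customer_id ON orders (customer_id)");

  const { results, meta } = await db
    .prepare("SELECT id, total FROM orders WHERE customer_id = ?")
    .bind(customerId)
    .all();

  // With the index, rows_read stays close to results.length; without it, the
  // query scans every row in orders (and is billed accordingly).
  console.log(`rows_read=${meta.rows_read}, returned=${results.length}`);
  return results;
}
```

The single extra row written when an indexed column changes is the trade-off the FAQ answer above describes.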
src/content/docs/d1/reference/generated-columns.mdx (1 addition & 1 deletion)

@@ -10,7 +10,7 @@ D1 allows you to define generated columns based on the values of one or more oth
This allows you to normalize your data as you write to it or read it from a table, making it easier to query and reducing the need for complex application logic.

- Generated columns can also have [indexes defined](/d1/features/use-indexes/) against them, which can dramatically increase query performance over frequently queried fields.
+ Generated columns can also have [indexes defined](/d1/best-practices/use-indexes/) against them, which can dramatically increase query performance over frequently queried fields.
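To make that concrete, a generated column with an index defined against it might look like the following sketch; the table, column names, and JSON path are invented for illustration.

```ts
// Sketch: a JSON-backed table with a generated column extracted from the payload,
// plus an index on that column so lookups by email avoid scanning every row.
async function createLoginsTable(db: D1Database) {
  await db.exec(
    "CREATE TABLE IF NOT EXISTS logins (" +
      "id INTEGER PRIMARY KEY, " +
      "payload TEXT NOT NULL, " +
      // VIRTUAL generated column computed from the JSON payload when read.
      "email TEXT GENERATED ALWAYS AS (json_extract(payload, '$.email')) VIRTUAL)"
  );
  await db.exec("CREATE INDEX IF NOT EXISTS idx_logins_email ON logins (email)");
}
```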
src/content/docs/d1/sql-api/foreign-keys.mdx (1 addition & 1 deletion)

@@ -16,7 +16,7 @@ By default, D1 enforces that foreign key constraints are valid within all querie
## Defer foreign key constraints

- When running a [query](/d1/worker-api/), [migration](/d1/reference/migrations/) or [importing data](/d1/features/import-export-data/) against a D1 database, there may be situations in which you need to disable foreign key validation during table creation or changes to your schema.
+ When running a [query](/d1/worker-api/), [migration](/d1/reference/migrations/) or [importing data](/d1/best-practices/import-export-data/) against a D1 database, there may be situations in which you need to disable foreign key validation during table creation or changes to your schema.

D1's foreign key enforcement is equivalent to SQLite's `PRAGMA foreign_keys = on` directive. Because D1 runs every query inside an implicit transaction, user queries cannot change this during a query or migration.
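One way such an out-of-order import can be handled is sketched below, assuming D1 accepts SQLite's `defer_foreign_keys` pragma inside a batch's implicit transaction; the `posts` and `authors` tables are invented for illustration.

```ts
// Sketch: defer foreign key checks so rows can be inserted out of order within a
// single transaction; the checks run when the batch's transaction commits.
async function importOutOfOrder(db: D1Database) {
  await db.batch([
    db.prepare("PRAGMA defer_foreign_keys = true"),
    // Child row first: its author_id does not exist yet at this point.
    db.prepare("INSERT INTO posts (id, author_id, title) VALUES (1, 10, 'hello')"),
    // Parent row arrives later in the same transaction, satisfying the constraint.
    db.prepare("INSERT INTO authors (id, name) VALUES (10, 'Ada Lovelace')"),
  ]);
}
```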