Merged
2 changes: 1 addition & 1 deletion _partials/_dimensions_info.md
@@ -8,7 +8,7 @@ to an existing hypertable.
#### Samples

Hypertables must always have a primary range dimension, followed by an arbitrary number of additional
- dimensions that can be either range or hash, Typically this is just one hash. For example:
+ dimensions that can be either range or hash. Typically, this is just one hash. For example:

```sql
SELECT add_dimension('conditions', by_range('time'));
```
@@ -86,4 +86,4 @@ Because each `batch` is an individual transaction, executing a policy in batches
[concurrent-refresh-policies]: /use-timescale/:currentVersion:/continuous-aggregates/refresh-policies/
[informational-views]: /api/:currentVersion:/informational-views/jobs/
[real-time-aggregation]: /use-timescale/:currentVersion:/continuous-aggregates/real-time-aggregates/
- [utc-bucketing]: https://www.tigerdata.com/docs/use-timescale/:currentVersion:/time-buckets/about-time-buckets/
+ [utc-bucketing]: /use-timescale/:currentVersion:/time-buckets/about-time-buckets/#timezones
11 changes: 5 additions & 6 deletions api/hypertable/create_hypertable.md
@@ -167,27 +167,26 @@ Subsequent data insertion and queries automatically leverage the UUIDv7-based pa
| Name | Type | Default | Required | Description |
|-------------|------------------|---------|-|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|`create_default_indexes`| `BOOLEAN` | `TRUE` | ✖ | Create default indexes on time/partitioning columns. |
- |`dimension`| [DIMENSION_INFO][dimension-info] | - | ✔ | To create a `_timescaledb_internal.dimension_info` instance to partition a hypertable, you call [`by_range`][by-range] and [`by_hash`][by-hash]. |
+ |`dimension`| `DIMENSION_INFO` | - | ✔ | To create a `_timescaledb_internal.dimension_info` instance to partition a hypertable, you call [`by_range`][by-range] and [`by_hash`][by-hash]. **Note**: best practice is to not use additional dimensions, especially on $CLOUD_LONG. |
|`if_not_exists` | `BOOLEAN` | `FALSE` | ✖ | Set to `TRUE` to print a warning if `relation` is already a hypertable. By default, an exception is raised. |
|`migrate_data`| `BOOLEAN` | `FALSE` | ✖ | Set to `TRUE` to migrate any existing data in `relation` into chunks in the new hypertable. Depending on the amount of data to be migrated, setting `migrate_data` can lock the table for a significant amount of time. If there are [foreign key constraints][foreign-key-constraings] to other tables in the data to be migrated, `create_hypertable()` can run into deadlock. A hypertable can only contain foreign keys to another hypertable. `UNIQUE` and `PRIMARY` constraints must include the partitioning key. <br></br> Deadlock may happen when concurrent transactions simultaneously try to insert data into tables that are referenced in the foreign key constraints, and into the converting table itself. To avoid deadlock, manually obtain a [SHARE ROW EXCLUSIVE][share-row-exclusive] lock on the referenced tables before you call `create_hypertable` in the same transaction. <br></br> If you leave `migrate_data` set to the default, non-empty tables generate an error when you call `create_hypertable`. |
|`relation`| `REGCLASS` | - | ✔ | Identifier of the table to convert to a hypertable. |
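
The deadlock-avoidance advice for `migrate_data` can be sketched as follows. This is a hypothetical example: `measurements` and `devices` stand in for your own tables, and it assumes `measurements` is non-empty and has a foreign key referencing `devices`:

```sql
BEGIN;
-- Block concurrent writes to the referenced table for the duration of the migration
LOCK TABLE devices IN SHARE ROW EXCLUSIVE MODE;
-- Convert the non-empty table in the same transaction
SELECT create_hypertable('measurements', by_range('time'), migrate_data => TRUE);
COMMIT;
```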


<DimensionInfo />

## Returns

|Column|Type| Description |
|-|-|-------------------------------------------------------------------------------------------------------------|
|`hypertable_id`|`INTEGER`| The ID of the hypertable you created. |
|`created`|`BOOLEAN`| `TRUE` when the hypertable is created. `FALSE` when `if_not_exists` is `TRUE` and no hypertable was created. |
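
For instance, a minimal call showing how the return columns interact with `if_not_exists` (a sketch assuming a `conditions` table with a `time` column):

```sql
-- The first call creates the hypertable and returns created = TRUE.
-- Repeating the call with if_not_exists => TRUE returns created = FALSE
-- instead of raising an exception.
SELECT hypertable_id, created
FROM create_hypertable('conditions', by_range('time'), if_not_exists => TRUE);
```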

[add-dimension]: /api/:currentVersion:/hypertable/add_dimension
[api-create-hypertable-arguments]: /api/:currentVersion:/hypertable/create_hypertable/#arguments
- [by-hash]: /api/:currentVersion:/hypertable/create_hypertable/#by_hash
- [by-range]: /api/:currentVersion:/hypertable/create_hypertable/#by_range
+ [by-range]: /api/:currentVersion:/hypertable/add_dimension/#by_range
+ [by-hash]: /api/:currentVersion:/hypertable/add_dimension/#by_hash
[chunk_interval]: /api/:currentVersion:/hypertable/set_chunk_time_interval/
[declarative-partitioning]: https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE
[dimension-info]: /api/:currentVersion:/hypertable/create_hypertable/#dimension-info
[foreign-key-constraings]: /use-timescale/:currentVersion:/schema-management/about-constraints/
[hypertable-create-table]: /api/:currentVersion:/hypertable/create_table/
[hypertables-section]: /use-timescale/:currentVersion:/hypertables/
17 changes: 1 addition & 16 deletions integrations/debezium.md
@@ -38,10 +38,6 @@ This page explains how to capture changes in your database and stream them using

## Configure your database to work with Debezium

- <Tabs label="Integrate with Debezium" persistKey="source-database">
- 
- <Tab title="Self-hosted TimescaleDB" label="self-hosted">

To set up $SELF_LONG to communicate with Debezium:

<Procedure>
@@ -60,18 +56,7 @@ Set up Kafka Connect server, plugins, drivers, and connectors:

</Procedure>
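
For reference, the server-side prerequisite for Debezium's PostgreSQL connector boils down to logical decoding. A minimal sketch for self-hosted $TIMESCALE_DB, assuming superuser access (the change requires a server restart):

```sql
-- Debezium's PostgreSQL connector reads the logical replication stream
ALTER SYSTEM SET wal_level = 'logical';
-- Restart PostgreSQL, then verify the setting took effect:
SHOW wal_level;
```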

- </Tab>
- 
- <Tab title="Tiger Cloud" label="tiger-cloud">
- 
- Debezium requires logical replication to be enabled. Currently, this is not enabled by default on $SERVICE_LONGs.
- We are working on enabling this feature as you read. As soon as it is live, these docs will be updated.
- 
- </Tab>
- 
- </Tabs>

- And that is it, you have configured Debezium to interact with $COMPANY products.
+ And that is it, you have configured Debezium to interact with $TIMESCALE_DB.

[caggs]: /use-timescale/:currentVersion:/continuous-aggregates/
[debezium]: https://debezium.io/
2 changes: 1 addition & 1 deletion migrate/troubleshooting.md
@@ -112,7 +112,7 @@ for live migration to work smoothly.

## Can I use a $CLOUD_LONG instance as a source for live migration?

- No, $CLOUD_LONG cannot be used as a source database for live migration.
+ Yes, but logical replication must be enabled first. [Contact us](mailto:support@tigerdata.com) to enable it.
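
If you are unsure whether logical replication is already enabled on your instance, you can check the setting yourself before contacting support:

```sql
-- Must return 'logical' for the instance to serve as a live-migration source
SHOW wal_level;
```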


## How can I exclude a schema/table from being replicated in live migration?