Commit 0141fbc

Merge branch 'latest' into rest-api
2 parents ddba359 + 492b465

File tree

7 files changed (+25, -17 lines)
Lines changed: 1 addition & 1 deletion

@@ -89,7 +89,7 @@ If the column to be partitioned is a:
 - Another integer type: specify `partition_interval` as an integer that reflects the column's
   underlying semantics. For example, if this column is in UNIX time, specify `partition_interval` in milliseconds.
 
-The partition type and default value depending on column type is:
+The partition type and default value depending on column type is:<a id="partition-types" href=""></a>
 
 | Column Type | Partition Type | Default value |
 |------------------------------|------------------|---------------|
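The context line above notes that a UNIX-time column takes `partition_interval` in the column's own unit, for example milliseconds. A quick sanity check of that millisecond arithmetic (illustrative JavaScript only; the constant names are mine, not from the docs):

```javascript
// Illustrative only: for a column storing UNIX time in milliseconds,
// a one-day `partition_interval` must itself be given in milliseconds.
const MS_PER_SECOND = 1000;
const SECONDS_PER_DAY = 24 * 60 * 60; // 86,400 seconds in a day
const ONE_DAY_INTERVAL_MS = SECONDS_PER_DAY * MS_PER_SECOND;

console.log(ONE_DAY_INTERVAL_MS); // 86400000
```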

_partials/_timescaledb-gucs.md

Lines changed: 9 additions & 8 deletions

@@ -1,7 +1,10 @@
 | Name | Type | Default | Description |
 | -- | -- | -- | -- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `GUC_CAGG_HIGH_WORK_MEM_NAME` | `INTEGER` | `GUC_CAGG_HIGH_WORK_MEM_VALUE` | The high working memory limit for the continuous aggregate invalidation processing.<br />min: `64`, max: `MAX_KILOBYTES` |
+| `GUC_CAGG_LOW_WORK_MEM_NAME` | `INTEGER` | `GUC_CAGG_LOW_WORK_MEM_VALUE` | The low working memory limit for the continuous aggregate invalidation processing.<br />min: `64`, max: `MAX_KILOBYTES` |
 | `auto_sparse_indexes` | `BOOLEAN` | `true` | The hypertable columns that are used as index keys will have suitable sparse indexes when compressed. Must be set at the moment of chunk compression, e.g. when the `compress_chunk()` is called. |
 | `bgw_log_level` | `ENUM` | `WARNING` | Log level for the scheduler and workers of the background worker subsystem. Requires configuration reload to change. |
+| `cagg_processing_wal_batch_size` | `INTEGER` | `10000` | Number of entries processed from the WAL at a go. Larger values take more memory but might be more efficient.<br />min: `1000`, max: `10000000` |
 | `compress_truncate_behaviour` | `ENUM` | `COMPRESS_TRUNCATE_ONLY` | Defines how truncate behaves at the end of compression. 'truncate_only' forces truncation. 'truncate_disabled' deletes rows instead of truncate. 'truncate_or_delete' allows falling back to deletion. |
 | `compression_batch_size_limit` | `INTEGER` | `1000` | Setting this option to a number between 1 and 999 will force compression to limit the size of compressed batches to that amount of uncompressed tuples.Setting this to 0 defaults to the max batch size of 1000.<br />min: `1`, max: `1000` |
 | `compression_orderby_default_function` | `STRING` | `"_timescaledb_functions.get_orderby_defaults"` | Function to use for calculating default order_by setting for compression |
@@ -11,11 +14,11 @@
 | `debug_bgw_scheduler_exit_status` | `INTEGER` | `0` | this is for debugging purposes<br />min: `0`, max: `255` |
 | `debug_compression_path_info` | `BOOLEAN` | `false` | this is for debugging/information purposes |
 | `debug_have_int128` | `BOOLEAN` | `#ifdef HAVE_INT128 true` | this is for debugging purposes |
-| `debug_require_batch_sorted_merge` | `BOOLEAN` | `false` | this is for debugging purposes |
+| `debug_require_batch_sorted_merge` | `ENUM` | `DRO_Allow` | this is for debugging purposes |
 | `debug_require_vector_agg` | `ENUM` | `DRO_Allow` | this is for debugging purposes |
 | `debug_require_vector_qual` | `ENUM` | `DRO_Allow` | this is for debugging purposes, to let us check if the vectorized quals are used or not. EXPLAIN differs after PG15 for custom nodes, and using the test templates is a pain |
+| `debug_skip_scan_info` | `BOOLEAN` | `false` | Print debug info about SkipScan distinct columns |
 | `debug_toast_tuple_target` | `INTEGER` | `/* bootValue = */ 128` | this is for debugging purposes<br />min: `/* minValue = */ 1`, max: `/* maxValue = */ 65535` |
-| `default_hypercore_use_access_method` | `BOOLEAN` | `false` | gettext_noop(Sets the global default for using Hypercore TAM when compressing chunks.) |
 | `enable_bool_compression` | `BOOLEAN` | `true` | Enable bool compression |
 | `enable_bulk_decompression` | `BOOLEAN` | `true` | Increases throughput of decompression, but might increase query memory usage |
 | `enable_cagg_reorder_groupby` | `BOOLEAN` | `true` | Enable group by clause reordering for continuous aggregates |
@@ -46,9 +49,9 @@
 | `enable_event_triggers` | `BOOLEAN` | `false` | Enable event triggers for chunks creation |
 | `enable_exclusive_locking_recompression` | `BOOLEAN` | `false` | Enable getting exclusive lock on chunk during segmentwise recompression |
 | `enable_foreign_key_propagation` | `BOOLEAN` | `true` | Adjust foreign key lookup queries to target whole hypertable |
-| `enable_hypercore_scankey_pushdown` | `BOOLEAN` | `true` | Enabling this setting might lead to faster scans when query qualifiers match Hypercore segmentby and orderby columns. |
 | `enable_job_execution_logging` | `BOOLEAN` | `false` | Retain job run status in logging table |
 | `enable_merge_on_cagg_refresh` | `BOOLEAN` | `false` | Enable MERGE statement on cagg refresh |
+| `enable_multikey_skipscan` | `BOOLEAN` | `true` | Enable SkipScan for multiple distinct inputs |
 | `enable_now_constify` | `BOOLEAN` | `true` | Enable constifying now() in query constraints |
 | `enable_null_compression` | `BOOLEAN` | `true` | Enable null compression |
 | `enable_optimizations` | `BOOLEAN` | `true` | Enable TimescaleDB query optimizations |
@@ -62,12 +65,10 @@
 | `enable_skipscan_for_distinct_aggregates` | `BOOLEAN` | `true` | Enable SkipScan for DISTINCT aggregates |
 | `enable_sparse_index_bloom` | `BOOLEAN` | `true` | This sparse index speeds up the equality queries on compressed columns, and can be disabled when not desired. |
 | `enable_tiered_reads` | `BOOLEAN` | `true` | Enable reading of tiered data by including a foreign table representing the data in the object storage into the query plan |
-| `enable_transparent_decompression` | `ENUM` | `1` | Enable transparent decompression when querying hypertable |
+| `enable_transparent_decompression` | `BOOLEAN` | `true` | Enable transparent decompression when querying hypertable |
 | `enable_tss_callbacks` | `BOOLEAN` | `true` | Enable ts_stat_statements callbacks |
+| `enable_uuid_compression` | `BOOLEAN` | `false` | Enable uuid compression |
 | `enable_vectorized_aggregation` | `BOOLEAN` | `true` | Enable vectorized aggregation for compressed data |
-| `hypercore_arrow_cache_max_entries` | `INTEGER` | `25000` | The max number of decompressed arrow segments that can be cached before entries are evicted. This mainly affects the performance of index scans on the Hypercore TAM when segments are accessed in non-sequential order.<br />min: `1`, max: `INT_MAX` |
-| `hypercore_copy_to_behavior` | `ENUM` | `HYPERCORE_COPY_NO_COMPRESSED_DATA` | Set to 'all_data' to return both compressed and uncompressed data via the Hypercore table when using COPY TO. Set to 'no_compressed_data' to skip compressed data. |
-| `hypercore_indexam_whitelist` | `STRING` | `"btree,hash"` | gettext_noop(List of index access method names supported by hypercore.) |
 | `last_tuned` | `STRING` | `NULL` | records last time timescaledb-tune ran |
 | `last_tuned_version` | `STRING` | `NULL` | version of timescaledb-tune used to tune |
 | `license` | `STRING` | `TS_LICENSE_DEFAULT` | Determines which features are enabled |
@@ -80,4 +81,4 @@
 | `skip_scan_run_cost_multiplier` | `REAL` | `1.0` | Default is 1.0 i.e. regularly estimated SkipScan run cost, 0.0 will make SkipScan to have run cost = 0<br />min: `0.0`, max: `1.0` |
 | `telemetry_level` | `ENUM` | `TELEMETRY_DEFAULT` | Level used to determine which telemetry to send |
 
-Version: [2.21.0](https://github.com/timescale/timescaledb/releases/tag/2.21.0)
+Version: [2.22.0](https://github.com/timescale/timescaledb/releases/tag/2.22.0)
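One row in this table, `compression_batch_size_limit`, pairs a `[1, 1000]` range with a 0-means-default sentinel. A minimal sketch of that documented behaviour (illustrative JavaScript, not TimescaleDB source; the function name is mine):

```javascript
// Illustrative sketch of the documented `compression_batch_size_limit`
// semantics: 0 falls back to the max batch size of 1000, and other
// values must lie within the stated [1, 1000] range.
function effectiveBatchSizeLimit(value) {
  const MIN = 1;
  const MAX = 1000;
  if (value === 0) return MAX; // per the description, 0 means "use the default max"
  if (value < MIN || value > MAX) {
    throw new RangeError(`compression_batch_size_limit must be in [${MIN}, ${MAX}]`);
  }
  return value;
}

console.log(effectiveBatchSizeLimit(0)); // 1000
```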

api/hypertable/add_dimension.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ api:
 products: [cloud, mst, self_hosted]
 ---
 
-import DimensionInfo from "versionContent/_partials/_dimension_info.mdx";
+import DimensionInfo from "versionContent/_partials/_dimensions_info.mdx";
 
 # add_dimension()

api/hypertable/create_hypertable.md

Lines changed: 1 addition & 1 deletion

@@ -9,8 +9,8 @@ api:
 products: [cloud, mst, self_hosted]
 ---
 
+import DimensionInfo from "versionContent/_partials/_dimensions_info.mdx";
 import Deprecated2200 from "versionContent/_partials/_deprecated_2_20_0.mdx";
-import DimensionInfo from "versionContent/_partials/_dimension_info.mdx";
 
 # create_hypertable()

api/hypertable/create_table.md

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ products: [cloud, mst, self_hosted]
 ---
 
 import Since2200 from "versionContent/_partials/_since_2_20_0.mdx";
-import DimensionInfo from "versionContent/_partials/_dimension_info.mdx";
+import DimensionInfo from "versionContent/_partials/_dimensions_info.mdx";
 import HypercoreDirectCompress from "versionContent/_partials/_hypercore-direct-compress.mdx";
 
 # CREATE TABLE

lambda/redirects.js

Lines changed: 12 additions & 0 deletions

@@ -849,6 +849,10 @@ module.exports = [
     from: "/latest/getting-started/setup",
     to: "https://docs.tigerdata.com/self-hosted/latest/install/"
   },
+  {
+    from: "/latest/using-timescaledb/backup",
+    to: "https://docs.tigerdata.com/self-hosted/latest/backup-and-restore/"
+  },
   {
     from: "/v0.9/faq",
     to: "https://docs.tigerdata.com/about/latest/"
@@ -885,6 +889,10 @@ module.exports = [
     from: "/latest/api#add_dimension",
     to: "https://docs.tigerdata.com/api/latest/hypertable/add_dimension/"
   },
+  {
+    from: "/latest/api#backup",
+    to: "https://docs.tigerdata.com/self-hosted/latest/backup-and-restore/"
+  },
   {
     from: "/timescaledb/latest/tutorials/grafana/grafana-variables/",
     to: "https://docs.tigerdata.com/integrations/latest/grafana/"
@@ -929,6 +937,10 @@ module.exports = [
     from: "/use-timescale/latest/compression/compression-methods",
     to: "https://docs.timescale.com/use-timescale/latest/hypercore/compression-methods/"
   },
+  {
+    from: "/use-timescale/latest/compression/troubleshooting/",
+    to: "https://docs.tigerdata.com/use-timescale/latest/hypercore/troubleshooting/"
+  },
   {
     from: "/use-timescale/latest/integrations/observability-alerting/grafana/installation/",
     to: "https://docs.tigerdata.com/integrations/latest/grafana/"
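The new entries follow the existing `{ from, to }` shape in lambda/redirects.js. As a sketch of how such a table could be resolved (illustrative only; the actual lambda matching logic is not shown in this diff):

```javascript
// Illustrative sketch, not the real lambda code: resolve a request path
// against a `{ from, to }` redirect table like the one in this commit.
const redirects = [
  {
    from: "/latest/using-timescaledb/backup",
    to: "https://docs.tigerdata.com/self-hosted/latest/backup-and-restore/",
  },
  {
    from: "/use-timescale/latest/compression/troubleshooting/",
    to: "https://docs.tigerdata.com/use-timescale/latest/hypercore/troubleshooting/",
  },
];

// Return the redirect target for a path, or null when no rule matches.
function resolveRedirect(path) {
  const hit = redirects.find((rule) => rule.from === path);
  return hit ? hit.to : null;
}
```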

use-timescale/page-index/page-index.js

Lines changed: 0 additions & 5 deletions

@@ -795,11 +795,6 @@ module.exports = [
         href: "modify-a-schema",
         excerpt: "Change the data schema in compressed chunks",
       },
-      {
-        title: "Troubleshooting",
-        href: "troubleshooting",
-        type: "placeholder",
-      },
     ],
   },
 ],
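The deleted block is a `type: "placeholder"` entry in a nested page tree. A sketch of how such entries could be pruned programmatically (illustrative only; the `children` field and function name are my assumptions, not the docs site's real page-index schema):

```javascript
// Illustrative: recursively drop `type: "placeholder"` entries from a
// nested page-index tree, mirroring the manual removal in this commit.
// Assumes nesting lives under a hypothetical `children` field.
function prunePlaceholders(entries) {
  return entries
    .filter((entry) => entry.type !== "placeholder")
    .map((entry) =>
      entry.children
        ? { ...entry, children: prunePlaceholders(entry.children) }
        : entry
    );
}
```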
