I currently have 60 days of data stored in the DataNode (currently occupying 2.5 TB of storage space), and I'd like to clean it up to retain only 20 days of data. #6425
-
What type of bug is this? Other

What subsystems are affected? Datanode

Minimal reproduce step: I set a TTL on the public database with ALTER DATABASE public SET 'ttl'='40d';, but the old data was not cleaned up automatically. What should I do? My end goal is to delete the first 20 days of data because the data volume is too large; I only need to keep 40 days.

What did you expect to see? Only 40 days of data retained, with the first 20 days cleaned up.

What did you see instead? After running ALTER DATABASE public SET 'ttl'='40d';, the first 20 days of data were not cleaned up automatically.

What operating system did you use? Linux (k8s)

What version of GreptimeDB did you use? greptimedb:v0.14.0

Relevant log output and stack trace: (none provided)
Replies: 5 comments 1 reply
-
The database TTL is a hint: expired data files are cleaned up the next time a compaction is triggered. If you want the data removed immediately, you can trigger a compaction manually.
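Putting that advice into commands, a minimal sketch of the sequence (the table name `my_table` is a placeholder; the `ADMIN COMPACT_TABLE` function is the one used later in this thread):

```sql
-- Set the retention hint on the database (as in the original question)
ALTER DATABASE public SET 'ttl'='40d';

-- Manually trigger a compaction so files past the TTL are removed now.
-- 'my_table' is a placeholder; run this for each table you want cleaned up.
ADMIN COMPACT_TABLE('my_table');
```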
-
Thanks for the reply.
-
Is there a GreptimeDB WeChat or DingTalk group? Asking questions in a group would be more efficient.
-
Hello! You can add our assistant on WeChat (ID: greptime), and we'll invite you to the technical discussion group.
-
My version is 0.14.0. Running ADMIN COMPACT_TABLE('redis_db_keys', 'swcs', '3600'); returns an error: (error screenshot not shown)
Are you using the metric engine? Please run

```sql
SHOW CREATE TABLE redis_db_keys;
```

to see the table definition. If you ingest data via Prometheus remote write, it's using the metric engine currently. You have to run

```sql
ADMIN COMPACT_TABLE('greptime_physical_table');
```

to schedule the compaction task for the physical table, because you can't compact the logical tables.
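Putting the thread together, a sketch of the full cleanup sequence for metric-engine tables (assuming the physical table keeps the default name `greptime_physical_table`; check the output of `SHOW CREATE TABLE` to confirm):

```sql
-- 1. Confirm the table's engine and definition
SHOW CREATE TABLE redis_db_keys;

-- 2. Set the retention hint on the database
ALTER DATABASE public SET 'ttl'='40d';

-- 3. Compaction must target the physical table, not the logical ones
ADMIN COMPACT_TABLE('greptime_physical_table');
```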