# Troubleshoot data retention (TTL) issues with expired data not being deleted from storage on Azure HDInsight

In an HBase cluster, you may decide to remove data after it ages, either to free storage and save on costs because the older data is no longer needed, or to comply with regulations. To do so, you'll usually set TTL on a table at the ColumnFamily level to expire and automatically delete older data. While TTL can also be set at the cell level, setting it at the ColumnFamily level is usually more convenient, both because of the ease of administration and because a cell TTL (expressed in milliseconds) can't extend the effective lifetime of a cell beyond the ColumnFamily-level TTL setting (expressed in seconds). Only shorter retention times at the cell level can therefore benefit from a cell-level TTL.
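
For reference, a ColumnFamily-level TTL can be set from HBase shell with the `alter` command, as in the minimal sketch below. The table name 'table_name', the column family 'cf1', and the 50-second value are placeholders, not values taken from this article:

```
# Placeholder names: set TTL (in seconds) on column family 'cf1' of table 'table_name'
alter 'table_name', {NAME => 'cf1', TTL => 50}
```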

Despite setting TTL, you may sometimes notice that you don't obtain the desired effect, that is, some data hasn't expired or the storage size hasn't decreased.

## Prerequisites

To prepare to follow the steps and commands below, open two SSH connections to the HBase cluster (a sample connection command is shown below):
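
For example, each SSH connection can be opened with a command like the following; the default `sshuser` account and the `CLUSTERNAME` placeholder are assumptions, so adjust them to match your environment:

```
# Placeholder account and cluster name
ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net
```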

1) In one of the SSH sessions, keep the default bash shell.
2) In the second SSH session, launch HBase shell by running the command below:

```
hbase shell
```

### Check if the desired TTL is configured and if expired data is removed from query results

Follow the steps below to understand where the issue is. Start by checking whether the behavior occurs for a specific table or for all tables. If you're unsure whether the issue impacts all tables or only a specific one, just use a specific table name as an example to start with.

1) First, check that TTL has been configured at the ColumnFamily level for the target tables. Run the command below in the SSH session where you launched HBase shell and observe the example output that follows: one column family has TTL set to 50 seconds, while the other ColumnFamily has no TTL configured, so it appears as "FOREVER" (data in that column family isn't configured to expire):

```
describe 'table_name'
```
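
For illustration, the relevant part of the output might look like the sketch below; the table and ColumnFamily names are placeholders and most attributes are omitted. Here 'cf1' has TTL set to 50 seconds and 'cf2' has no TTL configured:

```
{NAME => 'cf1', VERSIONS => '1', MIN_VERSIONS => '0', TTL => '50 SECONDS (50 SECONDS)', ...}
{NAME => 'cf2', VERSIONS => '1', MIN_VERSIONS => '0', TTL => 'FOREVER', ...}
```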

2) If not configured, the default TTL is 'FOREVER'. There are two possibilities why data isn't expired as expected and removed from the query result:
a) If TTL has any value other than 'FOREVER', observe the value for the column family and note down the value in seconds (pay special attention to the unit of measure, as cell TTL is in milliseconds but column family TTL is in seconds) to confirm it is the expected one. If the observed value isn't correct, fix that first.
b) If the TTL value is 'FOREVER' for all column families, configure TTL as a first step and afterwards monitor whether data expires as expected.
3) If you establish that TTL is configured and has the correct value for the ColumnFamily, the next step is to confirm that the expired data no longer shows up in table scans. When data expires, it should be removed and not appear in the scan results. Run the command below in HBase shell to check:
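
For example, assuming the table is named 'table_name', a full table scan can be run as shown below; rows whose cells have all expired should no longer appear in the output:

```
# Placeholder table name; a scan returns only cells that haven't expired
scan 'table_name'
```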

6) Based on the TTL configured for each ColumnFamily and how much data is written to the table for the target ColumnFamily, part of the data may still exist in the MemStore and may not yet be written to storage as a StoreFile. To make sure that the data is written to storage as a StoreFile before the maximum configured MemStore size is reached, you can run the following command in HBase shell to write data from the MemStore to a StoreFile immediately:

```
flush 'table_name'
```

7) Observe the result by running the following command again in the bash shell:

```
hdfs dfs -ls -R /hbase/data/default/table_name/ | grep "column_family_name"
```

8) Compared to the previous output, an additional StoreFile is created for each region where data was modified; the new StoreFile contains the current content of the MemStore for that region:
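
For illustration only (the region hash, StoreFile names, sizes, and dates below are placeholders, not real output), the listing for a single region might go from one StoreFile to two after the flush:

```
-rw-r--r--   1 hbase supergroup   8192 2022-05-06 10:02 /hbase/data/default/table_name/<region_hash>/column_family_name/<storefile_1>
-rw-r--r--   1 hbase supergroup   4096 2022-05-06 10:20 /hbase/data/default/table_name/<region_hash>/column_family_name/<storefile_2>
```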

### Check the number and size of StoreFiles per table per region after major compaction

9) At this point, the data from the MemStore has been written to a StoreFile in storage, but expired data may still exist in one or more of the current StoreFiles. Although minor compactions can help delete some of the expired entries, it isn't guaranteed that they will remove all of them, because a minor compaction usually doesn't select all the StoreFiles for compaction, while a major compaction selects all the StoreFiles for compaction in that region.

Also, there's another situation in which minor compaction may not remove cells with expired TTL. There's a property named MIN_VERSIONS, and it defaults to 0 (see the property MIN_VERSIONS => '0' in the output of describe 'table_name' above). If this property is set to 0, minor compaction will remove cells with expired TTL. If the value is greater than 0, minor compaction may not remove cells with expired TTL even if it touches the corresponding file as part of the compaction. This property configures the minimum number of versions of a cell to keep, even if those versions have an expired TTL.
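
If MIN_VERSIONS needs to be changed, it can be set per ColumnFamily from HBase shell, as in the minimal sketch below; the table and column family names are placeholders:

```
# Placeholder names: keep no extra versions beyond TTL for column family 'cf1'
alter 'table_name', {NAME => 'cf1', MIN_VERSIONS => 0}
```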

10) To make sure expired data is also deleted from storage, we need to run a major compaction operation. When completed, the major compaction operation leaves behind a single StoreFile per region. In HBase shell, run the command to execute a major compaction on the table:

```
major_compact 'table_name'
```

11) Depending on the table size, the major compaction operation can take some time. Use the command below in HBase shell to monitor progress. If the compaction is still running when you execute the command, you'll see the output "MAJOR"; if the compaction is completed, you'll see the output "NONE":

```
compaction_state 'table_name'
```

12) When the compaction status appears as "NONE" in HBase shell, quickly switch to bash and run the following command:

```
hdfs dfs -ls -R /hbase/data/default/table_name/ | grep "column_family_name"
```

You'll notice that an extra StoreFile has been created per region per ColumnFamily in addition to the previous ones, and after several moments only the last created StoreFile is kept per region per column family:

13) For the example region above, once those extra moments elapse, you'll notice that a single StoreFile remains and that the storage space occupied by this file is reduced, because the major compaction occurred. At this point, any expired data that wasn't deleted before (by a previous major compaction) is deleted after running the current major compaction operation:

> [!NOTE]
