# Ranch User Guide
*Last update: January 26, 2025*
## Notices { #notices }
Consult the [Ranch Data Migration Guide](../../hpc/ranch-migration-2025) for important information and directions.
## Introduction { #intro }
TACC's High Performance Computing (HPC) systems are used primarily for scientific computing, and while their disk systems are large, they cannot provide long-term storage for the final data generated on these systems. The Ranch archive system fills this need for high-capacity, long-term storage by providing a massive, high-performance file system and a tape-based backing store designed, implemented, and supported specifically for archival purposes.
### System Configuration { #intro-configuration }
The 2025 Ranch system has a 4PB flash front-end disk system from Dell, a 16PB Dell ECS storage system for files we want to remain on disk, and two Spectra Logic 15-frame TFinity tape libraries for back-end storage. We currently have 20 LTO-9 tape drives, but will swap in 16 LTO-10 drives in the near future. The front-end flash system runs Versity's ScoutFS (SCale-OUT File System) filesystem, and archive management is handled by Versity's ScoutAM (SCale-OUT Archival Manager) product.
The 2019 to 2025 Ranch system ("Old Ranch"), which is being replaced, has a front-end DDN SFA14K DCR (Declustered RAID) storage system managed by Quantum's StorNext file system. The raw capacity is approximately 30PB, of which 17PB is user-facing. File metadata is stored on a Quantum SSD-based appliance. The back-end tape library, to which files automatically migrate after they have been inactive (neither modified nor accessed) on disk for a period of time, is a Quantum Scalar i6000 with 24 LTO-8 tape drives. Each tape has an uncompressed capacity of approximately 12.5TB.
The previous iteration of the Ranch system was based on Oracle's HSM software, with two SL8500 libraries, each with 20,000 tape slots. This Oracle system will remain as a legacy system while we migrate relevant data from Oracle HSM to the new Quantum environment.
## System Access { #access }
!!! tip
    From careful auditing of past performance (predominantly the total retrieval time for a given data set until completion), we strongly recommend an average file size within Ranch of between 300GB and 4TB.
Users should employ the `tar` or `gtar` utilities to achieve file sizes in this range before placing files in Ranch. Manipulating smaller files will detrimentally affect performance during both storage and retrieval. For example, retrieving a 120TB data set composed of 300GB `tar` files can be an order of magnitude faster, or more, than retrieving the same data stored in its original form as individual files of 1GB or less.
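As a sketch of this workflow (all directory and file names below are illustrative, not real Ranch paths; the sample data is created only so the commands run end to end), bundling before transfer might look like:

```shell
# Illustrative sketch: bundle many small files into one large tar archive
# before storing it in Ranch. "results/" and the archive name are examples.
mkdir -p results
echo "sample output" > results/run01.dat   # stand-in for real data files

# One archive instead of thousands of small files:
tar cf results_2025.tar results/

# Verify the archive lists cleanly before transferring it to Ranch:
tar tf results_2025.tar
```

For real data sets, aim for archives in the recommended 300GB to 4TB range; split very large directories across several `tar` files rather than producing one archive far above 4TB.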
The new Versity-based environment is designed to meet the demands of retrieving multi-TB to PB-sized data sets in hours or days, rather than weeks; this is possible only when the data set is stored as files with an average size in the optimal range described above.
### Monitor your Disk Usage and File Counts { #organizing-quotas }
Users can check their current filesystem usage with the following command:
```cmd-line
ranch$ samcli quota use -H
```
Users can examine their historical Quantum Ranch usage by looking at the contents of the `HSM_usage` file in their OldRanchData directory. Note that this file contains quota, on-disk, and on-tape usage information for the directory it is in and all those beneath it.
```cmd-line
ranch$ tail ~/HSM_usage
```
Each entry also shows the date and time of its update. **Do not delete or edit this file.** Note that the format of the file has changed over time, and may again as necessary, to provide better information and improved readability for users.