# Greene Spec Sheet

[vast home page]: https://www.vastdata.com/

## Hardware Specs

Please find Greene's detailed hardware specification in the [Google Sheet here](https://docs.google.com/spreadsheets/d/1czgPi6x8Qa5PNRIX_VLSt7mDpe8zgg5MRqdVmINJzxc?rm=minimal):

:::tip
Hover your mouse over a cell with a black triangle to see more details.
:::

<iframe src="https://docs.google.com/spreadsheets/d/1czgPi6x8Qa5PNRIX_VLSt7mDpe8zgg5MRqdVmINJzxc?rm=minimal" width="100%" height="500"></iframe>

## Mounted Storage Systems

Please find the details on Greene's available storage offerings in the [Google Sheet here](https://docs.google.com/spreadsheets/d/1pYZ0YtN1fhMN7kxcGcm6U-HZxMKLRBXXr2BwemxeS7Y?rm=minimal):

<iframe src="https://docs.google.com/spreadsheets/d/1pYZ0YtN1fhMN7kxcGcm6U-HZxMKLRBXXr2BwemxeS7Y?rm=minimal" width="100%" height="300"></iframe>

## General Parallel File System (GPFS)

The NYU HPC clusters are served by a `General Parallel File System (GPFS)` storage cluster. GPFS is high-performance clustered file system software developed by IBM that provides concurrent high-speed file access to applications executing on multiple nodes of a cluster.

The cluster storage runs on Lenovo Distributed Storage Solution DSS-G hardware:

- **_2x DSS-G 202_**
  - 116 Solid State Drives (SSDs)
  - 464 TB raw storage
- **_2x DSS-G 240_**
  - 668 Hard Disk Drives (HDDs)
  - 9.1 PB raw storage

### GPFS Performance

| Metric | Value |
| --- | --- |
| Read Bandwidth | **_78 GB/s_** |
| Write Bandwidth | **_42 GB/s_** |
| I/O Performance | **_~650k_** input/output operations per second (IOPS) |
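
GPFS file systems typically enforce limits on both capacity and inode counts, so it can be handy to check both from within a job. Below is a minimal Python sketch using only the standard library's `os.statvfs`; the mount path `/scratch` is an illustrative assumption, and the figures it reports are file-system-wide, not your personal quota.

```python
import os

# Hypothetical GPFS-backed path -- replace with the scratch/home/project directory you actually use.
gpfs_path = "/scratch"

st = os.statvfs(gpfs_path)

# Capacity: block counts are reported in units of f_frsize bytes.
total_tb = st.f_blocks * st.f_frsize / 1e12
free_tb = st.f_bavail * st.f_frsize / 1e12

# Inodes: f_files is the total number of file slots, f_favail the number still available.
total_inodes = st.f_files
free_inodes = st.f_favail

print(f"{gpfs_path}: {free_tb:.1f} TB free of {total_tb:.1f} TB")
print(f"{gpfs_path}: {free_inodes:,} inodes free of {total_inodes:,}")
```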

## Flash Tier Storage (VAST)

An all-flash file system using [VAST Flash storage][vast home page] is now available on Greene. Flash storage is optimal for computational workloads with high I/O rates. For example, if your jobs read and write a huge number of _tiny files_, VAST may be a good candidate.

Please contact the HPC team at hpc@nyu.edu for more information.

- NVMe Interface
- _778 TB_ Total Storage
- Available to **all** users as **read only**
- **Write** access available to **approved** users only
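
To get a rough feel for whether a workload falls into the many-tiny-files category that the flash tier targets, you can time a small-file write/read loop such as the sketch below. The target directory and file count are illustrative assumptions only; since write access to VAST requires approval, point it at a directory you can already write to.

```python
import time
from pathlib import Path

# Hypothetical target directory -- point this at a path you have write access to.
workdir = Path("/tmp/tiny_file_test")
workdir.mkdir(parents=True, exist_ok=True)

n_files = 10_000      # many small files: stresses metadata and IOPS rather than bandwidth
payload = "x" * 128   # 128-byte payload per file

start = time.perf_counter()
for i in range(n_files):
    (workdir / f"part_{i:05d}.txt").write_text(payload)
write_s = time.perf_counter() - start

start = time.perf_counter()
total_bytes = sum(len(p.read_text()) for p in workdir.glob("part_*.txt"))
read_s = time.perf_counter() - start

print(f"wrote {n_files} files in {write_s:.1f} s ({n_files / write_s:.0f} files/s)")
print(f"read back {total_bytes} bytes in {read_s:.1f} s ({n_files / read_s:.0f} files/s)")
```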

## Research Project Space (RPS)

- Research Project Space (RPS) volumes provide working spaces for sharing data and work amongst project or lab members for long-term research needs.
- RPS directories are available on the Greene HPC cluster.
- RPS is backed up. There is no file purging policy on RPS.
- There is a cost per _TB per year_ and per _inodes per year_ for RPS volumes.

Please find more information on the [Research Project Space page].

## Data Transfer Nodes (gDTN)

| Component | Specification |
| --- | --- |
| Node Type | Lenovo SR630 |
| Number of Nodes | 2 |
| CPUs | 2x Intel Xeon Gold 6244 8C 150W 3.6 GHz Processor |
| Memory | 192 GB (total) - 12x 16 GB DDR4, 2933 MHz |
| Local Disk | 1x 1.92 TB SSD |
| Infiniband Interconnect | 1x Mellanox ConnectX-6 HDR100/100GbE VPI 1-Port x16 PCIe 3.0 HCA |
| Ethernet Connectivity to the NYU High-Speed Research Network (HSRN) | 200 Gbit - 1x Mellanox ConnectX-5 EDR IB/100GbE VPI Dual-Port x16 PCIe 3.0 HCA |
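
Large data transfers in and out of Greene are intended to go through these nodes. As a minimal sketch (not an official recipe), a transfer can be scripted by driving `rsync` over SSH from Python; the hostname and paths below are placeholders, so please confirm the actual gDTN endpoint and your destination directory with hpc@nyu.edu.

```python
import subprocess

# Placeholder endpoint and paths -- replace with the real gDTN hostname and your own directories.
dtn_host = "gdtn.example.nyu.edu"
local_dir = "/path/to/local/dataset/"
remote_dir = "/scratch/<netid>/dataset/"

# rsync over SSH: archive mode (-a), verbose (-v), keep partial files and show progress (-P).
subprocess.run(
    ["rsync", "-avP", local_dir, f"{dtn_host}:{remote_dir}"],
    check=True,
)
```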