
Commit a53ebd0

Merge pull request #35 from NYU-ITS/hpc_systems
Created 2 pages related to old 'HPC Systems'
2 parents d7125e9 + 77b23d9 commit a53ebd0

File tree

6 files changed (+119, -22 lines)

docs/hardware_specs.mdx

Lines changed: 0 additions & 10 deletions
This file was deleted.

docs/navigating_the_cluster/hpc_foundations.md

Lines changed: 1 addition & 1 deletion
@@ -120,7 +120,7 @@ Similar to `/home`, users have access to multiple filesystems that are :
| /scratch | /scratch/**Net_ID**/ | General Storage | $SCRATCH
| /archive | /archive/**Net_ID**/ | Cold Storage | $ARCHIVE

- You will find more details about these filesystems at [Greene Storage Types page](../storage_specs.mdx).
+ You will find more details about these filesystems at the [Greene Spec Sheet page](../spec_sheet.mdx).

You can jump to your `/scratch` directory at `/scratch/Net_ID/` with the `cd` command as `cd /scratch/Net_ID`, or you can simply use the `$SCRATCH` environment variable, as sketched below.
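The code block that follows this sentence on the page itself is outside the diff context shown here. A minimal sketch of the intended usage, assuming only the `$SCRATCH` and `$ARCHIVE` variables documented in the table above:

```bash
# Jump to your personal scratch space without typing the full path
cd $SCRATCH
pwd        # prints /scratch/<Net_ID>

# The same pattern works for the archive filesystem
cd $ARCHIVE
```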

docs/spec_sheet.mdx

Lines changed: 76 additions & 0 deletions
@@ -0,0 +1,76 @@
# Greene Spec Sheet

[vast home page]: https://www.vastdata.com/

## Hardware Specs

Please find Greene's hardware specifications in detail in the [Google Sheet here](https://docs.google.com/spreadsheets/d/1czgPi6x8Qa5PNRIX_VLSt7mDpe8zgg5MRqdVmINJzxc?rm=minimal):

:::tip
Hover your mouse over a cell with a black triangle to see more details.
:::

<iframe src="https://docs.google.com/spreadsheets/d/1czgPi6x8Qa5PNRIX_VLSt7mDpe8zgg5MRqdVmINJzxc?rm=minimal" width="100%" height="500"></iframe>

## Mounted Storage Systems

Please find the details on Greene's available storage offerings in the [Google Sheet here](https://docs.google.com/spreadsheets/d/1pYZ0YtN1fhMN7kxcGcm6U-HZxMKLRBXXr2BwemxeS7Y?rm=minimal):

<iframe src="https://docs.google.com/spreadsheets/d/1pYZ0YtN1fhMN7kxcGcm6U-HZxMKLRBXXr2BwemxeS7Y?rm=minimal" width="100%" height="300"></iframe>

## General Parallel File System (GPFS)

The NYU HPC Clusters are served by a `General Parallel File System (GPFS)` storage cluster. GPFS is high-performance clustered file system software developed by IBM that provides concurrent, high-speed file access to applications executing on multiple nodes of a cluster.

The cluster storage runs on Lenovo Distributed Storage Solution (DSS-G) hardware:

- **_2x DSS-G 202_**
  - 116 Solid State Drives (SSDs)
  - 464 TB raw storage
- **_2x DSS-G 240_**
  - 668 Hard Disk Drives (HDDs)
  - 9.1 PB raw storage

### GPFS Performance

| Metric | Value |
| --- | --- |
| Read Bandwidth | **_78 GB_** per second reads |
| Write Bandwidth | **_42 GB_** per second writes |
| I/O Performance | **_~650k_** Input/Output operations per second (IOPS) |
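As a small, unofficial illustration of where this storage shows up for users: the GPFS-backed mounts listed elsewhere in these docs (`/home`, `/scratch`, `/archive`) are ordinary POSIX paths on the cluster nodes, so they can be inspected with standard tools:

```bash
# Report size and current usage of the GPFS-backed filesystems
df -h /home /scratch /archive
```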

## Flash Tier Storage (VAST)

An all-flash file system using [VAST Flash storage][vast home page] is now available on Greene. Flash storage is optimal for computational workloads with high I/O rates. For example, if you have jobs that work with a huge number of _tiny files_, VAST may be a good candidate (see the sketch after the list below).

Please contact the team at hpc@nyu.edu for more information.

- NVMe Interface
- _778 TB_ Total Storage
- Available to **all** users as **read only**
- **Write** access available to **approved** users only
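A rough sketch of how the flash tier might be exercised, assuming `/vast` is the mount point (as listed on the Greene System Status page) and that per-user directories follow the same `Net_ID` convention as `/scratch`; both the layout and the write-access workflow are assumptions to confirm with hpc@nyu.edu:

```bash
# Check that the VAST mount is visible from your node
df -h /vast

# Hypothetical per-user path, mirroring /scratch/<Net_ID>
ls /vast/<Net_ID>

# Stage a directory of tiny files onto the flash tier
# (requires approved write access)
rsync -a /scratch/<Net_ID>/tiny_files/ /vast/<Net_ID>/tiny_files/
```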
## Research Project Space (RPS)

- Research Project Space (RPS) volumes provide working spaces for sharing data and work amongst project or lab members for long-term research needs.
- RPS directories are available on the Greene HPC cluster.
- RPS is backed up. There is no file purging policy on RPS.
- There is a _cost per TB per year_, and _inodes per year_, for RPS volumes.

Please find more information at \[Research Project Space page].

## Data Transfer Nodes (gDTN)

| Component | Specification |
| --- | --- |
| Node Type | Lenovo SR630 |
| Number of Nodes | 2 |
| CPUs | 2x Intel Xeon Gold 6244 8C 150W 3.6 GHz Processor |
| Memory | 192 GB (total) - 12x 16GB DDR4, 2933 MHz |
| Local Disk | 1x 1.92 TB SSD |
| Infiniband Interconnect | 1x Mellanox ConnectX-6 HDR100/100GbE VPI 1-Port x16 PCIe 3.0 HCA |
| Ethernet Connectivity to the NYU High-Speed Research Network (HSRN) | 200 Gbit - 1x Mellanox ConnectX-5 EDR IB/100GbE VPI Dual-Port x16 PCIe 3.0 HCA |
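These nodes exist to move data in and out of Greene over the high-speed research network. As an illustrative sketch only: the hostname below is a placeholder, since the actual gDTN endpoint is not given on this page; confirm it with hpc@nyu.edu or the main HPC documentation before use.

```bash
# Copy a local dataset to your Greene scratch space through a data transfer node.
# <Net_ID> is your NYU NetID; <gdtn-hostname> is a placeholder for the real gDTN address.
rsync -avP ./my_dataset/ <Net_ID>@<gdtn-hostname>:/scratch/<Net_ID>/my_dataset/
```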

docs/storage_specs.mdx

Lines changed: 0 additions & 9 deletions
This file was deleted.

docs/system_status.mdx

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
---
hide_table_of_contents: true
---

# Greene System Status

- [Resource Allocation and Queue (AMD nodes not included)](#resource-allocation-and-queue-amd-nodes-not-included)
- [Resource Allocation and Queue by partitions (AMD nodes not included)](#resource-allocation-and-queue-by-partitions-amd-nodes-not-included)
- [AMD Nodes System Status](#amd-nodes-system-status)
- [Storage System Status](#storage-system-status)

:::note
To see the panels below, you need to be on the NYU network (use the VPN if you are not on campus).
:::

## Resource Allocation and Queue (AMD nodes not included)

<iframe src="https://graphs-out.hpc.nyu.edu/d/CEVdMFR7z/nyu-hpc-public?orgId=1&theme=light&refresh=5m&&from=now-14h&to=now-5m&kiosk=tv" width="100%" height="900"></iframe>

## Resource Allocation and Queue by partitions (AMD nodes not included)

<iframe src="https://graphs-out.hpc.nyu.edu/d/vA6e2Kgnk/greene-cluster-load-by-partitions-nyu-hpc-public?orgId=1&theme=light&refresh=30m&from=now-14h&to=now-5m&kiosk=tv" width="100%" height="1000"></iframe>

## AMD Nodes System Status

<iframe src="https://graphs-out.hpc.nyu.edu/d/P3yyH0vnk/hudson-cluster-load-nyu-hpc-public?orgId=1&refresh=5m&theme=light&from=now-14h&to=now-5m&kiosk=tv" width="100%" height="800"></iframe>

## Storage System Status

Below you may find data for the following file system mounts:

- GPFS file system: `/home`, `/scratch`, `/archive`
- VAST file system: `/vast`

<iframe src="https://graphs-out.hpc.nyu.edu/d/0_16dHc7z/storage-nyu-hpc-public?orgId=1&theme=light&refresh=5m&from=now-14h&to=now-5m&kiosk=tv" width="100%" height="600"></iframe>

docusaurus.config.ts

Lines changed: 1 addition & 2 deletions
@@ -46,8 +46,6 @@ const config: Config = {
    {
      docs: {
        sidebarPath: './sidebars.ts',
-       // Please change this to your repo.
-       // Remove this to remove the "edit this page" links.
        editUrl:
          'https://github.com/facebook/docusaurus/tree/main/packages/create-docusaurus/templates/shared/',
      },
@@ -75,6 +73,7 @@ const config: Config = {

  themeConfig: {
    // Replace with your project's social card
+   docs: { sidebar : { hideable: true } },
    image: 'img/NYU.svg',
    navbar: {
      title: 'Research Technology Services',
