Commit 0bab078

Merge pull request #38 from NYU-ITS/minor_cleanup
minor cleanup
2 parents 8d31467 + 07351bc commit 0bab078

8 files changed

+43
-140
lines changed


docs/hpc/01_getting_started/01_intro.md

Lines changed: 0 additions & 4 deletions
@@ -1,7 +1,3 @@
----
-sidebar_position: 1
----
-
 # Start here!
 
 Welcome to the Frances HPC documentation! If you do not have an HPC account, please proceed to the next section, which explains how you may be able to get one.

docs/hpc/01_getting_started/02_getting_and_renewing_an_account.md

Lines changed: 0 additions & 5 deletions
@@ -1,10 +1,5 @@
----
-sidebar_position: 1
----
-
 # Getting and Renewing an Account
 
-
 [nyu vpn link]: https://www.nyu.edu/life/information-technology/infrastructure/network-services/vpn.html
 
 [nyu ims link]: https://identity.it.nyu.edu/

docs/hpc/01_getting_started/03_walkthrough_approve_hpc_account_request.md

Lines changed: 0 additions & 5 deletions
@@ -1,8 +1,3 @@
----
-sidebar_position: 1
----
-
-
 # How to approve an HPC Account Request
 
 When someone nominates you as their HPC sponsor, you should be notified by email. You can also [log into IIQ at any time](https://iiq.nyu.edu/identityiq), and if you have a request awaiting your approval, it will appear in your "Action Items" box, as in the following screenshot:

docs/hpc/01_getting_started/04_walkthrough_renew_hpc_account_iiq.md

Lines changed: 0 additions & 4 deletions
@@ -1,7 +1,3 @@
----
-sidebar_position: 1
----
-
 # Renewing your HPC Account with IIQ
 
 Log in to the URL given below, using your NetID and password, to create or manage HPC Account Requests:

docs/hpc/01_getting_started/05_hpc_accounts_external_collaborators.md

Lines changed: 0 additions & 4 deletions
@@ -1,7 +1,3 @@
----
-sidebar_position: 1
----
-
 # HPC Accounts for Sponsored External Collaborators
 
 With proper sponsorship, external (non-NYU) collaborators can access the NYU HPC environment.

docs/hpc/03_navigating_the_cluster/linux_tutorial.md renamed to docs/hpc/03_navigating_the_cluster/01_linux_tutorial.mdx

Lines changed: 17 additions & 68 deletions
@@ -1,78 +1,21 @@
----
-toc_max_heading_level: 2
-sidebar_position: 1
----
-
 # Linux Tutorial
 
-- [Getting a new Account on the NYU HPC cluster](#getting-a-new-account-on-the-nyu-hpc-cluster)
-
-- [Getting Started on HPC Greene Cluster](#getting-started-on-hpc-greene-cluster)
-
-- [Available file systems on Greene](#available-file-systems-on-greene)
-
-- [Basic Linux Commands](#basic-linux-commands)
-
-- [Copying, moving or deleting files locally](#copying-moving-or-deleting-files-locally)
-
-- [Text Editor (NANO)](#text-editor-nano)
-
-- [Writing Scripts](#writing-scripts)
-
-- [Setting execute permission with chmod](#setting-execute-permission-with-chmod)
-
-## Getting a new Account on the NYU HPC cluster
-
-It is expected of everyone to have an NYU HPC Cluster Account. If not follow the steps from \[Getting and Renewing an Account page] to apply for a new account.
-
-## Getting Started on HPC Greene Cluster
-
-In a Linux cluster, there are hundreds of computing nodes interconnected by high-speed networks. Linux operating system runs on each of the nodes individually. The resources are shared among many users for their technical or scientific computing purposes.
-
-### The process to log into the Greene Cluster:
-
-**NYU Campus:** From within the NYU network, that is from an on-campus location, or after you have establisehd a VPN connection with the NYU network, you can login to the HPC clusters directly.
-
-**Off-campus:** The host name of Greene is _`greene.hpc.nyu.edu`_. Logging in to Greene is a two-stage process. The HPC clusters (Greene) are not directly visible to the internet (outside the NYU Network). If you are outside NYU's Network (off-campus) you must first login to a bastion host named _`gw.hpc.nyu.edu`_ .
-
-From within the NYU network, that is, from an on-campus location, or after you are in the NYU's network via VPN, you can login to the HPC clusters directly. You do not need to login to the bastion host.
+:::tip[Prerequisites]
+Login to the Greene cluster as described in the section on [connecting to the HPC cluster](../02_connecting_to_hpc/01_connecting_to_hpc.md).
 
-To login into the HPC cluster ( Greene ), simply use:
-
-```sh
-ssh <NYUNetID>@greene.hpc.nyu.edu
-```
-
-- To access from Windows Operating System with PuTTY, please follow the steps at \[Accessing HPC via windows page]
-- Or To connect to VPN from Linux/Mac OS, please follow the steps at \[Accessing HPC via linux/MacOS page]
-
-From an off-campus location or without a VPN (outside NYU-NET), logging in to the HPC clusters is a 2 step process:
-
-1. Frist login to the bastion host, _`gw.hpc.nyu.edu`_. From a Mac or Linux workstation, this is a simple terminal command. Your password is the same password you use for NYU Home:
-
-```sh
-ssh <NYUNetID>@gw.hpc.nyu.edu
-```
-
-_Windows users will need to use Putty, see \[Accessing HPC via windows page]_
-
-2. Next login to the cluster. For Greene, this is done with:
-
-```sh
-ssh <NYUNetID>@greene.hpc.nyu.edu
-```
+:::
 
 ## Available file systems on Greene
 
 File systems available for use:
 
 The NYU HPC clusters have multiple file systems for users' file storage needs. Each file system is configured differently to serve a different purpose.
 
-| **Space** | **Environment Variabe** | **Space Purpose** | **Flushed** | **Allocation** (per user) |
+| **Space** | **Environment Variable** | **Purpose** | **Flush Policy** | **Allocation** (per user) |
 | :-- | :-- | :-- | :-- | :-- |
-| `/home` | $HOME | Program Development space; For storing small files, source code, scripts etc that are backed up | NO | 20GB |
-| `/scratch` | $SCRATCH | Computational Workspace; For storing large files/data, infrequent reads and writes | YES Files not accessed for 60 days are deleted | 5TB |
-| `/archive` | $ARCHIVE | Long Term Storage ( Cold storage ) | NO | 2TB |
+| `/home` | `$HOME` | Program development space; for small files, source code, and scripts that are backed up | No | 20GB |
+| `/scratch` | `$SCRATCH` | Computational workspace; for large files/data with infrequent reads and writes | Yes; files not accessed for 60 days are deleted | 5TB |
+| `/archive` | `$ARCHIVE` | Long-term (cold) storage | No | 2TB |
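These spaces are exposed as environment variables, as the table shows. A quick way to see them from a shell is sketched below; note that `$SCRATCH` and `$ARCHIVE` are only defined on the cluster itself, so on any other machine the fallback messages will print instead.

```shell
# Print the three storage spaces. $SCRATCH and $ARCHIVE are set only on the
# cluster's nodes; elsewhere only $HOME will typically exist.
echo "home:    ${HOME}"
echo "scratch: ${SCRATCH:-not set (only defined on the cluster)}"
echo "archive: ${ARCHIVE:-not set (only defined on the cluster)}"

# A quick look at how much space your home directory currently uses:
du -sh "${HOME}" 2>/dev/null
```

The `${VAR:-fallback}` form is plain POSIX shell parameter expansion, so the same snippet works in bash on the login nodes and on your own machine.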

7720
## Basic Linux Commands
7821

@@ -143,9 +86,9 @@ If you run "` cd `" with no arguments, you will be returned to your home directory
 | rmdir subdir/ | Remove subdir only if it's empty |
 | rm -r subdir/ | Recursively delete the directory subdir and everything else in it. ` Use it with care ! ` |
 
-## Text Editor (NANO)
+## Text Editor
 
-"nano" is a friendly text editor that can be used to edit the content of an existing file or create a new file. Here are some options used in nano editor.
+`nano` is a friendly text editor that can be used to edit the content of an existing file or create a new file. Here are some options used in the nano editor.
 
 | Options | Explanation |
 | :------ | :---------- |
@@ -201,7 +144,10 @@ In Unix, a file has three basic permissions, each of which can be set for three
 
 - Execute permission (" x ") - numeric value 1.
 
-> **_NOTE:_** When applied to a directory, execute permission refers to whether the directory can be entered with 'cd'
+:::info
+When applied to a directory, execute permission refers to whether the directory can be entered with 'cd'
+
+:::
 
 The three levels of user are:
 
@@ -213,7 +159,10 @@ The three levels of user are:
 
 You grant permissions with "` chmod who+what file `" and revoke them with "` chmod who-what file `".
 
-> **_NOTICE:_** The first has "+" and the second "-"
+:::info
+Note that the first uses "+" and the second "-".
+
+:::
 
 Here "who" is some combination of "u", "g" and "o", and "what" is some combination of "r", "w" and "x". So to set execute permission, as in the example above, we use:
 
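Both the symbolic (`who+what`) and numeric (4/2/1) forms described above can be tried in any shell. A short sketch, where `hello.sh` is a throwaway file created purely for illustration:

```shell
# Create a tiny script to experiment on (illustrative file name).
printf '#!/bin/bash\necho hello\n' > hello.sh

ls -l hello.sh        # typically -rw-r--r--: no execute permission yet
chmod u+x hello.sh    # "u" (user/owner) plus "x" (execute)
./hello.sh            # prints: hello
chmod u-x hello.sh    # "-" revokes the permission again

# The same grant in numeric form: 7 = 4(r) + 2(w) + 1(x) for the owner,
# and 4 = read-only for group and others.
chmod 744 hello.sh
```

After the final `chmod 744`, `ls -l hello.sh` shows `-rwxr--r--`, matching the numeric values listed above.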

docs/hpc/03_navigating_the_cluster/hpc_foundations.md renamed to docs/hpc/03_navigating_the_cluster/02_hpc_foundations.mdx

Lines changed: 26 additions & 44 deletions
@@ -1,21 +1,8 @@
----
-toc_max_heading_level: 2
-sidebar_position: 2
----
-
 # HPC Foundations
 
-The goal of this exercise is to help you understand the fundamentals **_A to Z_** on effecively navigating the cluster for your research or academic projects.
-
-Before we begin this exercise please make sure you have access to the NYU HPC cluster, if not please review the [Accessing HPC page](../02_connecting_to_hpc/01_connecting_to_hpc.md).
-
-Login to the Greene cluster with ssh at :
-> Accessible under NYU Net ( either via VPN or within campus network )
-```sh
-greene.hpc.nyu.edu
-```
+The goal of this exercise is to help you understand the fundamentals of effectively navigating the cluster for your research or academic projects. Before we begin, please make sure you have access to the NYU HPC cluster; if not, please review the section on [connecting to the HPC cluster](../02_connecting_to_hpc/01_connecting_to_hpc.md).
 
-Once logged in, you should notice the **_node_** which you are currently on from the _bash prompt_ as shown below :
+Login to the Greene cluster as described in the section on [connecting to the HPC cluster](../02_connecting_to_hpc/01_connecting_to_hpc.md). Once logged in, you can see the **_node_** you are currently on from the _bash prompt_, as shown below:
 
 ```sh
 [pp2959@log-3 ~]$
@@ -87,10 +74,10 @@ As you can see, it is the same file, same directory, the same filesystem for all
 
 Regardless of which login node you end up on, **_all_** users have access to **_a common_** filesystem, `/home`. It is important to understand that users read and write files to the same filesystem while logged in from any of the 4 login nodes.
 
-> **_REMEMBER_**
->
-> - `/home` is your `personal workspace` having a limited space
-> - It is intended as a space for `maintaining code bases only`
+:::warning
+- `/home` is your `personal workspace` with limited space
+- It is intended as a space for `maintaining code bases only`
+:::
 
 Now, `exit` from your current `shell instance` by running the command `exit`:
 
@@ -100,15 +87,17 @@ logout
 Connection to log-1 closed.
 [pp2959@log-3 ~]$
 ```
+::::info
+1. The first line tells you that you have logged out of your current **_bash shell_**
+2. The second line tells you that the **_ssh connection_** to log-1 has been **_closed_**
+3. Now you are back on your **_previous login_** node, log-3 in this example, that is, your previous **_bash shell_**
 
-> **IMPORTANT - _Notice the output:_**
->
-> 1. The first line tells you that you have logged out of your current **_bash shell_**
-> 2. The second line tells you that the **_ssh connection_** to log-1 has been **_closed_**
-> 3. Now you are back to your **_previous login_** node, in this example log-3, that is your previous **_bash shell_**
->
-> **_Why is this imporatant to understand_** ?
-> Because this will build your foundations in understanding the different kinds of nodes that exists and how you should use them for your projects
+:::tip[Why is this important to understand?]
+Because it builds your foundation for understanding the different kinds of nodes that exist and how you should use them for your projects.
+
+:::
+
+::::

113102
### Other File Systems
114103

@@ -157,7 +146,7 @@ The `/archive` Space:
 
 - Never purged
 
-## Running programs on a `login` node
+## Running programs on a login node
 
 Login nodes, as the name implies, are used only for interacting with the cluster. They are not equipped with compute-heavy hardware or much memory, and hence you may run simple programs (which can lag a bit) but not compute-heavy workloads.
 
@@ -234,8 +223,7 @@ As you can see, we encounter an error like the one below:
 [pp2959@log-3 ~]$
 ```
 
-`apt-get` does not exist, this is because package managers are not allowed on the cluster as they require `root` privileges for installation.
-You will need to load pre-installed software pacakges with a command called `module`.
+`apt-get` is not available because it requires `root` privileges, which users do not have. Instead, you load pre-installed software packages with a command called `module`.
 
 First, let's search for any versions of `lua` available by running the command `module spider <Software_Package>`:
 
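On the cluster, a typical Lmod session for this might look like the sketch below. These commands only work on Greene itself, and the version string is a hypothetical example rather than a promise of what `module spider` will actually report:

```sh
# Search all available versions of a package (run on Greene):
module spider lua

# Load a specific version once you know it exists
# (the version shown here is illustrative):
module load lua/5.4.4

# Confirm what is currently loaded:
module list
```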

@@ -371,7 +359,7 @@ module --help
 > - `/home` filesystem and its purpose
 > - Load necessary `modules` to run our programs
 
-## Running Programs on a `Compute` Node
+## Running Programs on a compute node
 
 The Greene cluster has hundreds of compute nodes equipped with all kinds of high-performance hardware, such as x86 Intel and AMD server CPUs, and NVIDIA and AMD server GPUs (such as the H100s).
 
@@ -413,16 +401,14 @@ cm001.hpc.nyu.edu
 hello, world
 [pp2959@log-2 ~]$
 ```
+:::info
+**_Read the output carefully_**
+- This job is given an **_id_**, `55744835`, called the `job id`.
+- The job `55744835` is `queued and waiting` to be scheduled on a compute node. Since these nodes may be busy depending on demand, it can take some time for your job to be scheduled.
+- Once the `job` is `scheduled`, your program `lua hello.lua` runs on the chosen `compute node(s)` and its output is printed back to your console.
+- You may notice in your output the name of the compute node the program ran on. The node `cm001.hpc.nyu.edu` in this example is a CPU-only node; you may see a different node. You can find more details about the \[specific nodes here].
 
-> **_Read the Output carefully_**
->
-> 1. This job is given an **_id_** that is `55744835`, this is called a `job id`.
->
-> 2. The job `job 55744835` is `queued and waiting` to be scheduled on a compute node, since these nodes are expected to be busy based on demand, it may take some time for your job to be scheduled
->
-> 3. Once the `job` gets `scheduled`, your program `lua hello.lua` gets run on a chosen `compute node(s)` and the program's output is printed back to your console
->
-> 4. Based on your output, you may notice the name of the compute node that this program runs on, the node `cm001.hpc.nyu.ed` in this example is a CPU only node, you may notice a different node. You can find more details about the \[specific nodes here].
+:::

 **_Now, how do we determine or specify the amount of resources needed to run our `hello.lua` script?_**
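One common way to specify resources is a batch script with `#SBATCH` directives, submitted via `sbatch`. A minimal sketch — the file name, resource values, and module name are illustrative assumptions, not something this page prescribes:

```sh
#!/bin/bash
#SBATCH --job-name=hello       # name shown in the queue
#SBATCH --nodes=1              # number of nodes to allocate
#SBATCH --ntasks=1             # number of tasks (processes)
#SBATCH --cpus-per-task=1      # CPU cores per task
#SBATCH --mem=1GB              # memory for the job
#SBATCH --time=00:05:00        # wall-clock limit (hh:mm:ss)

module purge                   # start from a clean environment
module load lua                # load the lua module found via `module spider`

lua hello.lua
```

Submitted with `sbatch hello.sbatch`, this prints the assigned job id, and by default SLURM writes the program's output to a `slurm-<jobid>.out` file in the submission directory.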

@@ -1346,7 +1332,3 @@ salloc: Job allocation 56149430 has been revoked.
 ## Package Software with `Containers`
 
 ## Burst priority jobs to Cloud with `Burst nodes`
-
-## `Secure Research Data` Environments
-
docs/hpc/03_navigating_the_cluster/navigating_the_cluster.md

Lines changed: 0 additions & 6 deletions
This file was deleted.
