Commit a8bda14

Update HPC Cluster material
1 parent dcdaed8 commit a8bda14

1 file changed: source/tutorials.rst (+50 additions, -114 deletions)
@@ -874,171 +874,107 @@ For instance, point clouds properties can be modified to show elevation and also
 ClusterODM, NodeODM, SLURM, with Singularity on HPC
 ***************************************************

-Let's say that we will get ClusterODM and NodeODM images in the same folder

-Downloading and installing the images
-=====================================
+If you are on the HPC, you can write a SLURM script to schedule and set up the available NodeODM nodes that ClusterODM will be wired to. Using SLURM will decrease the amount of time and the number of steps needed to set up nodes for ClusterODM each time. This provides an easier way for users to use ODM on the HPC.

-In this example ClusterODM and NodeODM will be installed in $HOME/git
+To set up the HPC with SLURM, you must first make sure SLURM is installed.

-ClusterODM
-----------
-
-::
+The SLURM script will be different from cluster to cluster, depending on which nodes your cluster has. However, the main idea is that we want to run NodeODM once on each node; by default, each NodeODM will be running on port 3000. Apptainer takes available ports starting from port 3000, so if your node's port 3000 is open, NodeODM will run on that node by default. After that, we want to run ClusterODM on the head node and connect the running NodeODMs to it. With that, we will have a functional ClusterODM running on the HPC.

-    cd $HOME/git
-    git clone https://github.com/OpenDroneMap/ClusterODM
-    cd ClusterODM
-    singularity pull --force --disable-cache docker://opendronemap/clusterodm:latest
+Here is an example of a SLURM script assigning nodes 48, 50 and 51 to run NodeODM. You can freely change and use it depending on your system:

-ClusterODM image needs to be "installed"
 ::

-    singularity shell --bind $PWD:/var/www clusterodm_latest.sif
-
-And then in the Singularity shell
-::
+    #!/usr/bin/bash
+    #source .bashrc
+    #SBATCH --partition=8core
+    #SBATCH --nodelist=node[48,50,51]
+    #SBATCH --time=20:00:00

-    cd /var/www
-    npm install --production
-    exit
+    cd $HOME
+    cd ODM/NodeODM/

-NodeODM
--------
+    #Launch on node 48
+    srun --nodes=1 apptainer run --writable node/ &

-::
+    #Launch on node 50
+    srun --nodes=1 apptainer run --writable node/ &

-    cd $HOME/git
-    git clone https://github.com/OpenDroneMap/NodeODM
-    cd NodeODM
-    singularity pull --force --disable-cache docker://opendronemap/nodeodm:latest
+    #Launch on node 51
+    srun --nodes=1 apptainer run --writable node/ &
+    wait

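The added script assumes a writable Apptainer sandbox called `node/` already exists under `$HOME/ODM/NodeODM/`. A minimal sketch of how such a sandbox might be built beforehand, assuming the stock NodeODM image from Docker Hub (the directory layout here is illustrative):

::

    # One-time setup: build a writable NodeODM sandbox from the Docker Hub image
    cd $HOME/ODM/NodeODM/
    apptainer build --sandbox node/ docker://opendronemap/nodeodm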

-NodeODM image needs to be "installed"
-::

-    singularity shell --bind $PWD:/var/www nodeodm_latest.sif
+You can check for available nodes using sinfo:

-And then in the Singularity shell
 ::

-    cd /var/www
-    npm install --production
-    exit
+    sinfo

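If you only want to see which nodes are currently free, sinfo can also be filtered by partition and state. For example, using the 8core partition from the sample script above (both flags are standard sinfo options):

::

    # Show only idle nodes in the partition used by the sample script
    sinfo --partition=8core --states=idle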

+Run the following command to submit the SLURM script:

-
-
-Launching
-=========
-On two different terminals connected to the HPC, or with tmux (or screen...) a slurm script will start NodeODM instances.
-Then ClusterODM could be started
-
-NodeODM
--------
-Create a nodeodm.slurm script in $HOME/git/NodeODM with
 ::

-    #!/usr/bin/bash
-    #source .bashrc
-
-
-    #SBATCH -J NodeODM
-    #SBATCH --partition=ncpulong,ncpu
-    #SBATCH --nodes=2
-    #SBATCH --mem=10G
-    #SBATCH --output logs_nodeodm-%j.out
-
-    cd $HOME/git/NodeODM
+    sbatch sample.slurm

-    #Launched on first node
-    srun --nodes=1 singularity run --bind $PWD:/var/www nodeodm_latest.sif &

-    #Launch on second node
+You can also check for currently running jobs using squeue:

-    srun --nodes=1 singularity run --bind $PWD:/var/www nodeodm_latest.sif &
-
-    wait
-
-start this script with
 ::

-    sbatch $HOME/git/NodeODM/nodeodm.slurm
+    squeue -u $USER

-logs of this script are written in $HOME/git/NodeODM/logs_nodeodm-XXX.out where XXX is the slurm job number

-
-
-ClusterODM
-----------
-Then you can start ClusterODM on the head node with
+Unfortunately, SLURM does not handle assigning jobs to the head node. Hence, if we want to run ClusterODM on the head node, we have to run it there locally. After that, you can connect to the CLI and wire the NodeODMs to the ClusterODM. Here is an example following the sample SLURM script:

 ::

-    cd $HOME/git/ClusterODM
-    singularity run --bind $PWD:/var/www clusterodm_latest.sif
-
-Connecting Nodes to ClusterODM
-==============================
-Use the following command to get the nodes names where NodeODM is running
-::
+    telnet localhost 8080
+    > NODE ADD node48 3000
+    > NODE ADD node50 3000
+    > NODE ADD node51 3000
+    > NODE LIST

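The telnet example above assumes ClusterODM is already listening on port 8080 of the head node. A sketch of how it might be started there with Apptainer, assuming a ClusterODM checkout and an image pulled from Docker Hub (paths and file names are illustrative):

::

    # Start ClusterODM on the head node, outside of SLURM
    cd $HOME/ODM/ClusterODM
    apptainer pull clusterodm_latest.sif docker://opendronemap/clusterodm
    apptainer run --bind $PWD:/var/www clusterodm_latest.sif &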

-    squeue -u $USER

-ex : squeue -u $USER
-JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
-1829323 ncpu NodeODM bonaime R 24:19 2 ncpu[015-016]
+If ClusterODM is not wired correctly, you should always check which ports are actually being used to run NodeODM.

-In this case, NodeODM runs on ncpu015 and ncpu016

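One way to see which port a NodeODM instance is actually using is to list the listening sockets on the compute node itself. A sketch using standard SLURM and ss commands, with the node name taken from the sample script:

::

    # List listening TCP sockets on node48 and look for NodeODM around port 3000
    srun --nodes=1 --nodelist=node48 ss -tln | grep ':300'
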
+It is also possible to pre-populate nodes using JSON. If starting ClusterODM from Apptainer or Docker, the relevant JSON file is available at `docker/data/nodes.json`. Its contents might look similar to the following:

-Web interface
--------------
-ClusterODM administrative web interface could be used to wire NodeODMs to the ClusterODM.
-Open another shell window in your local machine and tunnel them to the HPC using the following command:
 ::

-    ssh -L localhost:10000:localhost:10000 yourusername@hpc-address
-Replace yourusername and hpc-address with your appropriate username and the hpc address.
+    [
+        {"hostname":"node48","port":"3000","token":""},
+        {"hostname":"node50","port":"3000","token":""},
+        {"hostname":"node51","port":"3000","token":""}
+    ]

-Basically, this command will tunnel the port of the hpc to your local port.
-After this, open a browser in your local machine and connect to http://localhost:10000.
-Port 10000 is where ClusterODM's administrative web interface is hosted at.
-Then NodeODMs could be added/deleted to ClusterODM
-This is what it looks like:

-.. figure:: images/clusterodm-admin-interface.png
-   :alt: Clusterodm admin interface
-   :align: center
+After you finish hosting ClusterODM on the head node and wiring it to the NodeODMs, you can try tunneling to see if ClusterODM works as expected. Open another shell window on your local machine and tunnel to the HPC using the following command:

+::

+    ssh -L localhost:10000:localhost:10000 user@hostname

-telnet
-------
-You can connect to the ClusterODM CLI and wire the NodeODMs. For the previous example:

-    telnet localhost 8080
-    > NODE ADD ncpu015 3000
-    > NODE ADD ncpu016 3000
-    > NODE LIST
+Replace user and hostname with your username and the HPC address. Basically, this command tunnels the HPC's port to your local port. After this, open a browser on your local machine and connect to `http://localhost:10000`. Port 10000 is where ClusterODM's administrative web interface is hosted. This is what it looks like:

+.. figure:: https://user-images.githubusercontent.com/70782465/214938402-707bee90-ea17-4573-82f8-74096d9caf03.png
+   :alt: Screenshot of ClusterODM's administrative web interface
+   :align: center


+Here you can check the status of the NodeODMs and even add or delete working nodes.

-Using ClusterODM and its NodeODMs
-=================================
+After that, tunnel port 3000 of the HPC to your local machine:

-Open another shell window in your local machine and tunnel them to the HPC using the following command:
 ::

-    ssh -L localhost:10000:localhost:10000 yourusername@hpc-address
-Replace yourusername and hpc-address with your appropriate username and the hpc address.
+    ssh -L localhost:3000:localhost:3000 user@hostname

-After this, open a browser in your local machine and connect to http://localhost:3000 with your browser
-Here, you can Assign Tasks and observe the tasks' processes.
+Port 3000 is ClusterODM's proxy; this is where we assign tasks to ClusterODM. Once again, connect to `http://localhost:3000` with your browser after tunneling. Here, you can Assign Tasks and observe their progress.

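Since ssh accepts several -L forwardings in a single invocation, the administrative interface and the proxy can also be tunneled together in one session if you prefer:

::

    # Forward both the admin interface (10000) and the proxy (3000) at once
    ssh -L localhost:10000:localhost:10000 -L localhost:3000:localhost:3000 user@hostname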

-.. figure:: images/clusterodm-user-interface.png
-   :alt: Clusterodm user interface
+.. figure:: https://user-images.githubusercontent.com/70782465/214938234-113f99dc-f69e-4e78-a782-deaf94e986b0.png
+   :alt: Screenshot of ClusterODM's jobs interface
    :align: center

-
-
 After adding images in this browser, you can press Start Task and see ClusterODM assigning tasks to the nodes you have wired to. Go for a walk and check the progress.
