docs/perfsonar/deployment-models.md
The primary motivation for perfSONAR deployment is test isolation, i.e., only one end-to-end test should run on a host at a time. This ensures that the test results are not impacted by other tests; otherwise it is much more difficult to interpret results, which may vary due to host effects rather than network effects. For this reason, perfSONAR measurement tools are much more accurate when running on dedicated hardware, and while it may be useful to run them on other hosts such as Data Transfer Nodes, the current recommendation is to have a dedicated measurement machine. In addition, as bandwidth testing can impact latency testing, we recommend deploying two different nodes, each focused on a specific set of tests. The following deployment options are currently available:
* **Bare metal** - preferred option, in one of two possible configurations:
  * Two bare metal servers, one for the latency node and one for the bandwidth node
  * One bare metal server running both the latency and bandwidth nodes together, provided that two NICs are available; please refer to the [dual NIC](#multiple-nic-network-interface-card-guidance) section for more details.
* **Virtual Machine** - if bare metal is not available, it is also possible to run perfSONAR on a VM; however, a set of additional requirements must be fulfilled:
  * A full-node VM is strongly preferred, having the 2 VMs (latency and bandwidth nodes) on a single bare metal server. Mixing perfSONAR VM(s) with other VMs might impact the measurements and is therefore not recommended.
  * The VM needs to be configured with SR-IOV access to the NIC(s) as well as pinned CPUs, to ensure bandwidth tests are not impacted (by the hypervisor switching CPUs during the test)
  * A successful full-speed local bandwidth test is highly recommended prior to putting the VM into production
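As a hedged illustration of the SR-IOV and CPU-pinning requirements above (this is a partial sketch using libvirt's domain XML format, not a complete or official perfSONAR configuration; the host core numbers and the VF PCI address are placeholders you must adapt to your hardware):

```xml
<domain type='kvm'>
  <!-- Pin each vCPU to a dedicated host core so the hypervisor does not
       migrate vCPUs during a bandwidth test (cores 2-5 are placeholders) -->
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
  </cputune>
  <devices>
    <!-- SR-IOV virtual function passed through to the guest;
         the PCI address below is a placeholder for your NIC's VF -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x3b' slot='0x02' function='0x0'/>
      </source>
    </hostdev>
  </devices>
</domain>
```

Pinning plus direct VF access removes the two main virtualization effects (vCPU scheduling jitter and software-switch overhead) that would otherwise show up in the measurements.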
* **Container** - perfSONAR has supported containers since version 4.1 (Q1 2018); this is documented at <https://docs.perfsonar.net/install_docker.html> but is not typically used in the same way as a full toolkit installation.
  * A Docker perfSONAR test instance can however still be used by sites that run multiple perfSONAR instances for internal testing, as this deployment model allows flexible deployment of a testpoint which can send results to a local measurement archive running on the perfSONAR toolkit node.
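For the container model, a minimal sketch of starting a testpoint might look like the following; the image name and host-networking option follow the perfSONAR Docker installation page linked above, but verify the exact commands against that documentation before use:

```shell
# Pull the perfSONAR testpoint image (image name per the perfSONAR Docker docs).
docker pull perfsonar/testpoint

# Run with host networking so the measurement tools see the real NIC;
# bridged/NAT container networking would distort throughput and latency results.
docker run -d --name perfsonar-testpoint --net=host perfsonar/testpoint
```

The testpoint can then be pointed at a local measurement archive on the site's toolkit node, as described in the bullet above.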
### perfSONAR Toolkit vs Testpoint
The perfSONAR team has documented the types of installations supported at <https://docs.perfsonar.net/install_options.html>. With the release of version 5, OSG/WLCG sites have a new option: instead of installing the full Toolkit, sites can choose to install the Testpoint bundle.
* Pros
  * Simpler deployment when a local web interface is not needed and a central measurement archive is available.
  * Less resource intensive for both memory and I/O capacity.
* Cons
  * Measurements are not stored locally
  * No web interface to use for configuration or adding local tests
  * Unable to show results in MaDDash
While sites are free to choose whatever deployment method they want, we strongly recommend the use of perfSONAR's containerized testpoint. This method was chosen as a "best practice" recommendation because of its reduced resource requirements, fewer components, and easier management.
docs/perfsonar/installation.md
The following *additional* steps are needed to configure the toolkit:
* Please register your nodes in GOCDB/OIM. For OSG sites, follow the details in OSG Topology below. For non-OSG sites, follow the details in [GOCDB](#register-perfsonar-service-in-gocdb)
* Please ensure you have added or updated your [administrative information](http://docs.perfsonar.net/manage_admin_info.html)
* You will need to configure your instance(s) to use the OSG/WLCG mesh-configuration. Please follow the steps below:
  * **For toolkit versions 5.0 and higher**, run from the command line `psconfig remote add https://psconfig.opensciencegrid.org/pub/auto/<FQDN>`. Replace `<FQDN>` with the fully qualified domain name of your host, e.g., `psum01.aglt2.org`. To verify the configuration is correct, you can run `psconfig remote list`, which should show the configured URL, e.g.
```json
=== pScheduler Agent ===
...
}
]
```
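The steps above can be sketched as a short shell session; the FQDN below is the example host from the text, and the `psconfig` commands (shown as comments, since they require a perfSONAR installation) must be run on the toolkit host itself:

```shell
# Example FQDN from the text above; replace with your host's FQDN.
FQDN="psum01.aglt2.org"
URL="https://psconfig.opensciencegrid.org/pub/auto/${FQDN}"
echo "${URL}"

# On the toolkit host you would then run:
#   psconfig remote add "${URL}"
#   psconfig remote list
```

Building the URL from the FQDN first makes it easy to double-check before registering it with `psconfig remote add`.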
* Please remove any old/stale URLs using `psconfig remote delete <URL>`
* If this is a **new instance** or you have changed the node's FQDN, you will need to notify `wlcg-perfsonar-support 'at' cern.ch` to add/update the hostname in one or more test meshes, which will then auto-configure the tests. Please indicate if you have preferences for which meshes your node should be included in (USATLAS, USCMS, ATLAS, CMS, LHCb, Alice, BelleII, etc.). You can also add any additional local tests via the web interface (see [Configuring regular tests](http://docs.perfsonar.net/manage_regular_tests.html) for details). Please check which tests are auto-added via the central meshes before adding any custom tests, to avoid duplication.
You might not be able to access the page if you are not properly registered in GOCDB.
* There are two service types for perfSONAR: net.perfSONAR.Bandwidth and net.perfSONAR.Latency. This is because we suggest installing two perfSONAR boxes at the site (one for latency tests and one for bandwidth tests), and therefore two distinct service endpoints should be published with two distinct service types. If the site cannot afford sufficient hardware for the proposed setup, it can install a single perfSONAR box, but should still publish both service types (with the same host in the "host name" field of the form).
* For each form (i.e. for each service type), fill in at least the most important information:
  * Hosting Site (drop-down menu, mandatory)
  * Service Type (drop-down menu, mandatory)
  * Host Name (free text, mandatory)
  * Host IP (free text, optional)
  * Description (free text, optional) - this field has a default value of your site name. It is used to "label" your host in our MaDDash GUI. If you want to use this field, please use something as short as possible that uniquely identifies this instance.
* Check "N" when asked "Is it a beta service"
* Check "Y" when asked "Is this service in production"
* Check "Y" when asked "Is this service monitored"