
Commit a2b0017

Formatting bullet indentation fixes

1 parent 5b89395 commit a2b0017

File tree

3 files changed: +23 −22 lines changed

3 files changed

+23
-22
lines changed

docs/network-troubleshooting/osg-debugging-document.md

Lines changed: 4 additions & 2 deletions

```diff
@@ -2,8 +2,10 @@
 
 _Edited By: J. Zurawski – Internet2, S. McKee – University of Michigan_
 
-_February 4_
-_# th_ _2013_
+_February 4th 2013_
+
+!!! note
+    This document is old but still may have useful information. Many tools it references may no longer be supported or available.
 
 # Abstract
 
```
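The note in this diff uses admonition syntax; these docs appear to be built with MkDocs and the Python-Markdown `admonition` extension, in which the body must be indented four spaces under the `!!! note` marker. A minimal sketch:

```markdown
!!! note
    Body text indented four spaces is rendered inside the note box;
    an unindented line ends the admonition.
```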

docs/perfsonar/deployment-models.md

Lines changed: 13 additions & 13 deletions

```diff
@@ -3,26 +3,26 @@
 The primary motivation for perfSONAR deployment is test isolation, i.e. only one end-to-end test should run on a host at a time. This ensures that test results are not impacted by other tests; otherwise it is much more difficult to interpret results, which may vary due to host effects rather than network effects. This means that perfSONAR measurement tools are much more accurate when running on dedicated hardware, and while it may be useful to run them on other hosts such as Data Transfer Nodes, the current recommendation is to have a dedicated measurement machine. In addition, as bandwidth testing can impact latency testing, we recommend deploying two different nodes, each focused on a specific set of tests. The following deployment options are currently available:
 
 * **Bare metal** - preferred option in one of two possible configurations:
-  * Two bare metal servers, one for latency node, one for bandwidth node
-  * One bare metal server running both latency and bandwidth node together, provided that there are two NICs available; please refer to the [dual NIC](#multiple-nic-network-interface-card-guidance) section for more details.
+    * Two bare metal servers, one for latency node, one for bandwidth node
+    * One bare metal server running both latency and bandwidth node together, provided that there are two NICs available; please refer to the [dual NIC](#multiple-nic-network-interface-card-guidance) section for more details.
 * **Virtual Machine** - if bare metal is not available then it is also possible to run perfSONAR on a VM; however, there is a set of additional requirements to fulfill:
-  * A full-node VM is strongly preferred, i.e. having 2 VMs (latency/bandwidth node) on a single bare metal host. Mixing perfSONAR VM(s) with others might have an impact on the measurements and is therefore not recommended.
-  * The VM needs to be configured with SR-IOV access to the NIC(s) as well as pinned CPUs, to ensure bandwidth tests are not impacted (by the hypervisor switching CPUs during the test)
-  * A successful full-speed local bandwidth test is highly recommended prior to putting the VM into production
+    * A full-node VM is strongly preferred, i.e. having 2 VMs (latency/bandwidth node) on a single bare metal host. Mixing perfSONAR VM(s) with others might have an impact on the measurements and is therefore not recommended.
+    * The VM needs to be configured with SR-IOV access to the NIC(s) as well as pinned CPUs, to ensure bandwidth tests are not impacted (by the hypervisor switching CPUs during the test)
+    * A successful full-speed local bandwidth test is highly recommended prior to putting the VM into production
 * **Container** - perfSONAR has supported containers since version 4.1 (Q1 2018), documented at <https://docs.perfsonar.net/install_docker.html>, but they are not typically used in the same way as a full toolkit installation.
-  * A Docker perfSONAR test instance can however still be used by sites that run multiple perfSONAR instances for internal testing, as this deployment model allows flexible deployment of a testpoint which can send results to a local measurement archive running on the perfSONAR toolkit node.
+    * A Docker perfSONAR test instance can however still be used by sites that run multiple perfSONAR instances for internal testing, as this deployment model allows flexible deployment of a testpoint which can send results to a local measurement archive running on the perfSONAR toolkit node.
 
 ### perfSONAR Toolkit vs Testpoint
 
 The perfSONAR team has documented the supported installation types at <https://docs.perfsonar.net/install_options.html>. With the release of version 5, OSG/WLCG sites have a new option: instead of installing the full Toolkit, sites can choose to install the Testpoint bundle.
 
-* Pros
-  * Simpler deployment when a local web interface is not needed and a central measurement archive is available.
-  * Less resource intensive for both memory and I/O capacity.
-* Cons
-  * Measurements are not stored locally
-  * No web interface to use for configuration or adding local tests
-  * Unable to show results in MaDDash
+* Pros
+    * Simpler deployment when a local web interface is not needed and a central measurement archive is available.
+    * Less resource intensive for both memory and I/O capacity.
+* Cons
+    * Measurements are not stored locally
+    * No web interface to use for configuration or adding local tests
+    * Unable to show results in MaDDash
 
 While sites are free to choose whatever deployment method they want, we strongly recommend the use of perfSONAR's containerized testpoint. This method was chosen as a "best practice" recommendation because of its reduced resource requirements, fewer components and easier management.
 
```
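Following the containerized-testpoint recommendation above, deployment amounts to a single `docker run`; this is a minimal sketch based on the perfSONAR container documentation linked above (image name and host networking as described there; verify flags against the current install instructions):

```shell
# Run a perfSONAR testpoint container in the background.
# Host networking is used so the measurement tools see the
# host's real network interfaces rather than a bridged one.
docker run -d --name perfsonar-testpoint --net=host perfsonar/testpoint
```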

docs/perfsonar/installation.md

Lines changed: 6 additions & 7 deletions

````diff
@@ -35,7 +35,7 @@ The following *additional* steps are needed to configure the toolkit to be used
 * Please register your nodes in GOCDB/OIM. For OSG sites, follow the details in OSG Topology below. For non-OSG sites, follow the details in [GOCDB](#register-perfsonar-service-in-gocdb)
 * Please ensure you have added or updated your [administrative information](http://docs.perfsonar.net/manage_admin_info.html)
 * You will need to configure your instance(s) to use the OSG/WLCG mesh-configuration. Please follow the steps below:
-  * **For toolkit versions 5.0 and higher**, please run from the command line `psconfig remote add https://psconfig.opensciencegrid.org/pub/auto/<FQDN>`. Replace `<FQDN>` with the fully qualified domain name of your host, e.g., `psum01.aglt2.org`. To verify the configuration is correct, you can run `psconfig remote list`, which should show the URL configured, e.g.
+    * **For toolkit versions 5.0 and higher**, please run from the command line `psconfig remote add https://psconfig.opensciencegrid.org/pub/auto/<FQDN>`. Replace `<FQDN>` with the fully qualified domain name of your host, e.g., `psum01.aglt2.org`. To verify the configuration is correct, you can run `psconfig remote list`, which should show the URL configured, e.g.
 
 ```json
 === pScheduler Agent ===
@@ -46,7 +46,6 @@ The following *additional* steps are needed to configure the toolkit to be used
 }
 ]
 ```
-
 * Please remove any old/stale URLs using `psconfig remote delete <URL>`
 
 * If this is a **new instance** or you have changed the node's FQDN, you will need to notify `wlcg-perfsonar-support 'at' cern.ch` to add/update the hostname in one or more test meshes, which will then auto-configure the tests. Please indicate if you have preferences for which meshes your node should be included in (USATLAS, USCMS, ATLAS, CMS, LHCb, Alice, BelleII, etc.). You can also add any additional local tests via the web interface (see [Configuring regular tests](http://docs.perfsonar.net/manage_regular_tests.html) for details). Please check which tests are auto-added via central meshes before adding any custom tests to avoid duplication.
````
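Taken together, the mesh-configuration steps above amount to something like the following sketch (the stale URL in the last command is purely illustrative; `$(hostname -f)` is assumed to expand to the host's FQDN):

```shell
# Point this host at the OSG/WLCG auto-generated mesh configuration;
# the FQDN goes at the end of the URL, e.g. psum01.aglt2.org
psconfig remote add "https://psconfig.opensciencegrid.org/pub/auto/$(hostname -f)"

# Verify that the URL is now configured
psconfig remote list

# Remove any old/stale URL that is still configured (example URL only)
psconfig remote delete "https://old-meshconfig.example.org/config.json"
```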
```diff
@@ -97,11 +96,11 @@ You might not be able to access the page if you are not properly registered in G
 
 * There are two service types for perfSONAR: net.perfSONAR.Bandwidth and net.perfSONAR.Latency. This is because we suggest installing two perfSONAR boxes at the site (one for latency tests and one for bandwidth tests), and therefore two distinct service endpoints should be published with two distinct service types. If the site cannot afford sufficient hardware for the proposed setup, it can install a single perfSONAR box, but should still publish both service types (with the same host in the "host name" field of the form).
 * For each form (i.e. for each service type) fill in at least the important information:
-  * Hosting Site (drop-down menu, mandatory)
-  * Service Type (drop-down menu, mandatory)
-  * Host Name (free text, mandatory)
-  * Host IP (free text, optional)
-  * Description: (free text, optional) This field has a default value of your site name. It is used to "label" your host in our MaDDash GUI. If you want to use this field, please use something as short as possible that uniquely identifies this instance.
+    * Hosting Site (drop-down menu, mandatory)
+    * Service Type (drop-down menu, mandatory)
+    * Host Name (free text, mandatory)
+    * Host IP (free text, optional)
+    * Description: (free text, optional) This field has a default value of your site name. It is used to "label" your host in our MaDDash GUI. If you want to use this field, please use something as short as possible that uniquely identifies this instance.
 * Check "N" when asked "Is it a beta service"
 * Check "Y" when asked "Is this service in production"
 * Check "Y" when asked "Is this service monitored"
```