7 changes: 6 additions & 1 deletion docs/docs/source/conf.py
@@ -29,8 +29,13 @@
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output

html_theme = "sphinx_rtd_theme"
html_static_path = ['_static', '../../../build/docs']
html_static_path = ['_static', '../../../build/docs', "js"]

templates_path = ['templates']

html_js_files = [
'keybindings.js',
]

# -- Options for MyST's markdown -----------------------------------------------
# https://myst-parser.readthedocs.io/en/latest/configuration.html
39 changes: 39 additions & 0 deletions docs/docs/source/contributor/explanations/architecture.md
@@ -0,0 +1,39 @@
## Architecture

Each appliance consists of four modules deployed in Tomcat containers as
separate [WAR](http://en.wikipedia.org/wiki/WAR_file_format_%28Sun%29)
files. For production systems, it is recommended that each module be
deployed in a separate Tomcat instance (thus yielding four Tomcat
processes). A sample storage configuration is outlined below, where we'd
use

1. Ramdisk for the short term store - in this storage stage, we'd
   store data at a granularity of an hour.
2. SSD/SAS drives for the medium term store - in this storage stage,
   we'd store data at a granularity of a day.
3. A NAS/SAN for the long term store - in this storage stage, we'd
   store data at a granularity of a year.
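As a hedged illustration, a three-stage setup like the one above is typically expressed as a list of data store URLs (for example in a site's `policies.py`). The folder paths below are assumptions for this sketch, not product defaults:

```python
# Sketch of the three storage stages above as data store URLs.
# The rootFolder paths are illustrative assumptions.
data_stores = [
    # 1. Ramdisk-backed short term store, one partition per hour
    "pb://localhost?name=STS&rootFolder=/dev/shm/sts&partitionGranularity=PARTITION_HOUR",
    # 2. SSD/SAS medium term store, one partition per day
    "pb://localhost?name=MTS&rootFolder=/data/mts&partitionGranularity=PARTITION_DAY",
    # 3. NAS/SAN long term store, one partition per year
    "pb://localhost?name=LTS&rootFolder=/nas/lts&partitionGranularity=PARTITION_YEAR",
]
```

Note how the partition granularity of each stage matches the retention granularity in the list above (hour, day, year).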

![Architecture of a single appliance](../../images/applarch.png)

A wide variety of such configurations is possible and supported. For
example, if you have a powerful enough NAS/SAN, you could write straight
to the long term store, bypassing all the stages in between.

The long term store is shown outside the appliance as an example of a
commonly deployed configuration. The appliances are not required to
share any storage, so both of the following configurations are
possible.

```{figure} ../../images/clusterinto1lts.png
:alt: Multiple appliances into one long term store

Multiple appliances sending data into one long term store
```

```{figure} ../../images/clusterintodifflts.png
:alt: Multiple appliances into different long term stores

Multiple appliances sending data into different long term
stores
```
34 changes: 34 additions & 0 deletions docs/docs/source/contributor/explanations/clustering.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,34 @@
## Clustering

While each appliance in a cluster is independent and self-contained, all
members of a cluster are listed in a special configuration file
(typically called [appliances.xml](../sysadmin/installguide#appliances_xml))
that is site-specific and identical across all appliances in the
cluster. The `appliances.xml` is a simple XML file that contains the
ports and URLs of the various webapps in that appliance. Each appliance
has a dedicated TCP/IP endpoint called `cluster_inetport` for cluster
operations such as cluster membership. On startup, the `mgmt` webapp
uses the `cluster_inetport` of all the appliances in `appliances.xml` to
discover the other members of the cluster. This is done using TCP/IP only
(no need for broadcast/multicast support).
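As a rough sketch, an entry for one cluster member might look like the fragment below; the hostname, ports, and identity value are illustrative assumptions, not defaults:

```xml
<appliances>
  <appliance>
    <!-- Hostname and ports below are assumptions for this example -->
    <identity>appliance0</identity>
    <cluster_inetport>appliance0.example.org:16670</cluster_inetport>
    <mgmt_url>http://appliance0.example.org:17665/mgmt/bpl</mgmt_url>
    <engine_url>http://appliance0.example.org:17666/engine/bpl</engine_url>
    <etl_url>http://appliance0.example.org:17667/etl/bpl</etl_url>
    <retrieval_url>http://appliance0.example.org:17668/retrieval/bpl</retrieval_url>
    <data_retrieval_url>http://appliance0.example.org:17668/retrieval</data_retrieval_url>
  </appliance>
  <!-- one <appliance> element per cluster member -->
</appliances>
```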

The business processes are all cluster-aware; the bulk of the
inter-appliance communication that happens as part of normal operation
is accomplished using JSON/HTTP on the other URLs defined in
`appliances.xml`. All the JSON/HTTP calls from the `mgmt` webapp are also
available for use in scripting; see the section on
[scripting](#scripting).
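The JSON/HTTP calls above need very little machinery to script. A minimal sketch, assuming an appliance reachable at `appliance1:17665`; the hostname, port, PV name, and helper function names are illustrative, not part of the product:

```python
# Hedged sketch of scripting the mgmt BPL endpoints over JSON/HTTP
# using only the Python standard library.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

MGMT_URL = "http://appliance1:17665/mgmt/bpl"  # assumed host/port

def bpl_url(command, **params):
    """Build the URL for a mgmt BPL command such as getAllPVs."""
    query = "?" + urlencode(params) if params else ""
    return f"{MGMT_URL}/{command}{query}"

def bpl_get(command, **params):
    """Issue the request and decode the JSON response."""
    with urlopen(bpl_url(command, **params)) as resp:
        return json.load(resp)

# Usage (requires a running appliance):
#   all_pvs = bpl_get("getAllPVs")
#   info = bpl_get("getPVTypeInfo", pv="MYPV:111:BDES")
```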

The archiving functionality is split across the members of the cluster;
that is, each PV being archived is archived by exactly one appliance in
the cluster. However, both data retrieval and business requests can be
dispatched to any appliance in the cluster; the appliance has the
functionality to route/proxy the request accordingly.

![Appliance 1 proxies data retrieval request for PV being archived by appliance 2.](../../images/proxyrequest.png)

In addition, users do not need to allocate PVs to appliances when
requesting that new PVs be archived. The appliances maintain a small set
of metrics during their operation and use these, in addition to the
measured event and storage rates, to do automated [Capacity Planning](../_static/javadoc/org/epics/archiverappliance/mgmt/archivepv/CapacityPlanningBPL.html)/load
balancing.
11 changes: 11 additions & 0 deletions docs/docs/source/contributor/explanations/index.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,11 @@
# Explanations

Background on the archiver's internal architecture, clustering model, and other useful context for understanding the codebase before contributing.

```{toctree}
:maxdepth: 2

Architecture <architecture>
Clustering <clustering>
Useful info for contributors <usefulinfo>
```
61 changes: 61 additions & 0 deletions docs/docs/source/contributor/explanations/usefulinfo.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,61 @@
## Information useful to developers

1. If you are unfamiliar with servlet containers, here's a small
   overview that a few collaborators found useful:
   - Reading up on a few basics will help; there are several good
     sources of information on the net, but don't get bogged down by
     the details.
   - Please do use Eclipse/NetBeans/IntelliJ to navigate the code.
     This makes life so much easier.
   - To get a quick sense of what a class/interface does, you can use
     the [javadoc](../_static/javadoc/index.html). Most classes and
     all interfaces have at least some Javadoc.
   - We use Tomcat purely as a servlet container; that is, a quick
     way of servicing HTTP requests using Java code.
   - A WAR file is basically a ZIP file (you can use unzip) with some
     conventions. For example, all the libraries (`.jar` files) that
     the application depends on are located in the `WEB-INF/lib`
     folder.
   - The starting point for the servlets in a WAR file is the file
     `WEB-INF/web.xml`. For example, in the mgmt.war's
     `WEB-INF/web.xml`, you can see that all HTTP requests matching
     the pattern `/bpl/*` are satisfied using the Java class
     `org.epics.archiverappliance.mgmt.BPLServlet`.
   - If you navigate to this class in Eclipse, you'll see that it is
     basically a registry of BPLActions.
   - For example, the HTTP request `/mgmt/bpl/getAllPVs` is
     satisfied using the `GetAllPVs` class. Breaking this down,
     1. `/mgmt` gets you into the mgmt.war file.
     2. `/bpl` gets you into the BPLServlet class.
     3. `/getAllPVs` gets you into the GetAllPVs class.
   - At a very high level,
     - The engine.war establishes Channel Access monitors and then
       writes the data into the short term store (STS).
     - The etl.war moves data between stores - that is, from
       the STS to the MTS and from the MTS to the LTS and so on.
     - The retrieval.war gathers data from all the stores and
       stitches it together to satisfy data retrieval requests.
     - The mgmt.war manages the other three and holds
       configuration state.
   - In terms of configuration, the most important artifact is the
     `PVTypeInfo`; you can see what one looks like by looking at
     <http://machine:17665/mgmt/bpl/getPVTypeInfo?pv=MYPV:111:BDES>
   - The main interfaces are the ones in the
     [`org.epics.archiverappliance`](../_static/javadoc/org/epics/archiverappliance/package-summary.html)
     package.
   - The
     [ConfigService](../_static/javadoc/org/epics/archiverappliance/config/ConfigService.html)
     class does all configuration management.
   - The [customization guide](../../sysadmin/guides/customization) is also a good
     guide to the ways in which this product can be customized.
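The `/bpl/*` mapping described above follows the standard servlet deployment descriptor conventions. A hedged sketch of what the relevant fragment of mgmt.war's `WEB-INF/web.xml` looks like (the `servlet-name` value is an assumption; the element names come from the servlet spec):

```xml
<!-- Sketch: route all /bpl/* requests in mgmt.war to BPLServlet -->
<servlet>
  <servlet-name>BPLServlet</servlet-name>
  <servlet-class>org.epics.archiverappliance.mgmt.BPLServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>BPLServlet</servlet-name>
  <url-pattern>/bpl/*</url-pattern>
</servlet-mapping>
```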

## ConfigService

All of the configuration in the archiver appliance is handled through
implementations of the
[ConfigService](../_static/javadoc/org/epics/archiverappliance/config/ConfigService.html)
interface. Each webapp has one instance of this interface, and this
instance is dependency-injected into the classes that need it. If all
else fails, you can create your own implementation of ConfigService and
register it in the servlet context
[listener](../_static/javadoc/org/epics/archiverappliance/config/ArchServletContextListener.html).
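As a hedged sketch of where that hook lives: a servlet context listener is declared in each webapp's `WEB-INF/web.xml`, along the lines of the fragment below (the class name comes from the javadoc link above; the surrounding elements follow the servlet spec):

```xml
<!-- Sketch: the context listener that creates the webapp's
     ConfigService instance at startup -->
<listener>
  <listener-class>org.epics.archiverappliance.config.ArchServletContextListener</listener-class>
</listener>
```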
26 changes: 26 additions & 0 deletions docs/docs/source/contributor/guides/building.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,26 @@
## Building

The EPICS archiver appliance is shared on
[GitHub](https://github.com/slacmshankar/epicsarchiverap) using Git as
the source control repository. We use [Gradle](http://gradle.org/) for
building. The default target builds the install package and the various
wars and places them into the `build/distributions` folder.

```bash
$ ls build/distributions
archappl_v1.1.0-31-ge02e1f1.dirty.tar.gz
```

The Gradle build script will build into the default build directory
`build`. You don't need to install Gradle; instead, you can use the
wrapper as `./gradlew`. Alternatively, install it and run it from the
`epicsarchiverap` folder:

```bash
$ gradle
BUILD SUCCESSFUL in 16s
12 actionable tasks: 10 executed, 2 up-to-date
```

The build can then be found in `epicsarchiverap/build/distributions` or
the war files in `epicsarchiverap/build/libs`.
14 changes: 14 additions & 0 deletions docs/docs/source/contributor/guides/formatting.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,14 @@
## Formatting with Spotless

The Gradle build script `build.gradle` includes the [Spotless Plugin](https://github.com/diffplug/spotless),
which enforces the formatting of the code. To run the formatter, run:

```bash
gradle spotlessApply
```

The build script checks the formatting of the changes in the current git
branch against the `origin/master` branch. So make sure your local
`origin/master` is up to date with the [home repository](https://github.com/slacmshankar/epicsarchiverap) master
branch to pass the CI checks.
13 changes: 13 additions & 0 deletions docs/docs/source/contributor/guides/index.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,13 @@
# How-to guides

How to set up a development environment, build the project, run the tests, and follow the project's code style.

```{toctree}
:maxdepth: 2

Prerequisites <prereqs>
Building and testing <building>
Running Tomcat <runningtomcat>
Unit testing <unittesting>
Formatting <formatting>
```
51 changes: 51 additions & 0 deletions docs/docs/source/contributor/guides/prereqs.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,51 @@
## Prerequisites

Please see the [system requirements](../../sysadmin/references/system-requirements.md)
page for prerequisites to build and test the EPICS Archiver Appliance. An
installation of Tomcat is required to build successfully; this is
located using the environment variable `TOMCAT_HOME`. Use something like

```bash
[ epicsarchiverap ]$ echo $TOMCAT_HOME
/opt/local/tomcat/latest
[ epicsarchiverap ]$ ls -l $TOMCAT_HOME/
drwxr-x--- 3 mshankar cd 4096 Oct 29 18:25 bin
-rw-r----- 1 mshankar cd 19182 May 3 2019 BUILDING.txt
drwx------ 3 mshankar cd 254 Jul 29 14:41 conf
drwx------ 2 mshankar cd 238 May 22 15:43 conf_from_install
drwxr-xr-x+ 2 mshankar cd 238 May 22 15:44 conf_original
-rw-r----- 1 mshankar cd 5407 May 3 2019 CONTRIBUTING.md
drwxr-x--- 2 mshankar cd 4096 Sep 17 18:13 lib
-rw-r----- 1 mshankar cd 57092 May 3 2019 LICENSE
drwxr-x--- 2 mshankar cd 193 Nov 11 16:58 logs
-rw-r----- 1 mshankar cd 2333 May 3 2019 NOTICE
-rw-r----- 1 mshankar cd 3255 May 3 2019 README.md
-rw-r----- 1 mshankar cd 6852 May 3 2019 RELEASE-NOTES
-rw-r----- 1 mshankar cd 16262 May 3 2019 RUNNING.txt
drwxr-x--- 2 mshankar cd 30 Sep 17 18:19 temp
drwxr-x--- 11 mshankar cd 205 Nov 11 16:58 webapps
drwxr-x--- 3 mshankar cd 22 May 22 15:55 work
[ epicsarchiverap ]$
```

By default, Tomcat sets up an HTTP listener on port 8080. You can change
this in the Tomcat `server.xml` to avoid collisions with other folks
running Tomcat. For example, here I have changed it to 17665.

```xml
<Connector port="17665" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
```

To run the unit tests, please make a copy of your Tomcat configuration
(preferably pristine) into a new folder called `conf_original`. The unit
tests that use Tomcat copy the `conf_original` folder to generate new
configurations for each test.

```bash
cd ${TOMCAT_HOME}
cp -R conf conf_original
```

Gradle will do this step for you if you forget.
12 changes: 12 additions & 0 deletions docs/docs/source/contributor/guides/runningtomcat.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,12 @@
## Running Tomcat

Start Tomcat using the `catalina.sh run` or the `catalina.sh start`
commands. The `catalina.sh` startup script is found in the Tomcat bin
folder. `catalina.sh run` starts Tomcat and leaves it running in the
console so that you can Ctrl-C to terminate. `catalina.sh start` starts
Tomcat in the background and you will need to run `catalina.sh stop` to
stop the process.

To bring up the management app, open
`http://<YourMachineHere>:17665/mgmt/ui/index.html` in a recent
version of Firefox or Google Chrome.
35 changes: 35 additions & 0 deletions docs/docs/source/contributor/guides/unittesting.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,35 @@
## Running the unit tests

Gradle creates temporary directories for all the unit tests. If you wish
to clean them first you can use `gradle clean`. You then have the
following options:

```bash
gradle test # Runs all unit tests except slow tests
gradle unitTests # Runs all unit tests
gradle epicsTests # Runs all integration tests that require only an epics installation
gradle integrationTests # Runs all tests that require a tomcat installation and optionally an epics installation
gradle flakyTests # Runs all tests that can fail due to system resources
gradle allTests # Runs all tests (not recommended)
```

Or run individual tests with:

```bash
gradle test --tests PolicyExecutionTest
gradle integrationTests --tests PvaGetArchivedPVsTest --info
```

If you cancel an integration test early, or it gets stuck for some
reason, you can kill any running Tomcats with

```bash
gradle shutdownAllTomcats
```

If you wish to run the current development version locally for testing,
you can use:

```bash
gradle testRun
```
9 changes: 9 additions & 0 deletions docs/docs/source/contributor/references/index.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,9 @@
# References

Reference documentation for the archiver codebase, including the generated Javadoc API reference.

```{toctree}
:maxdepth: 2

[TODO: Java docs]
```
8 changes: 8 additions & 0 deletions docs/docs/source/contributor/tutorials/index.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,8 @@
# Tutorials

Step-by-step tutorials for new contributors getting started with the codebase.

```{toctree}
:maxdepth: 2

```