CARDS (Clinical Archive for Data Science)

Based on Apache Sling


Prerequisites:

  • Java 11
  • Maven 3.8+
  • Python 2.5+ or Python 3.0+
  • psutil Python module (recommended)

Build:

mvn clean install

Additional options include:

mvn clean -Pclean-node to remove compiled frontend code

mvn clean -Pclean-instance to remove the sling/ folder generated by running Sling

mvn install -Pquick to skip many of the test steps

mvn install -Pskip-webpack to skip reinstalling webpack/yarn and their dependencies, and to skip regenerating the frontend with webpack

mvn install -PautoInstallBundle to inject compiled code into a running instance of Sling at http://localhost:8080/ with admin:admin.

To specify a different password, use -Dsling.password=newPassword

To specify a different URL, use -Dsling.url=https://cards.server:8443/system/console (the URL must end with /system/console to work properly)
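
For example, a hypothetical deployment to a remote Sling instance combining the options above (the server address and password are placeholders):

# Hypothetical example combining the options above; replace the URL and password with your own
mvn install -PautoInstallBundle -Dsling.password=newPassword -Dsling.url=https://cards.server:8443/system/console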

mvn install -PintegrationTests to run integration tests

A Docker image can optionally be built with mvn install -Pdocker, if Docker is installed and running, and the current user has access to the Docker daemon.

To build a self-contained Docker image:

mvn clean install -Pdocker -Ddocker.verbose -Ddocker.buildArg.build_jars=true

Run:

./start_cards.sh => the app will run at http://localhost:8080 (default port)

./start_cards.sh -p PORT to run at a different port

PROJECT_VERSION=1.0.0-SNAPSHOT PROJECT_NAME=project ./start_cards.sh to run a specific project. Projects are built on top of CARDS in their own repositories.
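
For example, assuming the project variables can be combined with the port option, a project instance could be started on port 9090 with:

# Hypothetical combination of the project variables and the port option
PROJECT_VERSION=1.0.0-SNAPSHOT PROJECT_NAME=project ./start_cards.sh -p 9090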

Other supported parameters:

--permissions SCHEME to run with a different permission scheme. Currently supported schemes are:

  • open, the default, where all registered users can create, view and edit all records
  • trusted, where only users explicitly added to the TrustedUsers group can access records
  • ownership, where all users can create new records, but only the creator of a record can view and edit it

--dev to include the content browser (Composum), accessible at http://localhost:8080/bin/browser.html

--test to include the test questionnaires

--demo to include the demo warning banner

--clarity to enable clarity integration

--locking to enable the locking/sign off abilities

--mongo to use MongoDB for Oak storage

--debug to turn on remote debugging on port 5005
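
For example, a local development instance combining several of these flags (a hypothetical combination, assuming the flags can be freely mixed) could be started with:

# Hypothetical example: run on port 9090 with the content browser, the test questionnaires,
# and the "trusted" permission scheme
./start_cards.sh -p 9090 --dev --test --permissions trusted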

By default, the app will run with username admin and password admin.

In order to use "Vocabularies" section and load vocabularies from BioPortal (bioontology.org), a BIOPORTAL_APIKEY environment variable should be set to a valid BioPortal API key. You can request a new account if you don't already have one, and the API key can be found in your profile.

A Google API key enables access to Google services such as address autocomplete. The GOOGLE_APIKEY environment variable should be set to a valid Google API key. You can obtain an API key if you don't already have one; make sure the necessary services, such as the Places service, are enabled for your key.
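
For example, assuming both keys are read from the environment by start_cards.sh, they can be exported before starting the application (the key values below are placeholders):

# Placeholder keys; replace with your own BioPortal and Google API keys
export BIOPORTAL_APIKEY=your-bioportal-api-key
export GOOGLE_APIKEY=your-google-api-key
./start_cards.sh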

Running with Docker

If Docker is installed, building with mvn install -Pdocker also creates a new image named cards/cards:latest. This image only contains the modules needed to run the basic CARDS application and will not be able to use optional modules. For production images, you can use the build_self_contained.sh script:

cd Utilities/Packaging/Docker ; ./build_self_contained.sh cards/cards:latest (or replace latest with the version you want)

Test/Development Environments

CARDS can be run as a single Docker container using the filesystem (instead of MongoDB) as the data storage back-end for Apache Sling.

docker run --rm -e OAK_FILESYSTEM=true -p 127.0.0.1:8080:8080 -it cards/cards

Production Environments

Before the Docker container can be started, an isolated network providing MongoDB must be established. To do so:

docker network create cardsbridge
docker run --rm --network cardsbridge --name mongo -d mongo

For basic testing of the CARDS Docker image, run:

docker run --rm --network cardsbridge -d -p 8080:8080 cards/cards

However, runtime data is not persisted after the container stops, so no changes will survive this way. It is recommended to first create a permanent volume that can be reused between different image instantiations and different image versions.

docker volume create --label server=production cards-production-volume

Then the container can be started with:

docker container run --rm --network cardsbridge --detach --volume cards-production-volume:/opt/cards/sling/ -p 8080:8080 --name cards-production cards/cards

Explanation:

  • docker container run creates and starts a new container
  • --rm will automatically remove the container after it is stopped
  • --network cardsbridge causes the container to connect to the network providing MongoDB
  • --detach starts the container in the background
  • --volume cards-production-volume:/opt/cards/sling/ mounts the volume named cards-production-volume at /opt/cards/sling/, where the application data is stored
  • -p 8080:8080 forwards local port 8080 to port 8080 inside the container
    • you can also bind to a specific local address and a different local port, for example -p 127.0.0.1:9999:8080
    • the second (container-side) port must remain 8080
  • --name cards-production gives a name to the container, for easy identification
  • cards/cards is the name of the image
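
For example, to make the same production container reachable only from the local machine on port 9999:

# Same production setup, bound only to the loopback interface on local port 9999
docker container run --rm --network cardsbridge --detach --volume cards-production-volume:/opt/cards/sling/ -p 127.0.0.1:9999:8080 --name cards-production cards/cards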

To enable developer mode, also add --env DEV=true to the docker run command.

To enable debug mode, also add --env DEBUG=true -p 5005:5005 to the docker run command. Note that the application will not start until a debugger is actually attached to the process on port 5005.

docker run --network cardsbridge -d -p 8080:8080 -p 5005:5005 --env DEV=true --env DEBUG=true --name cards-debug cards/cards

Environment variables

Environment variables that can be set to enable CARDS functionality can be found here.
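
For example, the API keys described earlier in this README can be passed to a containerized instance with --env (the key values are placeholders):

# Placeholder keys; replace with valid BioPortal and Google API keys
docker run --rm --network cardsbridge -d -p 8080:8080 --env BIOPORTAL_APIKEY=your-bioportal-api-key --env GOOGLE_APIKEY=your-google-api-key cards/cards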

Running with Docker-Compose

Docker-Compose can be employed to create a cluster of N MongoDB shards, M MongoDB replicas, and one CARDS instance, along with other service containers useful for testing or production deployments, such as outbound email storage or forwarding, reverse or forward proxies, or MSSQL and S3 storage back-ends.

Installing/Starting

  1. Before proceeding, ensure that the cards/cards Docker image has been built.
mvn clean install -Pdocker
  2. The optional ccmsk/neuralcr image can provide semantic analysis and term extraction from free text questions. Please build it based on the instructions available at https://github.com/ccmbioinfo/NeuralCR. Use the develop branch.

Download the pre-trained NCR models from here and un-tar. Create the directory NCR_MODEL under compose-cluster and copy in the file pmc_model_new.bin along with the directories 0 and 1 from the ncr_model_params directory.
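
A minimal sketch of this setup, assuming the downloaded archive is named ncr_model_params.tar.gz and extracts to a directory named ncr_model_params:

# Assumed archive name and layout; adjust to the actual download
tar -xzf ncr_model_params.tar.gz
mkdir -p compose-cluster/NCR_MODEL
cp ncr_model_params/pmc_model_new.bin compose-cluster/NCR_MODEL/
cp -r ncr_model_params/0 ncr_model_params/1 compose-cluster/NCR_MODEL/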

  3. Now generate the docker-compose environment.

Clone the cards-deploy-tool and install its requirements.

Run the script that generates a docker-compose file. To see a list of all its supported arguments, run:

python3 generate_compose_yaml.py --help

For example, for a test instance with the base CARDS project running on a filesystem storage, run:

python3 generate_compose_yaml.py --oak_filesystem --cards_docker_image cards/cards:latest --dev_docker_image --composum

For a production instance with the YourExperience project running on a clustered mongo database, run:

python3 generate_compose_yaml.py --mongo_cluster --shards 2 --replicas 3 --cards_docker_image cards/cards4yourexp:1.0.0
  4. Build and start the docker-compose environment.
docker-compose build
docker-compose up -d
  5. The CARDS instance should be available at http://localhost:8080/

5.1. To inspect the data split between the MongoDB shards:

docker-compose exec router mongosh
sh.status()
exit

Stopping gracefully, without losing data

  1. To stop the MongoDB/CARDS cluster:
docker-compose down

Restarting

  1. To restart the MongoDB/CARDS cluster while preserving the entered data from the previous execution:
docker-compose up -d

Cleaning up

  1. To stop the MongoDB/CARDS cluster and delete all entered data:
docker-compose down #Stop all containers
docker-compose rm #Remove all stopped containers
docker volume prune -f #Remove all stored data
./cleanup.sh #Remove the cluster configuration files
