This repository was archived by the owner on May 6, 2024. It is now read-only.

Commit 3638c5a

Merge branch 'DEV'

# Conflicts:
#	README.md
#	pytaskmanager/__init__.py

2 parents c0a867c + a9707e5; commit 3638c5a

File tree: 131 files changed, +134754 −1495 lines


.gitignore

Lines changed: 2 additions & 5 deletions

@@ -1,14 +1,11 @@
 **/virt*
 **/__pycache__
 **/*.sqlite
-client/task*
-client2
-client3
-
 *.vpp
-client/logs
 *.sublime-project
 *.sublime-workspace
 master/logs
 pytaskmanager.egg-info
 .ipynb_checkpoints
+.idea/
+node_modules

README.md

Lines changed: 42 additions & 2 deletions

@@ -16,7 +16,47 @@ Running a node/site requires a (virtual) machine that has:
 * Access to a local data store
 * Access to the internet and/or central server
 
-## Installation
-See the [wiki](https://github.com/IKNL/pytaskmanager/wiki) for detailed instructions on how to install the server and nodes.
+Next to this registry, there is a node, which polls the central registry to check for new tasks to execute. If a new task is available, the referenced container is pulled from the docker hub (e.g. `docker pull <container name>`), irrespective of whether it was pulled before. Afterwards, this container is executed (e.g. `docker run --rm -d <container name>`), with mounts for configuration files and output folders. Furthermore, the *internal* SPARQL endpoint URL is passed to the container as an environment variable (`$SPARQL_ENDPOINT=<url>`).
 
+This means the researcher is free to implement any application/script, embedded in a docker container. Sites running the node can limit which docker containers are allowed to run on their system. Containers can be restricted to a user/organization (e.g. only allowing container images matching the regular expression `myConsortium/.*:.*`), or even at the repository level (e.g. `myConsortium/myImage:.*`).
 
+# How to use it?
+
+## Prerequisites
+
+At the (hospital) site:
+
+* A Windows Server 2012R2 (or higher) machine, or a unix machine supporting Docker
+* Docker installed, with the user executing the node granted rights to perform docker commands
+* Python 2.7 or 3 (tested on both)
+
+At the central registry:
+
+* A Windows Server 2012R2 (or higher) machine, or a unix machine supporting Docker
+* Docker installed, with the user executing the node granted rights to perform docker commands
+* Python 2.7 or 3 (tested on both)
+
+## How to run?
+
+At the central registry:
+
+1. Check out this repository
+2. Run the python script master/TaskMaster.py (`python master/TaskMaster.py`). The registry will now run at port 5000, and the output is shown on the console.
+
+At the (hospital) sites:
+
+1. Check out this repository
+2. Adapt the config.json file to your site information, including the local URL of your internal SPARQL endpoint.
+3. Run the python script node/runScript.py (`python node/runScript.py`)
+4. **Optionally**: if you have a public IP address, you can also receive files directly (e.g. useful if your site is a Trusted Third Party and (encrypted) files are sent to you). To run this service, execute the python script node/FileService.py (`python node/FileService.py`).
+
+## How to build and run an algorithm?
+
+The registry is based on REST commands. The docker containers are *only* needed for execution at the sites. As a researcher, this means you have to develop a docker container which can run on *every* site.
+
+To merge results from all sites, and to run the *centralised* part of your analysis, you can develop a script on your own computer. This computer can retrieve the results from the registry, perform its calculations, and (optionally, in an iterative algorithm) post a new request to run an image on the contributing sites. This can also be the same Docker image, using an updated configuration file.
+
+# How to contribute?
+If you have any additions, you can fork this repository, develop the addition/change, and send a pull request. If you have a request for a change, please add it to the issue tracker (see "Issues" in the left navigation bar).
+
+This readme and documentation still need work, as the code for this infrastructure is still a work in progress. If you have any question regarding use, please use the issue tracker as well. We might update the readme file accordingly, but it also helps us to see where help is needed.
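The pull-and-run cycle and the image allowlist that the README additions describe can be sketched in Python. This is a minimal sketch, not code from the repository: the pattern list, the mount paths `/config` and `/output`, and the function names are all illustrative assumptions.

```python
import re

# Hypothetical allowlist: sites may restrict images to one organization
# (myConsortium/.*:.*) or a single repository (myConsortium/myImage:.*).
ALLOWED_IMAGE_PATTERNS = [r"myConsortium/.*:.*"]


def image_allowed(image, patterns=ALLOWED_IMAGE_PATTERNS):
    """Return True if the image name fully matches an allowed pattern."""
    return any(re.fullmatch(p, image) for p in patterns)


def build_run_command(image, sparql_endpoint, config_dir, output_dir):
    """Build the `docker run` invocation the README describes: remove the
    container afterwards (--rm), run detached (-d), mount folders for the
    configuration files and output, and pass the internal SPARQL endpoint
    URL as an environment variable."""
    return [
        "docker", "run", "--rm", "-d",
        "-v", "{}:/config".format(config_dir),   # mount paths are assumptions
        "-v", "{}:/output".format(output_dir),
        "-e", "SPARQL_ENDPOINT={}".format(sparql_endpoint),
        image,
    ]
```

A node would check `image_allowed` before issuing `docker pull`, and hand the list returned by `build_run_command` to `subprocess` to launch the container.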

pytaskmanager/VERSION

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-0.1dev1
+0.1dev3

pytaskmanager/__init__.py

Lines changed: 1 addition & 221 deletions

@@ -1,228 +1,8 @@
-import click
-
-import sys
 import os
-import shutil
-import yaml
-import logging
-
-# Define version and directories *before* importing submodules
-here = os.path.abspath(os.path.dirname(__file__))
 
 __version__ = ''
-
+here = os.path.abspath(os.path.dirname(__file__))
 with open(os.path.join(here, 'VERSION')) as fp:
     __version__ = fp.read()
 
-
-# default parameters for click.Path
-pparams = {
-    'exists': False,
-    'file_okay': False,
-    'dir_okay': True,
-}
-
-from . import server
-from .server import fixtures
-from .server import db
-from . import client
-from . import utest
-from . import util
-
 APPNAME = 'pytaskmanager'
-
-# ------------------------------------------------------------------------------
-# helper functions
-# ------------------------------------------------------------------------------
-def get_config_location(ctx, config, force_create):
-    """Ensure configuration file exists and return its location."""
-    if config is None:
-        # Get the location of config.yaml if not provided
-        filename = ctx.config_file
-    else:
-        # Use the config file provided as argument
-        filename = config
-
-    # Check that the config file exists and create it if necessary, but
-    # only if it was not explicitly provided!
-    if not os.path.exists(filename):
-        # We will always create a configuration file at the default location
-        # when necessary.
-        if config and not force_create:
-            click.echo("Configuration file '{}' does not exist and '--force-create' not specified!".format(filename))
-            click.echo("Aborting ...")
-            sys.exit(1)
-
-        # Make sure the directory exists
-        dirname = os.path.dirname(filename)
-
-        if dirname:
-            os.makedirs(dirname, exist_ok=True)
-
-        # Copy a default config file
-        if ctx.instance_type == 'server':
-            skeleton_file = 'server_config_skeleton.yaml'
-        elif ctx.instance_type == 'client':
-            skeleton_file = 'client_config_skeleton.yaml'
-        elif ctx.instance_type == 'unittest':
-            skeleton_file = 'unittest_config_skeleton.yaml'
-
-        src = os.path.join(here, '_data', skeleton_file)
-        dst = os.path.join(filename)
-        shutil.copy(src, dst)
-
-        if ctx.instance_type == 'server':
-            with open(dst, 'r') as fp:
-                cfg = yaml.load(fp)
-            print('-' * 80)
-            print(cfg)
-            print('-' * 80)
-
-            cfg['application']['logging']['file'] = ctx.instance_name + '.log'
-
-            with open(dst, 'w') as fp:
-                yaml.dump(cfg, fp)
-
-    return filename
-
-
-@click.group()
-def cli():
-    """Main entry point for CLI scripts."""
-    pass
-
-
-# ------------------------------------------------------------------------------
-# ptm test
-# ------------------------------------------------------------------------------
-@cli.command(name='test')
-@click.option('-c', '--config', default=None, type=click.Path(), help='location of the config file')
-def cli_test(config):
-    """Run unit tests."""
-    ctx = util.AppContext(APPNAME, 'unittest')
-    cfg_filename = get_config_location(ctx, config=None, force_create=False)
-    ctx.init(cfg_filename)
-    utest.run()
-
-
-# ------------------------------------------------------------------------------
-# ptm server
-# ------------------------------------------------------------------------------
-@cli.group(name='server')
-def cli_server():
-    """Subcommand `ptm server`."""
-    pass
-
-
-@cli_server.command(name='start')
-@click.option('-n', '--name', default='default', help='server instance to use')
-@click.option('-c', '--config', default=None, help='filename of config file; overrides --name if provided')
-@click.option('-e', '--environment', default='test', help='database environment to use')
-@click.option('--ip', default='0.0.0.0', help='ip address to listen on')
-@click.option('-p', '--port', default=5000, help='port to listen on')
-@click.option('--debug/--no-debug', default=True, help='run server in debug mode (auto-restart)')
-@click.option('--force-create', is_flag=True, help='Force creation of config file')
-def cli_server_start(name, config, environment, ip, port, debug, force_create):
-    """Start the server."""
-    click.echo("Starting server ...")
-    ctx = util.ServerContext(APPNAME, 'default')
-    # Load configuration and initialize logging system
-    cfg_filename = get_config_location(ctx, config, force_create)
-    ctx.init(cfg_filename, environment)
-
-    # Load the flask.Resources
-    server.init_resources(ctx)
-    # Run the server
-    server.run(ctx, ip, port, debug=debug)
-
-
-@cli_server.command(name='config_location')
-@click.option('-n', '--name', default='default', help='server instance to use')
-def cli_server_configlocation(name):
-    """Print the location of the default config file."""
-    # ctx = util.AppContext(APPNAME, 'server', name)
-    ctx = util.ServerContext(APPNAME, 'default')
-    cfg_filename = get_config_location(ctx, config=None, force_create=False)
-    click.echo('{}'.format(cfg_filename))
-
-
-@cli_server.command(name='passwd')
-@click.option('-n', '--name', default='default', help='server instance to use')
-@click.option('-c', '--config', default=None, help='filename of config file; overrides --name if provided')
-@click.option('-e', '--environment', default='test', help='database environment to use')
-@click.option('-p', '--password', prompt='Password', hide_input=True)
-def cli_server_passwd(name, config, environment, password):
-    """Set the root password."""
-    log = logging.getLogger('ptm')
-
-    ctx = util.AppContext(APPNAME, 'server', name)
-
-    # Load configuration and initialize logging system
-    cfg_filename = get_config_location(ctx, config)
-    ctx.init(cfg_filename, environment)
-
-    uri = ctx.get_database_location()
-    db.init(uri)
-
-    try:
-        root = db.User.getByUsername('root')
-    except Exception as e:
-        log.info("Creating user root")
-        root = db.User(username='root')
-
-    log.info("Setting password for root")
-    root.set_password(password)
-    root.save()
-
-    log.info("[DONE]")
-
-
-@cli_server.command(name='load_fixtures')
-@click.option('-n', '--name', default='default', help='server instance to use')
-@click.option('-e', '--environment', default='test', help='database environment to use')
-@click.option('-c', '--config', default=None, help='filename of config file; overrides --name if provided')
-def cli_server_load_fixtures(name, environment, config):
-    """Load fixtures for testing."""
-    click.echo("Loading fixtures.")
-    # ctx = util.AppContext(APPNAME, 'server', name)
-    ctx = util.ServerContext(APPNAME, 'default')
-
-    # Load configuration and initialize logging system
-    cfg_filename = get_config_location(ctx, config, force_create=False)
-    ctx.init(cfg_filename, environment)
-
-    fixtures.init(ctx)
-    fixtures.create()
-
-
-# ------------------------------------------------------------------------------
-# ptm client
-# ------------------------------------------------------------------------------
-@cli.group(name='client')
-def cli_client():
-    """Subcommand `ptm client`."""
-    pass
-
-
-@cli_client.command(name='config_location')
-@click.option('-n', '--name', default='default', help='client instance to use')
-def cli_server_configlocation(name):
-    """Print the location of the default config file."""
-    ctx = util.AppContext(APPNAME, 'client', name)
-    cfg_filename = get_config_location(ctx, config=None, force_create=False)
-    click.echo('{}'.format(cfg_filename))
-
-
-@cli_client.command(name='start')
-@click.option('-n', '--name', default='default', help='client instance to use')
-@click.option('-c', '--config', default=None, help='filename of config file; overrides --name if provided')
-def cli_client_start(name, config):
-    """Start the client."""
-    ctx = util.AppContext(APPNAME, 'client', name)
-
-    # Load configuration and initialize logging system
-    cfg_filename = get_config_location(ctx, config, force_create=False)
-    ctx.init(cfg_filename)
-
-    # Run the client
-    client.run(ctx)
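After this commit the module boils down to reading the `VERSION` file that sits next to `__init__.py`. The same pattern as a standalone sketch (the function name `read_version` is illustrative, not part of the repository):

```python
import os


def read_version(pkg_dir):
    """Read the package version from a VERSION file in pkg_dir, mirroring
    what the trimmed pytaskmanager/__init__.py does with the directory of
    its own __file__."""
    with open(os.path.join(pkg_dir, 'VERSION')) as fp:
        return fp.read().strip()
```

Note that the original assigns `fp.read()` unchanged; stripping the trailing newline, as done here, avoids surprises when comparing version strings.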

pytaskmanager/_data/client_config_skeleton.yaml

Lines changed: 0 additions & 19 deletions
This file was deleted.

Lines changed: 44 additions & 0 deletions

@@ -0,0 +1,44 @@
+application:
+  api_key:
+  server_url: https://api.distributedlearning.ai
+  port:
+  api_path: ''
+  server_entry_point: /token
+  delay: 30             # seconds
+  task_dir: tasks
+  docker_host: 127.0.0.1
+
+  database_uri:
+  databases:
+    default:
+
+  logging:
+    level: DEBUG        # Can be one of 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'
+    file: node.log      # Filename of logfile
+    use_console: True   # Log output to the console?
+    backup_count: 5     # Number of logs to keep
+    max_size: 1024      # Specified in kB (i.e. 1024 means a maximum file size of 1MB)
+    format: "%(asctime)s - %(name)-14s - %(levelname)-8s - %(message)s"
+    datefmt: "%H:%M:%S"
+
+# environments:
+#   test:
+#   prod:
+#     api_key:
+#     server_url: https://api.distributedlearning.ai
+#     api_path: ''
+#     server_entry_point: /token
+#     delay: 30             # seconds
+#     task_dir: tasks
+#     docker_host: 127.0.0.1
+#     database_uri:
+
+#     logging:
+#       level: DEBUG        # Can be one of 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'
+#       file: node.log      # Filename of logfile
+#       use_console: True   # Log output to the console?
+#       backup_count: 5     # Number of logs to keep
+#       max_size: 1024      # Specified in kB (i.e. 1024 means a maximum file size of 1MB)
+#       format: "%(asctime)s - %(name)-14s - %(levelname)-8s - %(message)s"
+#       # datefmt: "%Y-%m-%d %H:%M:%S"
+#       datefmt: "%H:%M:%S"
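The `logging:` section of this skeleton maps directly onto Python's `logging.handlers.RotatingFileHandler`. A minimal sketch of that mapping; the dict below hard-codes the skeleton's values rather than parsing the YAML, and `make_handler` is an illustrative name, not a function in this repository:

```python
import logging
import logging.handlers

# Mirrors the `logging:` section of the client config skeleton above.
LOG_CFG = {
    "level": "DEBUG",
    "file": "node.log",
    "use_console": True,
    "backup_count": 5,
    "max_size": 1024,   # in kB, so 1024 means 1 MB per log file
    "format": "%(asctime)s - %(name)-14s - %(levelname)-8s - %(message)s",
    "datefmt": "%H:%M:%S",
}


def make_handler(cfg):
    """Build a rotating file handler from the config section. Note that
    max_size is given in kB, while RotatingFileHandler expects bytes."""
    handler = logging.handlers.RotatingFileHandler(
        cfg["file"],
        maxBytes=cfg["max_size"] * 1024,
        backupCount=cfg["backup_count"],
        delay=True,  # do not open the file until the first record is emitted
    )
    handler.setFormatter(logging.Formatter(cfg["format"], datefmt=cfg["datefmt"]))
    handler.setLevel(getattr(logging, cfg["level"]))
    return handler
```

With `backup_count: 5` and `max_size: 1024`, the node would keep at most five rotated 1 MB log files (`node.log.1` … `node.log.5`) besides the active `node.log`.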
