Deployment Guidelines
So you have managed to deploy your apps and/or services to an OpenRiskNet Virtual Environment (VE), probably by means of creating some templates that allow simple deployment and undeployment. So are you done?
Certainly not! OpenRiskNet needs services to be of production quality, so that third parties will find them easy to deploy and so that they satisfy strict security requirements. Think of this as needing to have your services in a state that a security-conscious pharmaceutical company will be willing to use.
Here are some guidelines to follow to reach this state.
We are proposing different levels of compliance for ORN applications:

1. Manually Installable: materials and instructions exist (most likely here) for deploying the application. The process is manual, but other users should be able to follow the instructions.
2. Good Hygiene: pods do not run as the root user, have readiness and liveness probes, use resource requests and limits appropriately, and applications have authentication enabled where appropriate.
3. Console Installable: applications are deployable through the web console using either the Template Service Broker or the Ansible Service Broker (note: in future 'Operators' will likely supersede these).

We recommend that all applications aim to reach at least level 2 (Good Hygiene).
Many containers still run as the root user. This is bad practice as it introduces potential security risks; it is better to run as a specific non-privileged user. Even that is not ideal, as there is potential 'leak through' of user processes between containers.
Best is to allow your container to run as an arbitrarily assigned user. This is the default in OpenShift and means that your container has to be able to run as any user ID; you do not know in advance what that user ID will be.
Sometimes this is not practical, or even possible, so it's OK to fall back to running as a dedicated non-privileged user, but this requires the settings in your project to be changed to allow it. Avoid running as the root user unless that is absolutely necessary, and it should hardly ever be needed.
AIM: your containers can be run without the need to modify the default security settings.
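A common way to support an arbitrarily assigned user ID, sketched below with an illustrative Python base image and app layout, is to make the application files group-owned by the root group (GID 0), since the arbitrary UID that OpenShift assigns always belongs to that group:

```dockerfile
FROM python:3.9-slim

WORKDIR /app
COPY . .

# Give the root group (GID 0) the same permissions as the owner;
# the arbitrary UID assigned by OpenShift is always a member of GID 0
RUN chgrp -R 0 /app && chmod -R g=u /app

# Declare a non-root user so the image also runs safely outside OpenShift
USER 1001

CMD ["python", "app.py"]
```

The key point is that the image never assumes a specific user ID: any UID in GID 0 can read and write the paths the application needs.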
The VE is likely to have an `admin` user. It is tempting to use this user because you can do anything. The problem with this user is that you can do anything! So please avoid the OpenShift `admin` user. Where possible, use `admin` to create a user that is specific to your deployment/project with limited privileges (see SCC below), and instead of adding the required privileges to your new user, ask yourself whether you can avoid the privilege altogether.

Incidentally, a low-privilege `developer` user may already be available on the VE. If so, and you'd prefer not to create your own user, use `developer` to create the project and deploy the application.
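As a sketch (the server URL, project, and template names are all illustrative), logging in as the `developer` user and deploying into a dedicated project might look like this:

```shell
# Log in to the VE as the low-privilege developer user
oc login https://openshift.example.org:8443 -u developer

# Create a project dedicated to this deployment
oc new-project my-app-project

# Instantiate the application template in the new project
oc process -f my-template.yaml | oc create -f -
```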
OpenShift users create projects and deploy applications, but actions performed by the launched Pods take place using a Service Account whose actions are constrained by a Security Context Constraint (SCC). If you do not specify a service account in your application, a built-in (`default`) account and SCC are used. This account has limited privileges, especially with regard to allowing containers that need to run as root.

Resist the temptation, easy though it is, to adjust the capabilities of the `default` service account. Instead create (using the `admin` account) your own account, which is typically done project-by-project. You can then adjust the privileges of your project-specific service account without disturbing the system-wide `default`.
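As a sketch (account and project names are illustrative), creating a project-specific service account and adjusting its SCC with the `oc` command line might look like this:

```shell
# Create a service account in the project (run as a suitably privileged user)
oc create sa my-app-sa -n my-project

# Only if the container genuinely must run as a fixed non-root user,
# grant the new service account the 'anyuid' SCC
oc adm policy add-scc-to-user anyuid -z my-app-sa -n my-project
```

Note that this grants the extra privilege to your own service account only; the system-wide `default` account is left untouched.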
An OpenShift blog that describes Service Accounts and the related topic of Security Context Constraints (SCCs), which control the actions that a Pod can perform and what it has the ability to access, can be found in the article Understanding Service Accounts and SCCs.
With a service account created in your project you define the Service Account alongside the container definition in your application template. Ideally your template should be parameterised so that significant information can be provided externally. A typical template definition might look like this (edited for readability):

```yaml
parameters:
- name: APP_SA
  value: my-service-account
objects:
- kind: DeploymentConfig
  apiVersion: v1
  spec:
    template:
      spec:
        serviceAccountName: ${APP_SA}
```
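Such a template can then be instantiated with the parameter supplied on the command line; for example (the file and account names are illustrative):

```shell
oc process -f template.yaml -p APP_SA=my-app-sa | oc create -f -
```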
The expectation nowadays is that HTTPS should be used for all traffic and that all certificates should be signed by a trusted CA. Use of plain HTTP or self-signed certificates is frowned on.
The ACME Controller tool that is expected to be deployed to any ORN VE makes this very simple to achieve. All that is needed is to add the following annotation to your route, and Let's Encrypt certificates will be generated and automatically renewed for your routes.
```yaml
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
```
As a guide, it's best to set the value of this annotation to `false` while you are setting things up and then switch it to `true` when you are finished, as Let's Encrypt has fairly strict quotas on the number of certificates that can be generated and it's easy to exceed these when testing.
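For illustration, a complete Route carrying the annotation might look like this (the name and host are hypothetical):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  host: my-app.example.org
  to:
    kind: Service
    name: my-app
  tls:
    # Terminate TLS at the router and redirect plain HTTP to HTTPS
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
```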
Let users know that your application is available for use. On the current ORN production site this involves adding a link to your app (the public routes) on this landing page. To do this, edit the `index.html` in this GitHub repo. Committing a change to this repo will result in the page automatically being redeployed a few minutes later.
Also, add your application to the list of services on the main ORN website.
Make your services discoverable by the ORN Service Registry by adding annotations to your Service definitions that the registry will discover when your services start or stop.
This is described in Annotating-your-service-to-make-it-discoverable.
Make sure your pods have health checks. OpenShift uses these to determine whether to attach any designated Service to your Pod and also to determine whether the application behind the service is healthy.
They are extremely valuable, and our FAQ on the topic, OpenShift Container Probes, provides some helpful guidance.
Use of a liveness probe is beneficial but remember that OpenShift will restart your Pod if this probe fails.
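As a sketch, assuming a web application that exposes a health endpoint on port 8080 (the path and timings are illustrative and should be tuned to your application), probes are defined on the container in the Pod spec:

```yaml
containers:
- name: my-app
  # Readiness: traffic is only routed to the Pod while this succeeds
  readinessProbe:
    httpGet:
      path: /health
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 10
  # Liveness: the Pod is restarted if this fails repeatedly
  livenessProbe:
    httpGet:
      path: /health
      port: 8080
    initialDelaySeconds: 60
    periodSeconds: 30
    failureThreshold: 3
```

Giving the liveness probe a longer initial delay and a failure threshold helps avoid restarting a Pod that is merely slow to start.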
Define limits for CPU and memory for your pods. See here for more details.
This allows K8S to better schedule pods on the cluster and to kill misbehaving pods.
TODO - describe this further.
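Requests and limits are set per container in the Pod spec; the values below are purely illustrative and should be tuned to your application's actual needs:

```yaml
containers:
- name: my-app
  resources:
    # Requests: guaranteed minimum, used by the scheduler to place the Pod
    requests:
      cpu: 100m
      memory: 256Mi
    # Limits: hard ceiling; exceeding the memory limit gets the Pod killed
    limits:
      cpu: "1"
      memory: 1Gi
```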
Add your application as a `client` in the OpenRiskNet Realm in Keycloak and set up your application to require authenticated users. This ensures that users get a good Single Sign-On experience and that we can effectively monitor how much the different ORN applications are being used.
See Adding-Authentication for guidelines on how to do this.
An ORN VE provides a number of infrastructure components. If your application currently provides any of these itself, consider switching to the shared components so that you can avoid managing them yourself.
The current infrastructure components are:
- PostgreSQL database
- RabbitMQ message queue
- Keycloak for SSO (Red Hat SSO)
If you see that you are providing something that could be better handled as an infrastructure component (e.g. a different type of database), then work with us to make this happen.
Managers of other VEs will want to deploy your application. Make this easy by adding it to the OpenShift Service Catalog (not to be confused with the OpenRiskNet Registry).
If you are using a Template to deploy, you are probably halfway there already and can use the Template Service Broker.
More complex deployments can use Ansible Playbook Bundles with the Ansible Service Broker.
TODO - describe this further.
You should aim to have your application automatically re-deployed when it is updated. There are several approaches, but the two most common are:
- Redeploy whenever a new Docker image is pushed to Docker Hub.
- Rebuild and deploy when updated code is pushed to GitHub.
TODO - describe this further.
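For the Docker Hub approach, one common pattern (names are illustrative) is to track the external image with an ImageStream and let an ImageChange trigger on the DeploymentConfig redeploy when a new tag is imported:

```yaml
triggers:
- type: ConfigChange
- type: ImageChange
  imageChangeParams:
    automatic: true
    containerNames:
    - my-app
    from:
      kind: ImageStreamTag
      name: my-app:latest
```

The ImageStream can be kept in sync with Docker Hub by importing the image on a schedule, e.g. `oc import-image my-app --scheduled=true`.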