This project is an ActiveMQ Artemis Self Provisioning Plugin for the Administrator
perspective of the OpenShift console. It requires at least OpenShift 4.18
and is compatible with OpenShift 4.19.
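If you're unsure which version your cluster runs, `oc version` prints both the client and the server versions once you're logged in:

```sh
# the "Server Version" line must report OpenShift 4.18 or newer
oc version
```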
To run the local development environment you need to:
- have access to a local or remote OpenShift cluster
- have the operator installed on the cluster
- have the cert-manager operator installed on the cluster
- have the plugin running
- have the console running
In order to run the project you need access to an OpenShift cluster.
If you don't have access to a remote one, you can deploy one on your machine
with `crc`.
Follow the documentation: https://access.redhat.com/documentation/en-us/red_hat_openshift_local/2.34/html-single/getting_started_guide/index#introducing
**Warning**
If `crc` gets stuck on the step `Waiting for kube-apiserver availability` or `Waiting until the user's pull secret is written to the instance disk...`, you might need to switch the network mode to user:

```sh
crc config set network-mode user
```
Once your environment is set up, simply run `crc start` to start your cluster.
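For a first-time setup, the sequence typically looks like this (a sketch; `crc setup` only needs to run once per machine):

```sh
crc setup            # one-time host configuration for crc
crc start            # start the local OpenShift cluster
eval $(crc oc-env)   # put the bundled oc binary on your PATH
```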
Log in with `oc`, depending on whether your environment is remote or local:

```sh
# remote cluster (adapt the address to your cluster)
oc login -u kubeadmin https://api.ci-ln-x671mxk-76ef8.origin-ci-int-aws.dev.rhcloud.com:6443
# local crc cluster
oc login -u kubeadmin https://api.crc.testing:6443
```
The plugin requires access to the operator to function. You can either get the operator from the OperatorHub or from the upstream repo.
Navigate to the OperatorHub on the console and search for: `Red Hat Integration
- AMQ Broker for RHEL 8 (Multiarch)`. After installation, wait for the operator container to be up and running.
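You can also watch the operator pod from the command line; the namespace depends on where you installed the operator (`openshift-operators` is the usual default for OperatorHub installs):

```sh
oc get pods -n openshift-operators -w
```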
**Warning**
If the OperatorHub is not accessible, try forcing its redeployment:

```sh
oc delete pods --all -n openshift-marketplace
```

See crc-org/crc#4109 for reference.
Clone the operator repository, then run `./deploy/install_opr.sh` to install the
operator onto your cluster:

```sh
git clone git@github.com:arkmq-org/activemq-artemis-operator.git
cd activemq-artemis-operator
./deploy/install_opr.sh
```
**Tip**
If you need to redeploy the operator, first call `./deploy/undeploy_all.sh`.
**Important**
The `install_opr.sh` script will try to deploy on OpenShift with the `oc`
command. If `oc` is not available, it will fall back to `kubectl`. Make sure your
OpenShift cluster is up and running and that `oc` is connected to it before
running the install.
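A quick way to confirm that `oc` is connected before running the install:

```sh
oc whoami --show-server   # prints the API server URL of the current session
```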
The plugin requires access to the cert-manager operator for some of its features.
Navigate to the OperatorHub on the console and search for Cert-manager.
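Once cert-manager is installed, you can confirm its pods are up before applying the issuers below (assuming the default `cert-manager` namespace):

```sh
oc get pods -n cert-manager
```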
Apply this:

```sh
oc apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-selfsigned-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: my-selfsigned-ca
  secretName: root-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: my-ca-issuer
spec:
  ca:
    secretName: root-secret
EOF
```

Then set up the console bridge authentication for the mode you plan to use:

- For HTTP:

  ```sh
  cd bridge-auth-http
  ./setup.sh
  ```

- For HTTPS:

  ```sh
  cd bridge-auth-https
  ./setup.sh
  ```
In one terminal window, run:

```sh
yarn install
yarn start
```

Note: `yarn run start` starts the plugin in http mode. If you want the plugin to run in https mode, run:

```sh
yarn run start-tls
```
In another terminal window, run:

```sh
oc login
yarn run start-console
```

(`yarn run start-console` requires Docker, podman, or another Open Containers Initiative compatible container runtime.)
This will run the OpenShift console in a container connected to the cluster you've logged into. The plugin HTTP server runs on port 9001 with CORS enabled. Navigate to http://localhost:9000 to see the running plugin.
To view our plugin on OpenShift, navigate to the Workloads section. The plugin will be listed as Brokers.
If you want the console to run in https mode, run:

```sh
yarn run start-console-tls
```

This command will run the console in https mode on port 9442. The console URL is https://localhost:9442.
Note: Running the console in https mode requires the plugin to be running in https mode too.
The console in https mode requires a private key and a server certificate, generated
with the `openssl` command. They are located under the `console-cert` directory: `domain.key` is the
private key and `domain.crt` is the server certificate. Please read `console-cert/readme`
for instructions on how they are generated.
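For illustration only (the authoritative recipe is in `console-cert/readme`), a self-signed key and certificate pair can be produced along these lines; the flags and subject here are assumptions:

```sh
# illustrative flags, not necessarily the project's exact recipe
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout domain.key -out domain.crt \
  -subj "/CN=localhost"
```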
To run the console in https mode, you need to mount the private key and server certificate into the
docker container and pass their locations to the console using the `BRIDGE_TLS_CERT_FILE` and
`BRIDGE_TLS_KEY_FILE` environment variables respectively. Please see `start-console-tls.sh`
for details.
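A rough sketch of that wiring (the console image is a placeholder here; `start-console-tls.sh` is the authoritative reference):

```sh
# sketch only: mount the certs and point the console bridge at them
docker run -it --rm \
  -v "$PWD/console-cert:/mnt/console-cert:z" \
  -e BRIDGE_TLS_CERT_FILE=/mnt/console-cert/domain.crt \
  -e BRIDGE_TLS_KEY_FILE=/mnt/console-cert/domain.key \
  <console-image>   # placeholder for the console image the script uses
```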
- Build the image:

  ```sh
  docker build -t quay.io/arkmq-org/activemq-artemis-self-provisioning-plugin:latest .
  ```

- Run the image:

  ```sh
  docker run -it --rm -d -p 9001:80 quay.io/arkmq-org/activemq-artemis-self-provisioning-plugin:latest
  ```

- Push the image to the image registry:

  ```sh
  docker push quay.io/arkmq-org/activemq-artemis-self-provisioning-plugin:latest
  ```
You can deploy the plugin to a cluster by running the following command:

```sh
./deploy-plugin.sh [-i <image> -n]
```

Without any arguments, the plugin will run in https mode on port 9443.

The optional `-i <image>` (or `--image <image>`) argument allows you to pass in the plugin image. If not specified, the default
`quay.io/arkmq-org/activemq-artemis-self-provisioning-plugin:latest` is deployed. For example:

```sh
./deploy-plugin.sh -i quay.io/<repo-username>/activemq-artemis-self-provisioning-plugin:1.0.1
```

The optional `-n` (or `--nossl`) argument disables https and makes the plugin run in http mode on port 9001.
For example:

```sh
./deploy-plugin.sh -n
```

The `deploy-plugin.sh` script uses the `oc kustomize` (built-in kustomize) command to configure and deploy the plugin using
resources and patches defined under the `./deploy` directory.
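To preview what will be applied without touching the cluster, you can render the kustomization yourself (assuming `./deploy` contains a standard `kustomization.yaml`; the script may apply additional patches):

```sh
oc kustomize ./deploy
```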
To undeploy the plugin, run:

```sh
./undeploy-plugin.sh
```

If you want a broker that is able to perform a token review, you will need access to a service account with sufficient rights. To create one, apply the following YAML on your environment:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ex-aao-sa
  namespace: default
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ex-aao-sa-crb
subjects:
  - kind: ServiceAccount
    name: ex-aao-sa
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: 'system:auth-delegator'
```

**Important:**
- The service account must reside in the same namespace as the broker you want to deploy.
- The role binding to `system:auth-delegator` has to be cluster-wide, otherwise the broker won't be allowed to perform token reviews.
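You can verify that the binding took effect with an impersonated access check (assuming the `default` namespace used above):

```sh
oc auth can-i create tokenreviews \
  --as=system:serviceaccount:default:ex-aao-sa
```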
While we wait for the 7.13 broker to become available, any broker that intends to
perform a token review should have the following env in its spec:

```yaml
env:
  - name: KUBERNETES_CA_PATH
    value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  - name: KUBERNETES_SERVICE_HOST
    value: 'api.crc.testing'
  - name: KUBERNETES_SERVICE_PORT
    value: '6443'
```

Assuming the service account `ex-aao-sa` is available in the same
namespace as the broker you want to deploy, and that you have created via the UI
a custom JAAS config allowing your username to have admin access to the broker,
your YAML should look like this:
```yaml
apiVersion: broker.amq.io/v1beta1
kind: ActiveMQArtemis
metadata:
  name: ex-aao
  namespace: default
spec:
  env:
    - name: KUBERNETES_CA_PATH
      value: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    - name: KUBERNETES_SERVICE_HOST
      value: 'api.crc.testing'
    - name: KUBERNETES_SERVICE_PORT
      value: '6443'
  ingressDomain: apps-crc.testing
  console:
    expose: true
  deploymentPlan:
    image: placeholder
    requireLogin: false
    size: 1
    podSecurity:
      serviceAccountName: ex-aao-sa
    extraMounts:
      secrets:
        - custom-jaas-config
```
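Save the resource to a file (for example `broker.yaml`, a name chosen here for illustration), apply it, and watch the broker start:

```sh
oc apply -f broker.yaml
oc get activemqartemis ex-aao -n default   # oc accepts the singular resource name
oc get pods -n default -w                  # broker pods are typically named <name>-ss-<ordinal>
```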