
Commit 7d202b6

[v1.16.6] sync components
1 parent 17dc48d commit 7d202b6


7 files changed: +53 −44 lines changed


README.md

Lines changed: 19 additions & 10 deletions

```diff
@@ -1,25 +1,34 @@
-# Managed Kubernetes Hosting
+# Kubernetes Clusters
 
-Managed Kubernetes clusters are highly available and multi-tenant deployments across availability regions in public or private cloud.
+The clusters can be automatically installed on top of Jelastic PaaS in two ways:
 
-The clusters can be automatically installed on top of the Jelastic PaaS in two ways:
+* **Clean** cluster with pre-deployed HelloWorld example
+* **Custom** helm or stack deployment via shell command
 
-* **Clean Kubernetes** cluster with two workers and one master node. The number of worker and master nodes can be changed while installation steps or afterwards. All the newly added nodes are automatically connected.
+Also, there are different topologies available out-of-box:
+* **Development** - one master (1) and one scalable worker (1+)
+* **Production** - multi master (3) with API balancers (2+) and scalable workers (2+)
+
+The number of worker nodes can be changed after the initial installation is completed. All the newly added nodes are automatically connected to the cluster.
 
-* **Pre-packaged application** within Kubernetes cluster. While installation, there can be provided the URL to a YAML stack manifest. As a result, the cluster will be created already with the needed custom application deployed.
-
-<img src="/images/Managed-Kubernetes-Hosting.png" width="500" alt="Managed Kubernetes Hosting Multi-Cloud" />
+<img src="https://jelastic.com/wp-content/themes/salient/img/templates/kubernetes-cloud-services/kube.webp" width="400" alt="Managed Kubernetes Hosting Multi-Cloud" />
 
 ## Cloud Native Applications
 
 Easily containerize and migrate existing applications, run hyper scalable microservices and keep them resilient to failures, get extra savings due to more efficient resource utilization, implement CI/CD automation and develop at a higher level of speed in shorter release cycles.
 
 Jelastic PaaS functionality allows to provision the clusters across multiple clouds and on-premises with no vendor lock-in, automatically scale them vertically and horizontally, start from one instance and grow up to thousands, manage the workloads via intuitive UI, as well as automate the DevOps processes with open API and Cloud Scripting.
 
-## Demo and Trial
-[Send us request](https://jelastic.com/managed-auto-scalable-clusters-for-business/) to get early access and receive a Kubernetes cluster free trial for a month.
+## Related Articles and Materials
+* [Managed Kubernetes Hosting with Multi-Cloud Availability](https://jelastic.com/kubernetes-hosting/)
+* [Kubernetes Cluster Setup with Automated Scaling and Pay-per-Use Pricing](https://jelastic.com/blog/kubernetes-cluster-scaling-pay-per-use-hosting/)
+* [Scaling Kubernetes on Application and Infrastructure Levels](https://jelastic.com/blog/scaling-kubernetes/)
+* [Kubernetes Cluster Overview](https://docs.jelastic.com/kubernetes-cluster)
+* [Kubernetes Cluster: Package Installation](https://docs.jelastic.com/kubernetes-cluster-installation)
+* [Kubernetes Cluster: Versions & Change Logs](https://docs.jelastic.com/kubernetes-cluster-versions)
 
-For large scale projects interested in scalable Kubernetes hosting, Jelastic provides professional services to [assist while migration](https://jelastic.com/managed-auto-scalable-clusters-for-business/).
+## Demo and Trial
+For testing in public cloud please sign up at one of [Jelastic Cloud Providers with Kubernetes support](https://jelastic.cloud/?featuresSupport=K8S). For testing in multi-cloud, hybrid cloud or private cloud setups please [send us request](https://jelastic.com/managed-auto-scalable-clusters-for-business/).
 
 ## Managed Hosting Business
 
```

addons/nginx/mandatory.yaml

Lines changed: 2 additions & 3 deletions

```diff
@@ -218,7 +218,7 @@ spec:
         kubernetes.io/os: linux
       containers:
         - name: nginx-ingress-controller
-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.28.0
+          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
           args:
             - /nginx-ingress-controller
             - --configmap=$(POD_NAMESPACE)/nginx-configuration
@@ -288,8 +288,7 @@ metadata:
     app.kubernetes.io/part-of: ingress-nginx
 spec:
   limits:
-    - default:
-      min:
+    - min:
         memory: 90Mi
         cpu: 100m
       type: Container
```
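A quick way to confirm which controller image a checked-out manifest pins is to grep the tag out of the file. A minimal, self-contained sketch; the inline heredoc fragment is a stand-in for `addons/nginx/mandatory.yaml`, not the real file:

```shell
# Hypothetical sanity check: extract the controller image tag from a manifest.
# The heredoc below is a stand-in for addons/nginx/mandatory.yaml.
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
EOF

# Pull out "nginx-ingress-controller:<tag>" and keep only the tag part.
tag=$(grep -o 'nginx-ingress-controller:[0-9][0-9.]*' "$manifest" | cut -d: -f2)
echo "$tag"   # → 0.30.0
```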

addons/upgrade.jps

Lines changed: 3 additions & 3 deletions

```diff
@@ -147,12 +147,12 @@ actions:
     - cmd[${this}]: |-
         service kubelet restart
         helm repo update
-        wget -qO- https://github.com/derailed/k9s/releases/download/v0.13.8/k9s_0.13.8_Linux_x86_64.tar.gz | tar xz -C /usr/bin k9s
-        wget -qO- https://github.com/derailed/popeye/releases/download/v0.6.2/popeye_0.6.2_Linux_x86_64.tar.gz | tar xz -C /usr/bin popeye
+        wget -qO- https://github.com/derailed/k9s/releases/download/v0.17.6/k9s_Linux_x86_64.tar.gz | tar xz -C /usr/bin k9s
+        wget -qO- https://github.com/derailed/popeye/releases/download/v0.7.1/popeye_Linux_x86_64.tar.gz | tar xz -C /usr/bin popeye
         wget -nv https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64 -O /usr/bin/stern
         chmod +x /usr/bin/stern
         /usr/bin/stern --completion=bash > /etc/bash_completion.d/stern.bash
-        kubectx_version=0.7.1
+        kubectx_version=0.8.0
         wget -qO- https://github.com/ahmetb/kubectx/archive/v${kubectx_version}.tar.gz | tar xz --strip-components=1 -C /usr/bin kubectx-${kubectx_version}/kubectx kubectx-${kubectx_version}/kubens
         wget -qO- https://github.com/ahmetb/kubectx/archive/v${kubectx_version}.tar.gz | tar xz --strip-components=2 -C /etc/bash_completion.d kubectx-${kubectx_version}/completion/kubens.bash kubectx-${kubectx_version}/completion/kubectx.bash
         kubectl get daemonset weave-net -n kube-system && {
```
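The `wget -qO- … | tar xz -C /usr/bin <name>` lines above rely on tar's ability to extract a single named member from a streamed archive into a target directory. A minimal sketch of that pattern against a locally built tarball (file names are placeholders; no network access involved):

```shell
# Sketch of the single-member streamed extraction used above, exercised
# against a throwaway local tarball instead of a GitHub release asset.
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/bin"

# Build an archive containing several files, like a release tarball would.
printf 'fake-binary' > "$workdir/k9s"
printf 'license text' > "$workdir/LICENSE"
tar czf "$workdir/release.tar.gz" -C "$workdir" k9s LICENSE

# Stream the archive and extract ONLY the named member into the target dir,
# mirroring `wget -qO- <url> | tar xz -C /usr/bin k9s`.
cat "$workdir/release.tar.gz" | tar xz -C "$workdir/bin" k9s

ls "$workdir/bin"   # only k9s was extracted; LICENSE was skipped
```

The same idea, with `--strip-components`, lets the kubectx lines drop the versioned top-level directory from the archive paths before extraction.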

manifest.jps

Lines changed: 4 additions & 4 deletions

```diff
@@ -154,7 +154,7 @@ actions:
         mkdir -p $HOME/.kube
         cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
         chown root:root $HOME/.kube/config
-        wget -qO- https://github.com/derailed/k9s/releases/download/v0.17.5/k9s_Linux_x86_64.tar.gz | tar xz -C /usr/bin k9s
+        wget -qO- https://github.com/derailed/k9s/releases/download/v0.17.6/k9s_Linux_x86_64.tar.gz | tar xz -C /usr/bin k9s
         wget -qO- https://github.com/derailed/popeye/releases/download/v0.7.1/popeye_Linux_x86_64.tar.gz | tar xz -C /usr/bin popeye
         wget -nv https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64 -O /usr/bin/stern
         chmod +x /usr/bin/stern
@@ -278,7 +278,7 @@ actions:
 
   connect-storage:
     if (${settings.storage:false}):
-      cmd[${nodes.k8sm.master.id}]: helm install stable/nfs-client-provisioner --set nfs.server=${nodes.storage.master.address} --set nfs.path=/data --set nfs.mountOptions='{soft,proto=tcp}' --set replicaCount=3 --set storageClass.defaultClass=true --set storageClass.allowVolumeExpansion=true --set storageClass.name=jelastic-dynamic-volume
+      cmd[${nodes.k8sm.master.id}]: helm install stable/nfs-client-provisioner --name nfs-client-provisioner --set nfs.server=${nodes.storage.master.address} --set nfs.path=/data --set nfs.mountOptions='{soft,proto=tcp}' --set replicaCount=3 --set storageClass.defaultClass=true --set storageClass.allowVolumeExpansion=true --set storageClass.name=jelastic-dynamic-volume
 
   removeWorker:
     cmd[${nodes.k8sm.master.id}]: |-
@@ -418,7 +418,7 @@ actions:
       sep: ','
 
   install-node-problem-detector:
-    - cmd[${nodes.k8sm.master.id}]: helm install stable/node-problem-detector --name node-problem-detector --set image.tag=v0.7.0
+    - cmd[${nodes.k8sm.master.id}]: helm install stable/node-problem-detector --name node-problem-detector
 
   install-metallb:
     - cmd[${nodes.k8sm.master.id}]: |-
@@ -565,7 +565,7 @@ addons:
     fields:
       - type: displayfield
         hideLabel: true
-        markup: This addon provides Kubernetes and GitLab integration. Please select the GitLab environment from the list:
+        markup: This addon provides Kubernetes and GitLab integration. Please select the GitLab environment from the list.
       - type: displayfield
         hideLabel: true
       - type: envlist
```

scripts/beforeinit.js

Lines changed: 1 addition & 1 deletion

```diff
@@ -10,7 +10,7 @@ var envsCount = jelastic.env.control.GetEnvs({lazy: true}).infos.length,
     nodesPerDevEnvWOStorage = 2,
     nodesPerMasterNG = 3,
     nodesPerWorkerNG = 2,
-    maxCloudlets = 6,
+    maxCloudlets = 16,
     markup = "", cur = null, text = "used", prod = true, dev = true, prodStorage = true, devStorage = true, storage = false;
 
 var quotas = jelastic.billing.account.GetQuotas(perEnv + ";"+maxEnvs+";" + perNodeGroup + ";" + maxCloudletsPerRec).array;
```

scripts/beforeinstall.js

Lines changed: 1 addition & 1 deletion

```diff
@@ -11,7 +11,7 @@ var resp = {
     tag: tag,
     scalingMode: "stateless",
     nodeGroup: "k8sm",
-    addons: ["conf-k8s-addon", "upgrade-k8s-addon", "gitlab-k8s-addon"],
+    addons: ["conf-k8s-addon", "upgrade-k8s-addon"],
     displayName: "Master",
     extip: false,
     env: {
```

scripts/check-install.sh

Lines changed: 23 additions & 22 deletions

```diff
@@ -247,32 +247,33 @@ fi
 }
 
 checkNginxIngressController() {
-readyReplicas=$(kubectl get deployment/"${NGINX_DEPLOYMENT_NAME}" -o=jsonpath='{.status.readyReplicas}' -n ingress-nginx 2> /dev/null)
+
+PODNAME=$(kubectl get pods -l=app.kubernetes.io/name=ingress-nginx -n ingress-nginx -o jsonpath='{.items[0].metadata.name}' 2> /dev/null)
 if [ $? -ne 0 ]; then
-  printError "${INGRESS_CONTROLLER} deployment not found. Check installation logs (CS) and K8s events in ${K8S_EVENTS_LOG_FILE} on a master node"
-  INGRESS_STATUS="FAIL"
-  WITH_ERROR="true"
+  DAEMON_SET=$(kubectl get ds/nginx-ingress-controller -n ingress-nginx > /dev/null)
+  if [ $? -ne 0 ]; then
+    printError "Failed to find NGINX pod because of a missing daemon set"
+    INGRESS_STATUS="FAIL"
+    WITH_ERROR="true"
+  else
+    printError "Failed to find NGINX pod, though NGINX daemon set was found"
+    INGRESS_STATUS="FAIL"
+    WITH_ERROR="true"
+  fi
+  printError "Check K8s events in ${K8S_EVENTS_LOG_FILE} on a master node"
 else
-  if [ "${readyReplicas}" -lt 1 ]; then
-    printInfo "${NGINX_DEPLOYMENT_NAME} deployment isn't scaled to 1. Checking pods logs..."
-    NGINX_POD=$(kubectl get pods -l=app.kubernetes.io/name=ingress-nginx -n ingress-nginx --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
-    if [ -z $NGINX_POD ]; then
-      printError "Failed to find ${NGINX_POD} pod"
-      INGRESS_STATUS="FAIL"
-      WITH_ERROR="true"
-    else
-      printError "${NGINX_DEPLOYMENT_NAME} is not running"
-      printError "${NGINX_DEPLOYMENT_NAME} pod logs are available in ${K8S_LOG_DIR}/${NGINX_POD}.log"
-      printError "Inspect K8s events in ${K8S_EVENTS_LOG_FILE}"
-      kubectl logs ${NGINX_POD} -n ingress-nginx > ${K8S_LOG_DIR}/${NGINX_POD}.log
-      INGRESS_STATUS="FAIL"
-      WITH_ERROR="true"
-    fi
+  NGINX_POD_STATUS=$(kubectl get pods -l=app.kubernetes.io/name=ingress-nginx -n ingress-nginx -o jsonpath='{.items[0].status.phase}' 2> /dev/null)
+  if [ "$NGINX_POD_STATUS" != "Running" ]; then
+    printError "NGINX pod isn't in running state. Current status: $NGINX_POD_STATUS"
+    kubectl logs ${PODNAME} -n ingress-nginx > ${K8S_LOG_DIR}/${PODNAME}.log
+    printError "Check logs in ${K8S_LOG_DIR}/${PODNAME}.log"
+    INGRESS_STATUS="FAIL"
+    WITH_ERROR="true"
   else
-    printInfo "Ingress controller ${INGRESS_CONTROLLER} is running"
+    printInfo "NGINX pod ${PODNAME} successfully started"
     INGRESS_STATUS="OK"
   fi
-  fi
+fi
 }
 
 checkHaproxyIngressController() {
@@ -569,7 +570,7 @@ echo -e "
 Cluster Health Check Report
 
 [Weave CNI Plugin] : ${WEAVE_STATUS:-"FAIL"}
-[Ingres Controller] : ${INGRESS_STATUS:-"FAIL"}
+[Ingress Controller] : ${INGRESS_STATUS:-"FAIL"}
 [Metrics Server] : ${METRICS_STATUS:-"FAIL"}
 [Kubernetes Dashboard] : ${DASHBOARD_STATUS:-"FAIL"}
 [Node Problem Detector] : ${NODE_PROBLEM_DETECTOR_STATUS:-"FAIL"}
```
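The rewritten check follows a common shell pattern: capture a probe's output with command substitution, branch on `$?` (an assignment's exit status is that of the substituted command), and accumulate a status variable for the final report's `${VAR:-"FAIL"}` default. A minimal sketch with a local stand-in probe instead of the real `kubectl` calls:

```shell
# Sketch of the probe-and-report pattern in check-install.sh.
# A grep over a temp file stands in for the kubectl jsonpath queries.
state_file=$(mktemp)
echo "phase=Running" > "$state_file"

check_pod() {
  # stand-in for: kubectl get pods ... -o jsonpath='{.items[0].status.phase}'
  phase=$(grep -o 'Running' "$state_file" 2> /dev/null)
  if [ $? -ne 0 ]; then
    # probe failed: record the failure for the summary report
    STATUS="FAIL"
    WITH_ERROR="true"
  else
    STATUS="OK"
  fi
}

check_pod
echo "[Ingress Controller] : ${STATUS:-"FAIL"}"   # → [Ingress Controller] : OK
```

If the probe ever fails, `STATUS` stays unset or becomes `FAIL`, and the `${STATUS:-"FAIL"}` expansion in the report line defaults to `FAIL`, matching the script's health-check summary.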
