---
title: Thoughts on OpenYurt Application Deployment
authors:
  - "@huiwq1990"
reviewers:
  - "@rambohe-ch"
creation-date: 2022-06-11
last-updated: 2022-06-11
status: provisional
---

## Thoughts on OpenYurt Application Deployment

### Deployment Scenarios

1) yurt-app-manager needs to deploy an ingress-controller instance into every nodepool.

2) yurt-edgex-manager needs to deploy an EdgeX instance into every nodepool.

### Current Approach

Define separate `edgex` and `yurtingress` CRDs, implement a dedicated controller for each, and let each controller reconcile the resources it creates.

### Current Problems

1) The edgex controller and the ingress controller share the same underlying pattern: each needs to deploy instances into every nodepool.

2) Extensibility is limited: new resource types cannot be supported without writing new controllers. For example, deploying edge gateways or supporting upper-layer business in the future would each require a purpose-built controller.

### Analysis

1) If the service to be deployed is just a single image, a `yurtdaemonset` can be used directly.
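As a sketch of this single-image case, yurt-app-manager's YurtAppDaemon (the nodepool-level analogue of a DaemonSet) rolls a workload out to every matching nodepool. The shape below follows the `apps.openyurt.io/v1alpha1` CRD as documented by OpenYurt; the labels, image, and nodepool selector values are illustrative assumptions:

```yaml
apiVersion: apps.openyurt.io/v1alpha1
kind: YurtAppDaemon
metadata:
  name: example          # illustrative name
  namespace: default
spec:
  selector:
    matchLabels:
      app: example
  workloadTemplate:
    # One Deployment is created per matching nodepool.
    deploymentTemplate:
      metadata:
        labels:
          app: example
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: example
        template:
          metadata:
            labels:
              app: example
          spec:
            containers:
              - name: nginx
                image: nginx    # illustrative single image
  # Only nodepools matching these labels receive an instance.
  nodepoolSelector:
    matchLabels:
      yurtappdaemon.openyurt.io/type: "nginx"
```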

2) If the service consists of multiple resources, they can be packaged as a Helm chart: a chart is inherently templated (configurable through values) and easy to deploy. Chart deployment can then be handled by a CD system such as Flux CD or Argo CD. (Note: Flux CD's HelmRelease essentially runs `helm install`; by itself it does not solve the nodepool problem.)

A HelmRelease selects the chart via `spec.chart` and sets the instance's values via `spec.values`:
```yaml
# https://fluxcd.io/docs/components/helm/helmreleases/
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: backend
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      version: ">=4.0.0 <5.0.0"
      sourceRef:
        kind: HelmRepository
        name: podinfo
        namespace: default
      interval: 1m
  upgrade:
    remediation:
      remediateLastFailure: true
  test:
    enable: true
  values:
    service:
      grpcService: backend
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
```

3) Going further, a set of resources can be treated as a single application. The community already has the OAM model for this, with KubeVela as an implementation:

- KubeVela can use a Helm chart as an application component, deploying it through Flux CD under the hood;
- KubeVela's `topology` policy supports multi-cluster deployment.

See https://kubevela.io/docs/tutorials/helm-multi-cluster :
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: helm-hello
spec:
  components:
    - name: hello
      type: helm
      properties:
        repoType: "helm"
        url: "https://jhidalgo3.github.io/helm-charts/"
        chart: "hello-kubernetes-chart"
        version: "3.0.0"
  policies:
    - name: topology-local
      type: topology
      properties:
        clusters: ["local"]
    - name: topology-foo
      type: topology
      properties:
        clusters: ["foo"]
    - name: override-local
      type: override
      properties:
        components:
          - name: hello
            properties:
              values:
                configs:
                  MESSAGE: Welcome to Control Plane Cluster!
    - name: override-foo
      type: override
      properties:
        components:
          - name: hello
            properties:
              values:
                configs:
                  MESSAGE: Welcome to Your New Foo Cluster!
  workflow:
    steps:
      - name: deploy2local
        type: deploy
        properties:
          policies: ["topology-local", "override-local"]
      - name: manual-approval
        type: suspend
      - name: deploy2foo
        type: deploy
        properties:
          policies: ["topology-foo", "override-foo"]
```

### Conclusion

Based on the above, deploying an application to multiple nodepools is directly analogous to deploying an application to multiple clusters. OpenYurt can therefore borrow KubeVela's multi-cluster model to define its own nodepool-level application deployment model.
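To illustrate the analogy, a nodepool deployment model could mirror the KubeVela Application shown earlier, swapping the cluster-oriented `topology` policy for a nodepool-oriented one. Everything below is a made-up sketch, not an existing API: the `nodepool-topology` policy type, the `nodepools` field, and the pool and chart names are all hypothetical.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: edge-ingress            # hypothetical application
spec:
  components:
    - name: ingress-controller
      type: helm
      properties:
        url: "https://example.com/charts"   # hypothetical chart repo
        chart: "ingress-nginx"
  policies:
    # Hypothetical policy type: target nodepools instead of clusters,
    # so one chart instance is installed per listed nodepool.
    - name: topology-pools
      type: nodepool-topology
      properties:
        nodepools: ["hangzhou", "beijing"]  # hypothetical pool names
```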
### 落地步骤
145+
146+
1)将edgex,ingresscontroller资源封装成chart包,同时helm install后实例间资源不能有重复的;
147+
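A common way to avoid the name collisions mentioned in step 1 is to prefix every resource name in the chart templates with the release name, so that each per-nodepool install creates distinct resources. A minimal sketch (the chart layout, labels, and image are illustrative):

```yaml
# templates/deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  # Prefixing with the release name keeps each install
  # (e.g. "helm install pool-hangzhou ./chart") unique.
  name: {{ .Release.Name }}-ingress-controller
  labels:
    app.kubernetes.io/instance: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: controller
          image: example/ingress-controller:latest   # illustrative image
```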

2) Either develop a nodepool-aware Application abstraction on top of KubeVela, or implement an equivalent controller natively in OpenYurt.

### Drawbacks

1) Because the deployed resources are not watched after installation, the controller has difficulty detecting when one of them is updated or deleted.
