
Commit 4d53e84

andrewsykim, rueian, spencer-p, troychiu, and tinaxfwu authored
[release-1.3] v1.3.2 backports (#3264)
* [RayJob][Fix] Use --no-wait for job submission to avoid carrying the error return code to the log tailing (#3216)
  * [RayJob][Fix] Use --no-wait for job submission to avoid carrying the error return code to the log tailing
    Signed-off-by: Rueian <[email protected]>
  * [RayJob][Fix] Use --no-wait for job submission to avoid carrying the error return code to the log tailing
    Signed-off-by: Rueian <[email protected]>
  * chore: update comments
    Signed-off-by: Rueian <[email protected]>
  * chore: add a comment about bash -e
    Signed-off-by: Rueian <[email protected]>
  ---------
  Signed-off-by: Rueian <[email protected]>

* kubectl ray job submit: provide entrypoint (#3186)

* [kubectl-plugin] Add head/worker node selector option (#3228)
  * add node selector option for kubectl plugin create cluster
    Signed-off-by: Troy Chiu <[email protected]>
  * nit
    Signed-off-by: Troy Chiu <[email protected]>
  ---------
  Signed-off-by: Troy Chiu <[email protected]>

* add node selector option for kubectl plugin create worker group (#3235)
  * add node selector option for kubectl plugin create work group
    Signed-off-by: Troy Chiu <[email protected]>
  * nit
    Signed-off-by: Troy Chiu <[email protected]>
  * code review: fix usage
    Signed-off-by: Troy Chiu <[email protected]>
  ---------
  Signed-off-by: Troy Chiu <[email protected]>

* [kubectl-plugin] remove CPU limits by default (#3243)
  Signed-off-by: Andrew Sy Kim <[email protected]>

* [Chore][CI] Limit the release-image-build github workflow to only take tag as input (#3117)
  * remove all inputs from workflow_dispatch
    Signed-off-by: Tina Wu <[email protected]>
  * use tag only
    Signed-off-by: Tina Wu <[email protected]>
  * align case
    Signed-off-by: Tina Wu <[email protected]>
  * change sha
    Signed-off-by: Tina Wu <[email protected]>
  * extract tag
  * lint fix
    Signed-off-by: Tina Wu <[email protected]>
  * update github_env
    Signed-off-by: Tina Wu <[email protected]>
  * directly take tag
    Signed-off-by: Tina Wu <[email protected]>
  * add env,
    Signed-off-by: Tina Wu <[email protected]>
  * directly use tag
    Signed-off-by: Tina Wu <[email protected]>
  * use env. when in script
    Signed-off-by: Tina Wu <[email protected]>
  * env.tag when with
    Signed-off-by: Tina Wu <[email protected]>
  * use env.tag for all
    Signed-off-by: Tina Wu <[email protected]>
  ---------
  Signed-off-by: Tina Wu <[email protected]>
  Co-authored-by: tinaxfwu <[email protected]>

* [CI] Remove create tag step from release (#3249)
  Signed-off-by: Chi-Sheng Liu <[email protected]>

---------
Signed-off-by: Rueian <[email protected]>
Signed-off-by: Troy Chiu <[email protected]>
Signed-off-by: Andrew Sy Kim <[email protected]>
Signed-off-by: Tina Wu <[email protected]>
Signed-off-by: Chi-Sheng Liu <[email protected]>
Co-authored-by: Rueian <[email protected]>
Co-authored-by: Spencer Peterson <[email protected]>
Co-authored-by: Troy Chiu <[email protected]>
Co-authored-by: Tina Wu <[email protected]>
Co-authored-by: tinaxfwu <[email protected]>
Co-authored-by: Chi-Sheng Liu <[email protected]>
1 parent 0d64b75 commit 4d53e84

File tree

12 files changed: +131 additions, -96 deletions


.github/workflows/image-release.yaml

Lines changed: 27 additions & 25 deletions
@@ -2,13 +2,6 @@ name: release-image-build
 
 on:
   workflow_dispatch:
-    inputs:
-      commit:
-        description: 'Commit reference (branch or SHA) from which to build the images.'
-        required: true
-      tag:
-        description: 'Desired release version tag (e.g. v1.1.0-rc.1).'
-        required: true
 
 jobs:
   release_apiserver_image:
@@ -18,6 +11,12 @@ jobs:
     runs-on: ubuntu-22.04
     steps:
 
+    - name: Error if not a tag
+      uses: actions/github-script@v7
+      if: ${{ ! startsWith(github.ref, 'refs/tags/') }}
+      with:
+        script: core.setFailed('This action can only be run on tags')
+
     - name: Set up Go
       uses: actions/setup-go@v3
       with:
@@ -26,9 +25,13 @@ jobs:
     - name: Check out code into the Go module directory
       uses: actions/checkout@v2
       with:
-        ref: ${{ github.event.inputs.commit }}
+        fetch-depth: 0
 
-    - name: install kubebuilder
+    - name: Extract tag
+      id: tag
+      run: echo "tag=${GITHUB_REF#refs/tags/}" >> $GITHUB_ENV
+
+    - name: Install kubebuilder
       run: |
         wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v3.0.0/kubebuilder_$(go env GOOS)_$(go env GOARCH)
         sudo mv kubebuilder_$(go env GOOS)_$(go env GOARCH) /usr/local/bin/kubebuilder
@@ -68,8 +71,8 @@ jobs:
       run: |
         docker image tag kuberay/apiserver:${{ steps.vars.outputs.sha_short }} quay.io/kuberay/apiserver:${{ steps.vars.outputs.sha_short }};
        docker push quay.io/kuberay/apiserver:${{ steps.vars.outputs.sha_short }};
-        docker image tag kuberay/apiserver:${{ steps.vars.outputs.sha_short }} quay.io/kuberay/apiserver:${{ github.event.inputs.tag }};
-        docker push quay.io/kuberay/apiserver:${{ github.event.inputs.tag }}
+        docker image tag kuberay/apiserver:${{ steps.vars.outputs.sha_short }} quay.io/kuberay/apiserver:${{ env.tag }};
+        docker push quay.io/kuberay/apiserver:${{ env.tag }}
 
   release_operator_image:
     env:
@@ -78,6 +81,12 @@ jobs:
     runs-on: ubuntu-22.04
     steps:
 
+    - name: Error if not a tag
+      uses: actions/github-script@v7
+      if: ${{ ! startsWith(github.ref, 'refs/tags/') }}
+      with:
+        script: core.setFailed('This action can only be run on tags')
+
     - name: Set up Go
       uses: actions/setup-go@v3
       with:
@@ -86,9 +95,13 @@ jobs:
     - name: Check out code into the Go module directory
       uses: actions/checkout@v2
       with:
-        ref: ${{ github.event.inputs.commit }}
+        fetch-depth: 0
+
+    - name: Extract tag
+      id: tag
+      run: echo "tag=${GITHUB_REF#refs/tags/}" >> $GITHUB_ENV
 
-    - name: install kubebuilder
+    - name: Install kubebuilder
       run: |
         wget https://github.com/kubernetes-sigs/kubebuilder/releases/download/v3.0.0/kubebuilder_$(go env GOOS)_$(go env GOARCH)
         sudo mv kubebuilder_$(go env GOOS)_$(go env GOARCH) /usr/local/bin/kubebuilder
@@ -160,15 +173,4 @@ jobs:
           provenance: false
           tags: |
             quay.io/${{env.REPO_ORG}}/${{env.REPO_NAME}}:${{ steps.vars.outputs.sha_short }}
-            quay.io/${{env.REPO_ORG}}/${{env.REPO_NAME}}:${{ github.event.inputs.tag }}
-
-    - name: Create tag
-      uses: actions/github-script@v6
-      with:
-        script: |
-          await github.rest.git.createRef({
-            owner: context.repo.owner,
-            repo: context.repo.repo,
-            ref: 'refs/tags/ray-operator/${{ github.event.inputs.tag }}',
-            sha: '${{ github.event.inputs.commit }}'
-          })
+            quay.io/${{env.REPO_ORG}}/${{env.REPO_NAME}}:${{ env.tag }}
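For context, the new "Extract tag" step relies on standard shell parameter expansion to turn the pushed ref into an image tag. A minimal sketch of the same expansion outside of GitHub Actions (the ref value below is a made-up example):

    # Sketch: how the "Extract tag" step derives the tag from GITHUB_REF.
    # The ref value here is a hypothetical example.
    GITHUB_REF=refs/tags/v1.3.2
    tag="${GITHUB_REF#refs/tags/}"   # strips the leading "refs/tags/"
    echo "$tag"                      # prints: v1.3.2

Because the workflow now fails fast unless github.ref is a tag, this expansion always yields the release version that the docker push steps consume via env.tag.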

kubectl-plugin/pkg/cmd/create/create_cluster.go

Lines changed: 28 additions & 22 deletions
@@ -16,19 +16,21 @@ import (
 )
 
 type CreateClusterOptions struct {
-	configFlags    *genericclioptions.ConfigFlags
-	ioStreams      *genericclioptions.IOStreams
-	clusterName    string
-	rayVersion     string
-	image          string
-	headCPU        string
-	headMemory     string
-	headGPU        string
-	workerCPU      string
-	workerMemory   string
-	workerGPU      string
-	workerReplicas int32
-	dryRun         bool
+	configFlags         *genericclioptions.ConfigFlags
+	ioStreams           *genericclioptions.IOStreams
+	headNodeSelectors   map[string]string
+	workerNodeSelectors map[string]string
+	clusterName         string
+	rayVersion          string
+	image               string
+	headCPU             string
+	headMemory          string
+	headGPU             string
+	workerCPU           string
+	workerMemory        string
+	workerGPU           string
+	workerReplicas      int32
+	dryRun              bool
 }
 
 var (
@@ -83,6 +85,8 @@ func NewCreateClusterCommand(streams genericclioptions.IOStreams) *cobra.Command
 	cmd.Flags().StringVar(&options.workerMemory, "worker-memory", "4Gi", "amount of memory in each worker group replica")
 	cmd.Flags().StringVar(&options.workerGPU, "worker-gpu", "0", "number of GPUs in each worker group replica")
 	cmd.Flags().BoolVar(&options.dryRun, "dry-run", false, "print the generated YAML instead of creating the cluster")
+	cmd.Flags().StringToStringVar(&options.headNodeSelectors, "head-node-selectors", nil, "Node selectors to apply to all head pods in the cluster (e.g. --head-node-selectors cloud.google.com/gke-accelerator=nvidia-l4,cloud.google.com/gke-nodepool=my-node-pool)")
+	cmd.Flags().StringToStringVar(&options.workerNodeSelectors, "worker-node-selectors", nil, "Node selectors to apply to all worker pods in the cluster (e.g. --worker-node-selectors cloud.google.com/gke-accelerator=nvidia-l4,cloud.google.com/gke-nodepool=my-node-pool)")
 
 	options.configFlags.AddFlags(cmd.Flags())
 	return cmd
@@ -128,15 +132,17 @@ func (options *CreateClusterOptions) Run(ctx context.Context, factory cmdutil.Fa
 			Namespace:   *options.configFlags.Namespace,
 			ClusterName: options.clusterName,
 			RayClusterSpecObject: generation.RayClusterSpecObject{
-				RayVersion:     options.rayVersion,
-				Image:          options.image,
-				HeadCPU:        options.headCPU,
-				HeadMemory:     options.headMemory,
-				HeadGPU:        options.headGPU,
-				WorkerReplicas: options.workerReplicas,
-				WorkerCPU:      options.workerCPU,
-				WorkerMemory:   options.workerMemory,
-				WorkerGPU:      options.workerGPU,
+				RayVersion:          options.rayVersion,
+				Image:               options.image,
+				HeadCPU:             options.headCPU,
+				HeadMemory:          options.headMemory,
+				HeadGPU:             options.headGPU,
+				WorkerReplicas:      options.workerReplicas,
+				WorkerCPU:           options.workerCPU,
+				WorkerMemory:        options.workerMemory,
+				WorkerGPU:           options.workerGPU,
+				HeadNodeSelectors:   options.headNodeSelectors,
+				WorkerNodeSelectors: options.workerNodeSelectors,
 			},
 		}
 
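With these flags registered, node selectors can be passed straight through the create-cluster command. A possible invocation (a sketch only: the cluster name and node pool values are placeholders, and the plugin is assumed to be installed and invoked as `kubectl ray`):

    # Hypothetical example: pin head and worker pods to specific node pools
    # using the new --head-node-selectors / --worker-node-selectors flags.
    kubectl ray create cluster demo-cluster \
      --head-node-selectors cloud.google.com/gke-nodepool=cpu-pool \
      --worker-node-selectors cloud.google.com/gke-accelerator=nvidia-l4,cloud.google.com/gke-nodepool=gpu-pool

The key=value pairs are parsed by cobra's StringToStringVar into the maps shown above and end up as the pods' nodeSelector fields.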

kubectl-plugin/pkg/cmd/create/create_workergroup.go

Lines changed: 15 additions & 12 deletions
@@ -25,18 +25,19 @@ const (
 )
 
 type CreateWorkerGroupOptions struct {
-	configFlags       *genericclioptions.ConfigFlags
-	ioStreams         *genericclioptions.IOStreams
-	clusterName       string
-	groupName         string
-	rayVersion        string
-	image             string
-	workerCPU         string
-	workerGPU         string
-	workerMemory      string
-	workerReplicas    int32
-	workerMinReplicas int32
-	workerMaxReplicas int32
+	configFlags         *genericclioptions.ConfigFlags
+	ioStreams           *genericclioptions.IOStreams
+	workerNodeSelectors map[string]string
+	clusterName         string
+	groupName           string
+	rayVersion          string
+	image               string
+	workerCPU           string
+	workerGPU           string
+	workerMemory        string
+	workerReplicas      int32
+	workerMinReplicas   int32
+	workerMaxReplicas   int32
 }
 
 var (
@@ -93,6 +94,7 @@ func NewCreateWorkerGroupCommand(streams genericclioptions.IOStreams) *cobra.Com
 	cmd.Flags().StringVar(&options.workerCPU, "worker-cpu", "2", "number of CPUs in each replica")
 	cmd.Flags().StringVar(&options.workerGPU, "worker-gpu", "0", "number of GPUs in each replica")
 	cmd.Flags().StringVar(&options.workerMemory, "worker-memory", "4Gi", "amount of memory in each replica")
+	cmd.Flags().StringToStringVar(&options.workerNodeSelectors, "worker-node-selectors", nil, "Node selectors to apply to all worker pods in this worker group (e.g. --worker-node-selectors cloud.google.com/gke-accelerator=nvidia-l4,cloud.google.com/gke-nodepool=my-node-pool)")
 
 	options.configFlags.AddFlags(cmd.Flags())
 	return cmd
@@ -170,6 +172,7 @@ func createWorkerGroupSpec(options *CreateWorkerGroupOptions) rayv1.WorkerGroupS
 					},
 				},
 			},
+			NodeSelector: options.workerNodeSelectors,
 		},
 	}
 
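A possible invocation of the new worker-group flag (a sketch; the group name and selector value are placeholders, and any flags for pointing the group at an existing RayCluster are omitted here because they are not part of this diff):

    # Hypothetical example: schedule the new worker group's pods onto L4 GPU nodes.
    kubectl ray create workergroup gpu-workers \
      --worker-gpu 1 \
      --worker-node-selectors cloud.google.com/gke-accelerator=nvidia-l4

As with create cluster, the map is written into the worker group's pod template as NodeSelector, so the scheduler only places these replicas on matching nodes.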

kubectl-plugin/pkg/cmd/create/create_workergroup_test.go

Lines changed: 6 additions & 0 deletions
@@ -21,6 +21,9 @@ func TestCreateWorkerGroupSpec(t *testing.T) {
 		workerCPU:    "2",
 		workerMemory: "5Gi",
 		workerGPU:    "1",
+		workerNodeSelectors: map[string]string{
+			"worker-node-selector": "worker-node-selector-value",
+		},
 	}
 
 	expected := rayv1.WorkerGroupSpec{
@@ -46,6 +49,9 @@ func TestCreateWorkerGroupSpec(t *testing.T) {
 					},
 				},
 			},
+			NodeSelector: map[string]string{
+				"worker-node-selector": "worker-node-selector-value",
+			},
 		},
 	},
 	Replicas: ptr.To[int32](3),

kubectl-plugin/pkg/cmd/job/job_submit.go

Lines changed: 5 additions & 0 deletions
@@ -260,6 +260,11 @@ func (options *SubmitJobOptions) Run(ctx context.Context, factory cmdutil.Factor
 		RayJobName:     options.rayjobName,
 		Namespace:      *options.configFlags.Namespace,
 		SubmissionMode: "InteractiveMode",
+		// Prior to kuberay 1.2.2, the entry point is required. To maintain
+		// backwards compatibility with 1.2.x, we submit the entry point
+		// here, even though it will be ignored.
+		// See https://github.com/ray-project/kuberay/issues/3126.
+		Entrypoint: options.entryPoint,
 		RayClusterSpecObject: generation.RayClusterSpecObject{
 			RayVersion: options.rayVersion,
 			Image:      options.image,

kubectl-plugin/pkg/util/generation/generation.go

Lines changed: 15 additions & 11 deletions
@@ -17,15 +17,17 @@ const (
 )
 
 type RayClusterSpecObject struct {
-	RayVersion     string
-	Image          string
-	HeadCPU        string
-	HeadGPU        string
-	HeadMemory     string
-	WorkerCPU      string
-	WorkerGPU      string
-	WorkerMemory   string
-	WorkerReplicas int32
+	HeadNodeSelectors   map[string]string
+	WorkerNodeSelectors map[string]string
+	RayVersion          string
+	Image               string
+	HeadCPU             string
+	HeadGPU             string
+	HeadMemory          string
+	WorkerCPU           string
+	WorkerGPU           string
+	WorkerMemory        string
+	WorkerReplicas      int32
 }
 
 type RayClusterYamlObject struct {
@@ -38,6 +40,7 @@ type RayJobYamlObject struct {
 	RayJobName     string
 	Namespace      string
 	SubmissionMode string
+	Entrypoint     string
 	RayClusterSpecObject
 }
 
@@ -52,6 +55,7 @@ func (rayJobObject *RayJobYamlObject) GenerateRayJobApplyConfig() *rayv1ac.RayJo
 	rayJobApplyConfig := rayv1ac.RayJob(rayJobObject.RayJobName, rayJobObject.Namespace).
 		WithSpec(rayv1ac.RayJobSpec().
 			WithSubmissionMode(rayv1.JobSubmissionMode(rayJobObject.SubmissionMode)).
+			WithEntrypoint(rayJobObject.Entrypoint).
 			WithRayClusterSpec(rayJobObject.generateRayClusterSpec()))
 
 	return rayJobApplyConfig
@@ -67,6 +71,7 @@ func (rayClusterSpecObject *RayClusterSpecObject) generateRayClusterSpec() *rayv
 		WithRayStartParams(map[string]string{"dashboard-host": "0.0.0.0"}).
 		WithTemplate(corev1ac.PodTemplateSpec().
 			WithSpec(corev1ac.PodSpec().
+				WithNodeSelector(rayClusterSpecObject.HeadNodeSelectors).
 				WithContainers(corev1ac.Container().
 					WithName("ray-head").
 					WithImage(rayClusterSpecObject.Image).
@@ -76,7 +81,6 @@ func (rayClusterSpecObject *RayClusterSpecObject) generateRayClusterSpec() *rayv
 						corev1.ResourceMemory: resource.MustParse(rayClusterSpecObject.HeadMemory),
 					}).
 					WithLimits(corev1.ResourceList{
-						corev1.ResourceCPU:    resource.MustParse(rayClusterSpecObject.HeadCPU),
 						corev1.ResourceMemory: resource.MustParse(rayClusterSpecObject.HeadMemory),
 					})).
 				WithPorts(corev1ac.ContainerPort().WithContainerPort(6379).WithName("gcs-server"),
@@ -88,6 +92,7 @@ func (rayClusterSpecObject *RayClusterSpecObject) generateRayClusterSpec() *rayv
 		WithReplicas(rayClusterSpecObject.WorkerReplicas).
 		WithTemplate(corev1ac.PodTemplateSpec().
 			WithSpec(corev1ac.PodSpec().
+				WithNodeSelector(rayClusterSpecObject.WorkerNodeSelectors).
 				WithContainers(corev1ac.Container().
 					WithName("ray-worker").
 					WithImage(rayClusterSpecObject.Image).
@@ -97,7 +102,6 @@ func (rayClusterSpecObject *RayClusterSpecObject) generateRayClusterSpec() *rayv
 						corev1.ResourceMemory: resource.MustParse(rayClusterSpecObject.WorkerMemory),
 					}).
 					WithLimits(corev1.ResourceList{
-						corev1.ResourceCPU:    resource.MustParse(rayClusterSpecObject.WorkerCPU),
 						corev1.ResourceMemory: resource.MustParse(rayClusterSpecObject.WorkerMemory),
 					}))))))
 

kubectl-plugin/pkg/util/generation/generation_test.go

Lines changed: 8 additions & 2 deletions
@@ -29,6 +29,12 @@ func TestGenerateRayCluterApplyConfig(t *testing.T) {
 			WorkerCPU:    "2",
 			WorkerMemory: "10Gi",
 			WorkerGPU:    "1",
+			HeadNodeSelectors: map[string]string{
+				"head-selector1": "foo",
+			},
+			WorkerNodeSelectors: map[string]string{
+				"worker-selector1": "baz",
+			},
 		},
 	}
 
@@ -46,6 +52,8 @@ func TestGenerateRayCluterApplyConfig(t *testing.T) {
 	assert.Equal(t, resource.MustParse(testRayClusterYamlObject.WorkerCPU), *result.Spec.WorkerGroupSpecs[0].Template.Spec.Containers[0].Resources.Requests.Cpu())
 	assert.Equal(t, resource.MustParse(testRayClusterYamlObject.WorkerGPU), *result.Spec.WorkerGroupSpecs[0].Template.Spec.Containers[0].Resources.Requests.Name(corev1.ResourceName("nvidia.com/gpu"), resource.DecimalSI))
 	assert.Equal(t, resource.MustParse(testRayClusterYamlObject.WorkerMemory), *result.Spec.WorkerGroupSpecs[0].Template.Spec.Containers[0].Resources.Requests.Memory())
+	assert.Equal(t, testRayClusterYamlObject.HeadNodeSelectors, result.Spec.HeadGroupSpec.Template.Spec.NodeSelector)
+	assert.Equal(t, testRayClusterYamlObject.WorkerNodeSelectors, result.Spec.WorkerGroupSpecs[0].Template.Spec.NodeSelector)
 }
 
 func TestGenerateRayJobApplyConfig(t *testing.T) {
@@ -125,7 +133,6 @@ spec:
           name: client
           resources:
             limits:
-              cpu: "1"
               memory: 5Gi
               nvidia.com/gpu: "1"
             requests:
@@ -145,7 +152,6 @@ spec:
           name: ray-worker
           resources:
            limits:
-              cpu: "2"
              memory: 10Gi
            requests:
              cpu: "2"

ray-operator/controllers/ray/common/job.go

Lines changed: 8 additions & 10 deletions
@@ -72,18 +72,15 @@ func GetK8sJobCommand(rayJobInstance *rayv1.RayJob) ([]string, error) {
 	// `ray job submit` alone doesn't handle duplicated submission gracefully. See https://github.com/ray-project/kuberay/issues/2154.
 	// In order to deal with that, we use `ray job status` first to check if the jobId has been submitted.
 	// If the jobId has been submitted, we use `ray job logs` to follow the logs.
-	// Otherwise, we submit the job normally with `ray job submit`. The full shell command looks like this:
-	// if ray job status --address http://$RAY_ADDRESS $RAY_JOB_SUBMISSION_ID >/dev/null 2>&1 ;
-	// then ray job logs --address http://RAY_ADDRESS --follow $RAY_JOB_SUBMISSION_ID ;
-	// else ray job submit --address http://RAY_ADDRESS --submission-id $RAY_JOB_SUBMISSION_ID -- ... ;
-	// fi
+	// Otherwise, we submit the job with `ray job submit --no-wait` + `ray job logs`. The full shell command looks like this:
+	// if ! ray job status --address http://$RAY_ADDRESS $RAY_JOB_SUBMISSION_ID >/dev/null 2>&1 ;
+	// then ray job submit --address http://$RAY_ADDRESS --submission-id $RAY_JOB_SUBMISSION_ID --no-wait -- ... ;
+	// fi ; ray job logs --address http://$RAY_ADDRESS --follow $RAY_JOB_SUBMISSION_ID
 	jobStatusCommand := []string{"ray", "job", "status", "--address", address, jobId, ">/dev/null", "2>&1"}
 	jobFollowCommand := []string{"ray", "job", "logs", "--address", address, "--follow", jobId}
-	jobSubmitCommand := []string{"ray", "job", "submit", "--address", address}
-	k8sJobCommand := append([]string{"if"}, jobStatusCommand...)
+	jobSubmitCommand := []string{"ray", "job", "submit", "--address", address, "--no-wait"}
+	k8sJobCommand := append([]string{"if", "!"}, jobStatusCommand...)
 	k8sJobCommand = append(k8sJobCommand, ";", "then")
-	k8sJobCommand = append(k8sJobCommand, jobFollowCommand...)
-	k8sJobCommand = append(k8sJobCommand, ";", "else")
 	k8sJobCommand = append(k8sJobCommand, jobSubmitCommand...)
 
 	runtimeEnvJson, err := getRuntimeEnvJson(rayJobInstance)
@@ -119,7 +116,8 @@ func GetK8sJobCommand(rayJobInstance *rayv1.RayJob) ([]string, error) {
 	}
 
 	// "--" is used to separate the entrypoint from the Ray Job CLI command and its arguments.
-	k8sJobCommand = append(k8sJobCommand, "--", entrypoint, ";", "fi")
+	k8sJobCommand = append(k8sJobCommand, "--", entrypoint, ";", "fi", ";")
+	k8sJobCommand = append(k8sJobCommand, jobFollowCommand...)
 
 	return k8sJobCommand, nil
 }
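Assembled, the command words above join into roughly the following shell script (a sketch based on the updated comment in this diff; $RAY_ADDRESS, $RAY_JOB_SUBMISSION_ID, and the entrypoint `python my_script.py` are placeholders):

    # Sketch of the generated submitter command. Because `ray job submit` now
    # runs with --no-wait, a failing job's non-zero exit code no longer aborts
    # the script (which runs under `bash -e`) before the log tailing starts.
    if ! ray job status --address http://$RAY_ADDRESS $RAY_JOB_SUBMISSION_ID >/dev/null 2>&1 ; then
      ray job submit --address http://$RAY_ADDRESS --submission-id $RAY_JOB_SUBMISSION_ID --no-wait -- python my_script.py ;
    fi ;
    ray job logs --address http://$RAY_ADDRESS --follow $RAY_JOB_SUBMISSION_ID

Moving `ray job logs` after the `fi` also means the logs are tailed both for fresh submissions and for duplicated submissions detected by `ray job status`.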
