I'm struggling to figure out how to properly order jobs, and keep one from running until the others are complete. #8846
I've decided to write a resource that will pause a pipeline, then query Concourse to check for running jobs in the pipeline, wait for them, and continue when they are complete.
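A minimal sketch of that check, assuming `jq` and the Concourse API's jobs endpoint (`/api/v1/teams/<team>/pipelines/<pipeline>/jobs`), where each job object carries a `next_build` field that is non-null while a build is pending or running. Target, team, and pipeline names below are placeholders, not anything from the actual resource:

```shell
#!/bin/sh
# Print the names of jobs that still have a build in flight, given the JSON
# from /api/v1/teams/<team>/pipelines/<pipeline>/jobs on stdin.
jobs_in_flight() {
  jq -r '.[] | select(.next_build != null) | .name'
}

# Example polling loop (hypothetical ATC URL and token; the real resource
# would handle auth and backoff properly):
#   until [ -z "$(curl -s -H "Authorization: Bearer $TOKEN" \
#       "$ATC/api/v1/teams/main/pipelines/my-pipeline/jobs" | jobs_in_flight)" ]; do
#     sleep 10
#   done
```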
-
We have this issue that's been plaguing us for a while. We have instance pipelines, generated from a metadata file in S3, where the `destroy` step cleans up the metadata, causing the pipeline to become archived. Our pipeline starts when triggered by an S3 file to `build` images based on our codebase. Once the build is complete, `deploy` will deploy a Helm chart with those images, and finally the `destroy` step will kick off based on another S3 file. The flow is pretty simple.

This doesn't work well, since developers tend to push multiple commits in succession, then close the PR fairly quickly. We run into resources that aren't being destroyed, because the deploy runs after the destroy has finished.
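The build/deploy/destroy flow described above would look roughly like this (a sketch only; bucket names, regexps, and task files are assumptions, not the actual pipeline):

```yaml
resources:
- name: build-metadata
  type: s3
  source: {bucket: pr-metadata, regexp: pr-(.*)/build.json}     # hypothetical
- name: destroy-metadata
  type: s3
  source: {bucket: pr-metadata, regexp: pr-(.*)/destroy.json}   # hypothetical

jobs:
- name: build
  plan:
  - get: build-metadata
    trigger: true
  - task: build-images
    file: ci/tasks/build.yml      # hypothetical task file
- name: deploy
  plan:
  - get: build-metadata
    passed: [build]
    trigger: true
  - task: helm-upgrade
    file: ci/tasks/deploy.yml     # hypothetical task file
- name: destroy
  plan:
  - get: destroy-metadata
    trigger: true
  - task: helm-delete
    file: ci/tasks/destroy.yml    # hypothetical task file
```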
I have attempted to mitigate it with the `concourse-pool-resource`, but it doesn't seem to have the features we need. Since these are instance pipelines and are short lived, we create a claimed lock in the `build` step, then `unclaim` the lock in the `deploy` step. This worked well when there was only one build required for a pull request: the destroy pipeline would wait for an unclaimed lock. The problem arises when there are multiple builds; you cannot create a claimed lock when an unclaimed lock already exists (which makes sense). So the pipeline was modified to `try: remove: lock` before creating the claimed lock.

So the pipeline basically looks like this now:
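Roughly, a sketch of those lock steps using the pool resource's put params (resource and directory names are illustrative, and the exact param shapes should be checked against the pool-resource README):

```yaml
jobs:
- name: build
  plan:
  - try:
      put: lock
      params: {remove: lock-dir}      # drop a stale unclaimed lock, if any
  - put: lock
    params: {add_claimed: lock-dir}   # create the lock already claimed
  # ... build images ...
- name: deploy
  plan:
  # ... deploy helm chart ...
  - put: lock
    params: {release: lock}           # unclaim so destroy can proceed
```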
This gives us more frequent successful cleanups, since the destroy cannot run while the deploy is running, but we still have the occasional issue where a build is running while a deploy is running, and the deploy will `unclaim` the wrong build's lock (since we are basically recreating it atomically on each build run).

I thought about using a serial group on the `build` and `deploy` steps, but I fear the destroy will run while there is a build job queued up waiting for the deploy to finish, putting me in the same scenario.

I can't figure out how to either:
a) Make the jobs wait for other jobs to completely finish
b) Cancel in-progress jobs from inside a pipeline
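For reference, the serial-group idea mentioned above would look like this (a sketch; the group name is made up, and queued builds can still interleave with the destroy as described):

```yaml
jobs:
- name: build
  serial_groups: [pr-lifecycle]   # only one job in this group runs at a time
  plan: []                        # steps as above
- name: deploy
  serial_groups: [pr-lifecycle]
  plan: []
- name: destroy
  serial_groups: [pr-lifecycle]
  plan: []
```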
Has anyone faced a similar issue like this, or maybe has a decent solution / work around?
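One possible angle on (b), sketched with the `fly` CLI (target and job names are placeholders, and the task would need a logged-in `fly`): list builds for a job as JSON, pick out the started ones, and abort them before tearing down.

```shell
#!/bin/sh
# From `fly builds --job <pipeline/job> --json` output on stdin, print the
# names (build numbers) of builds whose status is "started".
started_builds() {
  jq -r '.[] | select(.status == "started") | .name'
}

# Usage inside the destroy task (illustrative):
#   for b in $(fly -t ci builds --job "my-pipeline/deploy" --json | started_builds); do
#     fly -t ci abort-build --job "my-pipeline/deploy" --build "$b"
#   done
```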