Commit 398b9af

Merge pull request #2269 from diggerhq/docs/pr-level-locks-fix
docs/pr level locks fix
2 parents: 6c6c657 + cb75c18

File tree

6 files changed: +43 -14 lines changed

docs/ce/cloud-providers/aws.mdx

Lines changed: 2 additions & 1 deletion
@@ -1,6 +1,7 @@
 ---
 title: "Setting up DynamoDB Access for locks"
-description: "Digger runs without a backend but uses a DynamoDB table to keep track of all the locks that are necessary for locking PR projects. On the first run in your AWS account digger checks for the presence of `DiggerDynamoDBLockTable` and it requires the following policy for the DynamoDB access:"
+description: "Digger runs without a backend but uses a DynamoDB table to keep track of all the locks that are necessary for locking PR projects. On the first run in your AWS account Digger checks for the presence of `DiggerDynamoDBLockTable`. If the DynamoDB table with that name is not present, it will automatically create it.
+It requires the following policy for DynamoDB access:"
 ---

 ```
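The policy itself sits below this hunk and is unchanged by the PR, so it is not shown here. For orientation, a minimal sketch of what a lock-table policy of this shape typically grants; the action list and ARN are illustrative assumptions, not the exact statement from the docs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:CreateTable",
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/DiggerDynamoDBLockTable"
    }
  ]
}
```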

docs/ce/features/plan-persistence.mdx

Lines changed: 0 additions & 5 deletions
This file was deleted.

docs/ce/features/pr-level-locks.mdx

Lines changed: 23 additions & 5 deletions
@@ -2,12 +2,30 @@
 title: "PR-level locks"
 ---

-* For every pull request we perform a lock when the pull request is opened and unlocked when the pull request is merged, this is to avoid making a plan preview stale
+For every pull request we acquire a lock when the pull request is opened and release it when the pull request is merged; this is to avoid a single apply overriding another apply in a different PR.
+Since Digger is primarily used to apply while the PR is open, this locking guarantees that no two PRs can wipe out each other's changes due to human error.
+When Digger is used with a backend, the locks are stored directly in the database, in a table called digger_locks. No further configuration is needed.

-* For GCP locking is performed using buckets that are strongly consistent: [https://github.com/diggerhq/digger/blob/80289922227f225d887feb74749b4daef8b441f8/pkg/gcp/gcp_lock.go#L13](https://github.com/diggerhq/digger/blob/80289922227f225d887feb74749b4daef8b441f8/pkg/gcp/gcp%5Flock.go#L13)
+## Disabling PR-level locks

-* These options are configured and the locking can be disabled entirely if it is not needed
+To disable locking repo-wide, you can add a top-level flag to your digger.yml:

-* The locking interface is very simple and is based on `Lock()` and `Unlock()` Operations [https://github.com/diggerhq/digger/blob/5815775095d7380281c71c7c3aa63ca1b374365f/pkg/locking/locking.go#L40](https://github.com/diggerhq/digger/blob/5815775095d7380281c71c7c3aa63ca1b374365f/pkg/locking/locking.go#L40)
+```
+pr_locks: false

-* A pull request acquires a lock for every project impacted by this PR and all dependant projects
+projects:
+  - name: dev
+    dir: dev/
+```
+
+## Backendless mode
+
+When using Digger in backendless mode there is no backend or database to store lock information. In this case we have implemented integrations with several
+cloud provider resources to store the state of PR locks. The table below summarises the locking methods available in backendless mode:
+
+| Cloud Provider | Resource Type  | Configuration details |
+|----------------|----------------|-----------------------|
+| AWS            | DynamoDB       | [here](/ce/cloud-providers/aws) |
+| GCP            | GCP Bucket     | [here](/ce/gcp/using-gcp-bucket-for-locks) |
+| Azure          | Storage Tables | [here](/ce/azure-specific/azure-devops-locking-connection-methods) |
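As a companion to the table above, a minimal sketch of how backendless locking is typically switched on in digger_workflow.yml; the action inputs (`no-backend`, `setup-aws`) and the `vLatest` tag are assumptions drawn from the Digger action's general usage, not from this PR:

```yaml
# digger_workflow.yml step (sketch): run Digger without a backend, so PR locks
# land in a cloud resource (e.g. DynamoDB on AWS) instead of the digger_locks table
- name: digger
  uses: diggerhq/digger@vLatest   # assumption: pin to a concrete release in practice
  with:
    no-backend: "true"            # assumption: disables the orchestrator backend
    setup-aws: "true"             # assumption: AWS credentials let Digger create/use the lock table
```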

docs/ce/howto/custom-commands.mdx

Lines changed: 5 additions & 0 deletions
@@ -40,6 +40,11 @@ The value of `$DIGGER_OUT` defaults to `$RUNNER_TEMP/digger-out.log`; you can ch

 ## Overriding plan commands

+<Note>
+This is an advanced use case for specifying an entirely custom command during the plan and apply phases. If you only want to use
+plan artefacts, the easiest way is to configure [plan artefact persistence](/ce/howto/plan-artefacts); you will not need to perform
+a complete override of the plan and apply commands.
+</Note>
 You can add extra arguments to the plan command by setting the `extra_args` key in the `steps` section of the `plan` command.

 However, in some cases, if you wish to override the plan command entirely, you can do so by excluding the plan from the steps and specifying your command in the run, like so:
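The digger.yml example that "like so" refers to sits below this hunk and is not shown. A minimal sketch of such an override, assuming the workflows/steps schema digger.yml uses (the workflow name and command are illustrative):

```yaml
# digger.yml (sketch): omit the built-in `plan` step and run a custom command instead
workflows:
  custom_plan:
    plan:
      steps:
        - init
        - run: terraform plan -input=false   # replaces the default plan step entirely
```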

docs/ce/howto/store-plans-in-a-bucket.mdx renamed to docs/ce/howto/plan-artefacts.mdx

Lines changed: 12 additions & 1 deletion
@@ -1,14 +1,25 @@
 ---
-title: "Store plans in a Bucket"
+title: "Plan artefacts"
 ---

+Digger can store plan artefacts during the plan phase and make them available for use during the apply phase. Once plan artefact storage is configured, an artefact uploaded during the plan phase
+will automatically be used during the apply phase. Once one of the destinations below is configured, no additional setup is needed on your end to get apply-time artefact reuse.
+
 ### Github
 Digger can use Github Artifacts to store `terraform plan` outputs. To enable it, set the following argument in digger_workflow.yml:

 ```
 upload-plan-destination: github
 ```

+<Warning>
+Github plan artefacts as a destination is currently deprecated due to a change in the Github API. For more information see [this issue](https://github.com/diggerhq/digger/issues/1702).
+</Warning>
+
 ### GCP
 You can also configure plan outputs to be uploaded to GCP Buckets instead. This is handy in case you want your plan outputs to stay within your organisation's approved storage providers for security or compliance reasons.
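The GCP input itself is below this hunk and not part of the diff. A sketch of what the digger_workflow.yml arguments might look like; the `gcp` destination value mirrors the `github` one above, and the bucket input name is an illustrative assumption:

```yaml
# digger_workflow.yml (sketch): send plan artefacts to a GCS bucket
upload-plan-destination: gcp                    # assumption: mirrors the `github` value above
google-storage-plan-artefact-bucket: my-bucket  # assumption: illustrative input name and bucket
```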
docs/mint.json

Lines changed: 1 addition & 2 deletions
@@ -71,7 +71,6 @@
 "ce/features/concurrency",
 "ce/features/layering",
 "ce/features/pr-level-locks",
-"ce/features/plan-persistence",
 "ce/features/private-runners",
 "ce/features/drift-detection",
 "ce/features/rbac",
@@ -104,7 +103,7 @@
 "ce/howto/policy-overrides",
 "ce/howto/project-level-roles",
 "ce/howto/segregate-cloud-accounts",
-"ce/howto/store-plans-in-a-bucket",
+"ce/howto/plan-artefacts",
 "ce/howto/trigger-directly",
 "ce/howto/using-checkov",
 "ce/howto/using-infracost",

0 commit comments