
Commit cb75c18

improve pr locks documentation
1 parent a25268e commit cb75c18

2 files changed (+25, -6 lines)


docs/ce/cloud-providers/aws.mdx

Lines changed: 2 additions & 1 deletion
@@ -1,6 +1,7 @@
 ---
 title: "Setting up DynamoDB Access for locks"
-description: "Digger runs without a backend but uses a DynamoDB table to keep track of all the locks that are necessary for locking PR projects. On the first run in your AWS account digger checks for the presence of `DiggerDynamoDBLockTable` and it requires the following policy for the DynamoDB access:"
+description: "Digger runs without a backend but uses a DynamoDB table to keep track of all the locks that are necessary for locking PR projects. On the first run in your AWS account digger checks for the presence of `DiggerDynamoDBLockTable`. If the DynamoDB table with that name is not present, it will automatically create it.
+It requires the following policy for DynamoDB access:"
 ---
 
 ```

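The policy document itself sits below this hunk in the full aws.mdx file, so it does not appear in the diff above. Purely as a hedged sketch (the exact actions and resource scoping here are assumptions, not taken from this commit), a minimal policy that lets digger check for, create, and use the `DiggerDynamoDBLockTable` table could look something like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DiggerLockTableAccess",
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:CreateTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/DiggerDynamoDBLockTable"
    }
  ]
}
```

Refer to the full aws.mdx file for the policy the documentation actually ships.
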
docs/ce/features/pr-level-locks.mdx

Lines changed: 23 additions & 5 deletions
@@ -2,12 +2,30 @@
 title: "PR-level locks"
 ---
 
-* For every pull request we perform a lock when the pull request is opened and unlocked when the pull request is merged, this is to avoid making a plan preview stale
+For every pull request we acquire a lock when the pull request is opened and release it when the pull request is merged; this is to avoid one apply overriding another apply in a different PR.
+Since digger is primarily used to apply while the PR is open, this locking guarantees that no two PRs can wipe out each other's changes due to human error.
+When digger is used with a backend, the locks are stored directly in the database in a table called digger_locks. No further configuration is needed.
 
-* For GCP locking is performed using buckets that are strongly consistent: [https://github.com/diggerhq/digger/blob/80289922227f225d887feb74749b4daef8b441f8/pkg/gcp/gcp\_lock.go#L13](https://github.com/diggerhq/digger/blob/80289922227f225d887feb74749b4daef8b441f8/pkg/gcp/gcp%5Flock.go#L13)
+## Disabling PR-level locks
 
-* These options are configured and the locking can be disabled entirely if it is not needed
+To disable locking repo-wide, you can add a top-level flag to your digger.yml:
 
-* The locking interface is very simple and is based on `Lock()` and `Unlock()` Operations [https://github.com/diggerhq/digger/blob/5815775095d7380281c71c7c3aa63ca1b374365f/pkg/locking/locking.go#L40](https://github.com/diggerhq/digger/blob/5815775095d7380281c71c7c3aa63ca1b374365f/pkg/locking/locking.go#L40)
+```
+pr_locks: false
 
-* A pull request acquires a lock for every project impacted by this PR and all dependant projects
+projects:
+- name: dev
+  dir: dev/
+```
+
+## Backendless mode
+
+When using digger in backendless mode there is no backend or database to store lock information. In this case we have implemented integrations with several
+cloud provider resources to store the state of PR locks. The table below summarises the different locking methods available when using backendless mode:
+
+| Cloud Provider | Resource Type  | Configuration details |
+|----------------|----------------|-----------------------|
+| AWS            | DynamoDB       | [here](/ce/cloud-providers/aws) |
+| GCP            | GCP Bucket     | [here](/ce/gcp/using-gcp-bucket-for-locks) |
+| Azure          | Storage Tables | [here](/ce/azure-specific/azure-devops-locking-connection-methods) |

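For the AWS option in the table above, the lock integration only needs working AWS credentials in the environment where digger runs. As an illustration only (the job layout, secret names, and region below are assumptions, not part of this commit; see the linked AWS page for the documented setup), credentials are commonly exposed to a GitHub Actions digger job like this:

```yaml
# Illustrative sketch: exposing AWS credentials to a digger job so the
# backendless DynamoDB lock integration can reach DiggerDynamoDBLockTable.
# Secret names and the region are placeholders, not values from this commit.
jobs:
  digger:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: us-east-1   # region where the lock table lives or should be created
    steps:
      - uses: actions/checkout@v4
      # ... the digger step itself goes here; see /ce/cloud-providers/aws for details
```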