docs/ce/cloud-providers/aws.mdx (2 additions, 1 deletion)

@@ -1,6 +1,7 @@
 ---
 title: "Setting up DynamoDB Access for locks"
-description: "Digger runs without a backend but uses a DynamoDB table to keep track of all the locks that are necessary for locking PR projects. On the first run in your AWS account digger checks for the presence of `DiggerDynamoDBLockTable` and it requires the following policy for the DynamoDB access:"
+description: "Digger runs without a backend but uses a DynamoDB table to keep track of all the locks that are necessary for locking PR projects. On the first run in your AWS account digger checks for the presence of `DiggerDynamoDBLockTable`. If the DynamoDB table with that name is not present, it will automatically create it.
+It requires the following policy for DynamoDB access:"
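
The policy itself falls outside this hunk. Purely as a non-authoritative sketch of what such a grant usually looks like (the Sid, action list and resource ARN below are assumptions for illustration, not the documented policy), a minimal DynamoDB policy scoped to the lock table might be:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IllustrativeSketchNotTheDocumentedPolicy",
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:CreateTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/DiggerDynamoDBLockTable"
    }
  ]
}
```

`dynamodb:CreateTable` appears here only because, per the description above, digger creates the table automatically when it is missing.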
docs/ce/features/pr-level-locks.mdx (23 additions, 5 deletions)

@@ -2,12 +2,30 @@
 title: "PR-level locks"
 ---
 
-* For every pull request we perform a lock when the pull request is opened and unlocked when the pull request is merged, this is to avoid making a plan preview stale
+For every pull request we acquire a lock when the pull request is opened and release it when the pull request is merged. This is to avoid a single apply overriding another apply in a different PR.
+Since digger is primarily used to apply while the PR is open, this locking guarantees that no two PRs can wipe out each other's changes due to human error.
+When digger is used with a backend, the locks are stored directly in the database, in a table called digger_locks. No further configuration is needed.
 
-* For GCP locking is performed using buckets that are strongly consistent: [https://github.com/diggerhq/digger/blob/80289922227f225d887feb74749b4daef8b441f8/pkg/gcp/gcp\_lock.go#L13](https://github.com/diggerhq/digger/blob/80289922227f225d887feb74749b4daef8b441f8/pkg/gcp/gcp%5Flock.go#L13)
+## Disabling PR-level locks
 
-* These options are configured and the locking can be disabled entirely if it is not needed
+In order to disable locking repo-wide you can add a top-level flag to your digger.yml:
 
-* The locking interface is very simple and is based on `Lock()` and `Unlock()` Operations [https://github.com/diggerhq/digger/blob/5815775095d7380281c71c7c3aa63ca1b374365f/pkg/locking/locking.go#L40](https://github.com/diggerhq/digger/blob/5815775095d7380281c71c7c3aa63ca1b374365f/pkg/locking/locking.go#L40)
+```
+pr_locks: false
 
-* A pull request acquires a lock for every project impacted by this PR and all dependant projects
+projects:
+  - name: dev
+    dir: dev/
+```
+
+## Backendless mode
+
+When using digger in backendless mode there is no backend or DB to store the lock information. In this case we have implemented integrations with several
+cloud provider resources to store the state of PR locks. The table below summarises the different locking methods available when using backendless mode:
+
+
+| Cloud Provider | Resource Type | Configuration details Link |
docs/ce/howto/custom-commands.mdx (5 additions)

@@ -40,6 +40,11 @@ The value of `$DIGGER_OUT` defaults to `$RUNNER_TEMP/digger-out.log`; you can ch
 
 ## Overriding plan commands
 
+<Note>
+This is an advanced use case for when you want to specify an entirely custom command during the plan and apply phases. If you only want to use
+plan artefacts, the easiest way would be to simply configure [plan artefacts persistence](/ce/howto/plan-artefacts) and you would not need to perform
+a complete override of the plan and apply commands.
+</Note>
 You can add extra arguments to the plan command by setting the `extra_args` key in the `steps` section of the `plan` command.
 
 However in some cases if you wish to override the plan command entirely you can do it by excluding the plan in the steps and having your command specified in the run like so:
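
The full example sits outside this hunk. As a rough sketch only (the workflow name and the terraform command are assumptions made for illustration, not the doc's exact example), a digger.yml workflow that drops the built-in plan step and runs its own command might look like:

```
workflows:
  custom_plan:   # hypothetical workflow name, referenced from a project via `workflow: custom_plan`
    plan:
      steps:
        - init
        # no `plan` step here; the run command below replaces it
        - run: terraform plan -input=false -lock=false
```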
docs/ce/howto/plan-artefacts.mdx (12 additions, 1 deletion)

@@ -1,14 +1,25 @@
 ---
-title: "Store plans in a Bucket"
+title: "Plan artefacts"
 ---
 
+Digger can be used to store plan artefacts during the plan phase and make them available for use during the apply phase. Once plan artefact storage is configured and a plan is uploaded during the plan phase,
+it will automatically be used during the apply phase. Once one of the destinations below is configured successfully, there is no additional configuration needed on your end to achieve apply-time artefact reuse.
+
 ### Github
 Digger can use Github Artifacts to store `terraform plan` outputs. In order to enable it you can set the following argument in digger_workflow.yml:
 
 ```
 upload-plan-destination: github
 ```
 
+<Warning>
+Github plan artefacts as a destination is currently deprecated due to a change in the Github API. For more information see [this issue](https://github.com/diggerhq/digger/issues/1702).
+</Warning>
+
 ### GCP
 You can also configure plan outputs to be uploaded to GCP Buckets instead. This is handy in case you want your plan outputs to stay within your organisation's approved storage providers for security or compliance reasons.
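
The GCP configuration details are outside this hunk. As a non-authoritative sketch only (the destination value and the bucket key below are assumptions for illustration; use whatever keys the GCP section of the doc actually specifies), the digger_workflow.yml change would follow the same pattern as the Github one:

```
# hypothetical values for illustration; check the GCP section of this doc for the real keys
upload-plan-destination: gcp
google-storage-plan-artefact-bucket: my-digger-plan-artefacts
```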