
Commit b2ca8a4

Fix errors (#686)
Replaced the mention of AWS RDS with Google Cloud SQL on the page for how to install in Google Cloud. Fixed errors.
1 parent 5e6dbff commit b2ca8a4

File tree

1 file changed: 6 additions, 6 deletions

docs/admin/deploy/docker-compose/google_cloud.mdx

Lines changed: 6 additions & 6 deletions
@@ -27,14 +27,14 @@ Click **Create Instance** in your [Google Cloud Compute Engine Console](https://
 1. Expand the **Advanced options** section and the **Disks** section within to add an additional disk to store data from the Sourcegraph Docker instance.

 1. Click **+ ADD NEW DISK** to setup the new disk with the following settings:
-   * `Name`: "sourcegraph-docker-disk" (or something similarly descriptive)
+   * `Name`: "sourcegraph-docker-data" (or something similarly descriptive)
    * `Description`: "Disk for storing Docker data for Sourcegraph" (or something similarly descriptive)
    * `Disk source type`: Blank disk
    * `Disk type`: SSD persistent disk
    * `Size`: `250GB` minimum
-     * Sourcegraph needs at least as much space as all your repositories combined take up
-     * Allocating as much disk space as you can upfront minimize the need for [resizing this disk](https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd) later on
-   * `(optional, recommended) Snapshot schedule`: The most straightfoward way of automatically backing Sourcegraph's data is to set up a [snapshot schedule](https://cloud.google.com/compute/docs/disks/scheduled-snapshots) for this disk. We strongly recommend that you take the time to do so here.
+     * Sourcegraph recommends 3x the storage space of all your repos combined, as it needs to store the repos, indexes, databases, etc.
+     * Allocate as much disk space as you can upfront to reduce the need to later [resize this disk](https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd)
+   * `(Recommended) Snapshot schedule`: We strongly recommend that you configure a [snapshot schedule](https://cloud.google.com/compute/docs/disks/scheduled-snapshots) for this disk.
    * `Attachment settings - Mode`: Read/write
    * `Attachment settings - Deletion rule`: Keep disk
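For reference, the console steps in this hunk can also be performed with the gcloud CLI. This is a hedged sketch, not part of the commit: the zone, region, schedule name, retention period, and start time below are assumptions you would adjust for your own project.

```shell
# Create the data disk with the settings described above
# (zone is an assumption; pick the zone of your VM)
gcloud compute disks create sourcegraph-docker-data \
  --description="Disk for storing Docker data for Sourcegraph" \
  --type=pd-ssd \
  --size=250GB \
  --zone=us-central1-a

# Recommended: create a snapshot schedule and attach it to the disk
# for automatic backups (name, retention, and start time are assumptions)
gcloud compute resource-policies create snapshot-schedule sourcegraph-daily-backup \
  --region=us-central1 \
  --daily-schedule \
  --start-time=04:00 \
  --max-retention-days=14

gcloud compute disks add-resource-policies sourcegraph-docker-data \
  --resource-policies=sourcegraph-daily-backup \
  --zone=us-central1-a
```

These commands are non-destructive to create; the snapshot schedule then runs automatically without further action.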

@@ -155,9 +155,9 @@ Please refer to the [Docker Compose upgrade docs](/admin/deploy/docker-compose/u

 Data is persisted within a [Docker volume](https://docs.docker.com/storage/volumes/) as defined in the [deployment repository](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml). The startup script configures Docker using a [daemon configuration file](https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file) to store all the data on the attached data volume, which is mounted at `/mnt/docker-data`, where volumes are stored within `/mnt/docker-data/volumes`.

-The most straightforward method to backup the data is to [snapshot the entire `/mnt/docker-data` volume](https://cloud.google.com/compute/docs/disks/create-snapshots) automatically using a [snapshot schedule](https://cloud.google.com/compute/docs/disks/scheduled-snapshots). You can also [set up a snapshot snapshot schedule](https://cloud.google.com/compute/docs/disks/scheduled-snapshots) after your instance has been created.
+The most straightforward method to backup the data is to [snapshot the entire `/mnt/docker-data` volume](https://cloud.google.com/compute/docs/disks/create-snapshots) automatically using a [snapshot schedule](https://cloud.google.com/compute/docs/disks/scheduled-snapshots).

-<span class="badge badge-note">RECOMMENDED</span> Using an external Postgres service such as [AWS RDS for PostgreSQL](https://aws.amazon.com/rds/) takes care of backing up all the user data for you. If the Sourcegraph instance ever dies or gets destroyed, creating a fresh new instance connected to the old external Postgres service will get Sourcegraph back to its previous state.
+<span class="badge badge-note">RECOMMENDED</span> Using an external Postgres service such as Google Cloud SQL takes care of backing up user data for you. If the Sourcegraph instance ever dies or gets destroyed, creating a fresh new instance connected to the old external Postgres service will get Sourcegraph back to its previous state.

 ---
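The daemon configuration file mentioned in the context line above typically just points Docker's data root at the attached disk. A minimal sketch of what `/etc/docker/daemon.json` would contain to achieve the behavior described; the actual file generated by the startup script may set additional options:

```json
{
  "data-root": "/mnt/docker-data"
}
```

With `data-root` set this way, Docker stores named volumes under `/mnt/docker-data/volumes`, which is what makes snapshotting the single attached disk a complete backup of the volume data.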
