<!-- Explain the changes introduced in your PR -->
Replaced the mention of AWS RDS with Google Cloud SQL on the page describing how to install in Google Cloud.
Fixed an error.
## Pull Request approval
Although pull request approval is not enforced for this repository in
order to reduce friction, merging without a review will generate a
ticket for the docs team to review your changes. So if possible, have
your pull request approved before merging.
docs/admin/deploy/docker-compose/google_cloud.mdx: 6 additions & 6 deletions
@@ -27,14 +27,14 @@ Click **Create Instance** in your [Google Cloud Compute Engine Console](https://
1. Expand the **Advanced options** section and the **Disks** section within to add an additional disk to store data from the Sourcegraph Docker instance.
1. Click **+ ADD NEW DISK** to set up the new disk with the following settings:
* `Description`: "Disk for storing Docker data for Sourcegraph" (or something similarly descriptive)
* `Disk source type`: Blank disk
* `Disk type`: SSD persistent disk
* `Size`: `250GB` minimum
- * Sourcegraph needs at least as much space as all your repositories combined take up
- * Allocating as much disk space as you can upfront minimizes the need for [resizing this disk](https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd) later on
- * `(optional, recommended) Snapshot schedule`: The most straightforward way of automatically backing up Sourcegraph's data is to set up a [snapshot schedule](https://cloud.google.com/compute/docs/disks/scheduled-snapshots) for this disk. We strongly recommend that you take the time to do so here.
+ * Sourcegraph recommends 3x the storage space of all your repos combined, as it needs to store the repos, indexes, databases, etc.
+ * Allocate as much disk space as you can upfront to reduce the need to later [resize this disk](https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd)
+ * `(Recommended) Snapshot schedule`: We strongly recommend that you configure a [snapshot schedule](https://cloud.google.com/compute/docs/disks/scheduled-snapshots) for this disk.
* `Attachment settings - Mode`: Read/write
* `Attachment settings - Deletion rule`: Keep disk
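If you prefer to script this step, a rough `gcloud` equivalent of the disk settings above is sketched below. The disk name, zone, region, retention period, and start time are illustrative placeholders rather than values from the docs, so adjust them to your environment:

```sh
# Create the blank SSD persistent disk for Docker data (name and zone are placeholders).
gcloud compute disks create sourcegraph-docker-data \
  --description="Disk for storing Docker data for Sourcegraph" \
  --type=pd-ssd \
  --size=250GB \
  --zone=us-central1-a

# Create a daily snapshot schedule for automatic backups (retention and start time are placeholders).
gcloud compute resource-policies create snapshot-schedule sourcegraph-daily-snapshots \
  --region=us-central1 \
  --daily-schedule \
  --start-time=04:00 \
  --max-retention-days=14

# Attach the snapshot schedule to the data disk.
gcloud compute disks add-resource-policies sourcegraph-docker-data \
  --resource-policies=sourcegraph-daily-snapshots \
  --zone=us-central1-a
```

A disk created this way still needs to be attached to the instance; the console flow above does that as part of **+ ADD NEW DISK**.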
@@ -155,9 +155,9 @@ Please refer to the [Docker Compose upgrade docs](/admin/deploy/docker-compose/u
Data is persisted within a [Docker volume](https://docs.docker.com/storage/volumes/) as defined in the [deployment repository](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml). The startup script configures Docker using a [daemon configuration file](https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file) to store all the data on the attached data volume, which is mounted at `/mnt/docker-data`, where volumes are stored within `/mnt/docker-data/volumes`.
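A minimal sketch of such a daemon configuration file, assuming the standard `/etc/docker/daemon.json` location, would be:

```json
{
  "data-root": "/mnt/docker-data"
}
```

With `data-root` pointed at the attached disk's mount point, named volumes created by Docker Compose land under `/mnt/docker-data/volumes`; the file actually written by the startup script may set additional options.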
- The most straightforward method to back up the data is to [snapshot the entire `/mnt/docker-data` volume](https://cloud.google.com/compute/docs/disks/create-snapshots) automatically using a [snapshot schedule](https://cloud.google.com/compute/docs/disks/scheduled-snapshots). You can also [set up a snapshot schedule](https://cloud.google.com/compute/docs/disks/scheduled-snapshots) after your instance has been created.
+ The most straightforward method to back up the data is to [snapshot the entire `/mnt/docker-data` volume](https://cloud.google.com/compute/docs/disks/create-snapshots) automatically using a [snapshot schedule](https://cloud.google.com/compute/docs/disks/scheduled-snapshots).
- <span class="badge badge-note">RECOMMENDED</span> Using an external Postgres service such as [AWS RDS for PostgreSQL](https://aws.amazon.com/rds/) takes care of backing up all the user data for you. If the Sourcegraph instance ever dies or gets destroyed, creating a fresh new instance connected to the old external Postgres service will get Sourcegraph back to its previous state.
+ <span class="badge badge-note">RECOMMENDED</span> Using an external Postgres service such as Google Cloud SQL takes care of backing up user data for you. If the Sourcegraph instance ever dies or gets destroyed, creating a fresh new instance connected to the old external Postgres service will get Sourcegraph back to its previous state.
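As an illustrative sketch only (not steps from this page), pointing the Docker Compose deployment at an external Postgres instance such as Cloud SQL is typically done by overriding the database connection environment variables on the frontend service. The host, credentials, and even the service and variable names below are placeholders to verify against Sourcegraph's external database documentation:

```yaml
# docker-compose.override.yaml -- hypothetical values for connecting to Cloud SQL.
services:
  sourcegraph-frontend-0:
    environment:
      - PGHOST=10.1.2.3        # Cloud SQL private IP or Cloud SQL Auth Proxy address (placeholder)
      - PGPORT=5432
      - PGUSER=sourcegraph
      - PGPASSWORD=change-me   # placeholder; use your own secret management
      - PGDATABASE=sourcegraph
      - PGSSLMODE=require
```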