57 changes: 52 additions & 5 deletions docs/admin/external_services/object_storage.mdx
@@ -1,16 +1,21 @@
# Using a managed object storage service (S3 or GCS)

By default, Sourcegraph will use a `sourcegraph/blobstore` server bundled with the instance to temporarily store code graph indexes uploaded by users.
By default, Sourcegraph will use a `sourcegraph/blobstore` server bundled with the instance to temporarily store [code graph indexes](../../code-search/code-navigation/precise_code_navigation) uploaded by users as well as the results of [search jobs](../../code-search/types/search-jobs).

You can alternatively configure your instance to store this data in an S3 or GCS bucket. Doing so may decrease your hosting costs, as persistent volumes are often more expensive than the same amount of space in an object storage service.

To target a managed object storage service, you will need to set a handful of environment variables for configuration and authentication to the target service. **If you are running a sourcegraph/server deployment, set the environment variables on the server container. Otherwise, if running via Docker-compose or Kubernetes, set the environment variables on the `frontend`, `worker`, and `precise-code-intel-worker` containers.**
## Code Graph Indexes

To target a managed object storage service for storing [code graph index uploads](../../code-search/code-navigation/precise_code_navigation), you will need to set a handful of environment variables for configuration and authentication to the target service.

- If you are running a `sourcegraph/server` deployment, set the environment variables on the server container
- If you are running via Docker-compose or Kubernetes, set the environment variables on the `frontend`, `worker`, and `precise-code-intel-worker` containers

### Using S3

To target an S3 bucket you've already provisioned, set the following environment variables. Authentication can be done through [an access and secret key pair](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) (and optional session token), or via the EC2 metadata API.

**_Warning:_** Remember never to commit aws access keys in git. Consider using a secret handling service offered by your cloud provider.
<Callout type="warning"> Never commit AWS access keys in Git. You should consider using a secret handling service offered by your cloud provider. </Callout>

- `PRECISE_CODE_INTEL_UPLOAD_BACKEND=S3`
- `PRECISE_CODE_INTEL_UPLOAD_BUCKET=<my bucket name>`
@@ -21,9 +26,9 @@ To target an S3 bucket you've already provisioned, set the following environment
- `PRECISE_CODE_INTEL_UPLOAD_AWS_USE_EC2_ROLE_CREDENTIALS=true` (optional; set to use EC2 metadata API over static credentials)
- `PRECISE_CODE_INTEL_UPLOAD_AWS_REGION=us-east-1` (default)

**_Note:_** If a non-default region is supplied, ensure that the subdomain of the endpoint URL (_the `AWS_ENDPOINT` value_) matches the target region.
<Callout type="note"> If a non-default region is supplied, ensure that the subdomain of the endpoint URL (_the `AWS_ENDPOINT` value_) matches the target region. </Callout>

> NOTE: You don't need to set the `PRECISE_CODE_INTEL_UPLOAD_AWS_ACCESS_KEY_ID` environment variable when using `PRECISE_CODE_INTEL_UPLOAD_AWS_USE_EC2_ROLE_CREDENTIALS=true` because role credentials will be automatically resolved. Attach the IAM role to the EC2 instances hosting the `frontend`, `worker`, and `precise-code-intel-worker` containers in a multi-node environment.
<Callout type="tip"> You don't need to set the `PRECISE_CODE_INTEL_UPLOAD_AWS_ACCESS_KEY_ID` environment variable when using `PRECISE_CODE_INTEL_UPLOAD_AWS_USE_EC2_ROLE_CREDENTIALS=true` because role credentials will be automatically resolved. Attach the IAM role to the EC2 instances hosting the `frontend`, `worker`, and `precise-code-intel-worker` containers in a multi-node environment. </Callout>


### Using GCS
@@ -42,3 +47,45 @@ If you would like to allow your Sourcegraph instance to control the creation and

- `PRECISE_CODE_INTEL_UPLOAD_MANAGE_BUCKET=true`
- `PRECISE_CODE_INTEL_UPLOAD_TTL=168h` (default)
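
Sketched in the same Docker Compose style as above, and assuming the credentials you configured are permitted to create buckets and set lifecycle rules, this could look like:

```yaml
x-code-graph-storage-env: &code-graph-storage-env
  # ...S3 or GCS settings from the sections above...
  PRECISE_CODE_INTEL_UPLOAD_MANAGE_BUCKET: "true"
  # TTL applied to uploads via the bucket's lifecycle configuration;
  # 168h (seven days) is the default.
  PRECISE_CODE_INTEL_UPLOAD_TTL: 168h
```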

## Search Job Results

To target a third-party managed object storage service for storing [search job results](../../code-search/types/search-jobs), you must set a handful of environment variables for configuration and authentication to the target service.

- If you are running a `sourcegraph/server` deployment, set the environment variables on the server container
- If you are running via Docker-compose or Kubernetes, set the environment variables on the `frontend` and `worker` containers

### Using S3

Set the following environment variables to target an S3 bucket you've already provisioned. Authentication can be done through [an access and secret key pair](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) (and optional session token) or via the EC2 metadata API.

<Callout type="warning"> Never commit AWS access keys in Git. You should consider using a secret handling service offered by your cloud provider.</Callout>

- `SEARCH_JOBS_UPLOAD_BACKEND=S3`
- `SEARCH_JOBS_UPLOAD_BUCKET=<my bucket name>`
- `SEARCH_JOBS_UPLOAD_AWS_ENDPOINT=https://s3.us-east-1.amazonaws.com`
- `SEARCH_JOBS_UPLOAD_AWS_ACCESS_KEY_ID=<your access key>`
- `SEARCH_JOBS_UPLOAD_AWS_SECRET_ACCESS_KEY=<your secret key>`
- `SEARCH_JOBS_UPLOAD_AWS_SESSION_TOKEN=<your session token>` (optional)
- `SEARCH_JOBS_UPLOAD_AWS_USE_EC2_ROLE_CREDENTIALS=true` (optional; set to use EC2 metadata API over static credentials)
- `SEARCH_JOBS_UPLOAD_AWS_REGION=us-east-1` (default)

<Callout type="note"> If a non-default region is supplied, ensure that the subdomain of the endpoint URL (the `AWS_ENDPOINT` value) matches the target region.</Callout>

<Callout type="tip"> You don't need to set the `SEARCH_JOBS_UPLOAD_AWS_ACCESS_KEY_ID` environment variable when using `SEARCH_JOBS_UPLOAD_AWS_USE_EC2_ROLE_CREDENTIALS=true` because role credentials will be automatically resolved.</Callout>

### Using GCS

Set the following environment variables to target a GCS bucket you've already provisioned. Authentication is done through a service account key, supplied either as a path to a volume-mounted file or as the key's contents passed directly in an environment variable.

- `SEARCH_JOBS_UPLOAD_BACKEND=GCS`
- `SEARCH_JOBS_UPLOAD_BUCKET=<my bucket name>`
- `SEARCH_JOBS_UPLOAD_GCP_PROJECT_ID=<my project id>`
- `SEARCH_JOBS_UPLOAD_GOOGLE_APPLICATION_CREDENTIALS_FILE=</path/to/file>`
- `SEARCH_JOBS_UPLOAD_GOOGLE_APPLICATION_CREDENTIALS_FILE_CONTENT=<{"my": "content"}>`
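
For the file-based variant, a Docker Compose sketch could mount the service account key into both containers and point the `..._FILE` variable at it (paths, project ID, and service names are assumptions):

```yaml
x-search-jobs-gcs: &search-jobs-gcs
  environment:
    SEARCH_JOBS_UPLOAD_BACKEND: GCS
    SEARCH_JOBS_UPLOAD_BUCKET: my-search-job-results # hypothetical bucket name
    SEARCH_JOBS_UPLOAD_GCP_PROJECT_ID: my-project # hypothetical project ID
    SEARCH_JOBS_UPLOAD_GOOGLE_APPLICATION_CREDENTIALS_FILE: /secrets/gcs-key.json
  volumes:
    # Mount the service account key read-only into the container.
    - ./secrets/gcs-key.json:/secrets/gcs-key.json:ro

services:
  sourcegraph-frontend-0:
    <<: *search-jobs-gcs
  worker:
    <<: *search-jobs-gcs
```

Note that YAML merge keys (`<<`) bring in the `environment` and `volumes` stanzas only if the service doesn't already define them in the same file, so in practice you may prefer to repeat the stanzas explicitly on each service.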

### Provisioning buckets

If you would like to allow your Sourcegraph instance to control the creation and lifecycle configuration of the target buckets, set the following environment variables:

- `SEARCH_JOBS_UPLOAD_MANAGE_BUCKET=true`
40 changes: 1 addition & 39 deletions docs/code-search/types/search-jobs.mdx
@@ -40,45 +40,7 @@ Search Jobs requires an object storage to store the results of your search jobs.
By default, Search Jobs stores results using our bundled `blobstore` service.
If the `blobstore` service is deployed, and you want to use it to store results from Search Jobs, you don't need to configure anything.

To use a third party managed object storage service, you must set a handful of environment variables for configuration and authentication to the target service.

- If you are running a `sourcegraph/server` deployment, set the environment variables on the server container
- If you are running via Docker-compose or Kubernetes, set the environment variables on the `frontend` and `worker` containers

### Using S3

Set the following environment variables to target an S3 bucket you've already provisioned. Authentication can be done through [an access and secret key pair](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) (and optionally through session token) or via the EC2 metadata API.

<Callout type="tip"> Never commit AWS access keys in Git. You should consider using a secret handling service offered by your cloud provider.</Callout>

- `SEARCH_JOBS_UPLOAD_BACKEND=S3`
- `SEARCH_JOBS_UPLOAD_BUCKET=<my bucket name>`
- `SEARCH_JOBS_UPLOAD_AWS_ENDPOINT=https://s3.us-east-1.amazonaws.com`
- `SEARCH_JOBS_UPLOAD_AWS_ACCESS_KEY_ID=<your access key>`
- `SEARCH_JOBS_UPLOAD_AWS_SECRET_ACCESS_KEY=<your secret key>`
- `SEARCH_JOBS_UPLOAD_AWS_SESSION_TOKEN=<your session token>` (optional)
- `SEARCH_JOBS_UPLOAD_AWS_USE_EC2_ROLE_CREDENTIALS=true` (optional; set to use EC2 metadata API over static credentials)
- `SEARCH_JOBS_UPLOAD_AWS_REGION=us-east-1` (default)

<Callout type="note"> If a non-default region is supplied, ensure that the subdomain of the endpoint URL (the `AWS_ENDPOINT` value) matches the target region.</Callout>

<Callout type="note"> You don't need to set the `SEARCH_JOBS_UPLOAD_AWS_ACCESS_KEY_ID` environment variable when using `SEARCH_JOBS_UPLOAD_AWS_USE_EC2_ROLE_CREDENTIALS=true` because role credentials will be automatically resolved.</Callout>

### Using GCS

Set the following environment variables to target a GCS bucket you've already provisioned. Authentication is done through a service account key, either as a path to a volume-mounted file or the contents read in as an environment variable payload.

- `SEARCH_JOBS_UPLOAD_BACKEND=GCS`
- `SEARCH_JOBS_UPLOAD_BUCKET=<my bucket name>`
- `SEARCH_JOBS_UPLOAD_GCP_PROJECT_ID=<my project id>`
- `SEARCH_JOBS_UPLOAD_GOOGLE_APPLICATION_CREDENTIALS_FILE=</path/to/file>`
- `SEARCH_JOBS_UPLOAD_GOOGLE_APPLICATION_CREDENTIALS_FILE_CONTENT=<{"my": "content"}>`

### Provisioning buckets

If you would like to allow your Sourcegraph instance to control the creation and lifecycle configuration management of the target buckets, set the following environment variables:

- `SEARCH_JOBS_UPLOAD_MANAGE_BUCKET=true`
To use a third-party managed object storage service, see the instructions in [externalizing object storage](../../admin/external_services/object_storage#search-job-results).

## Supported result types
