This repository was archived by the owner on Sep 12, 2018. It is now read-only.

Commit 75466ca

Add ceph-s3 config section; clean up Ceph documentation
1 parent df78250 commit 75466ca

2 files changed: +27 additions, −27 deletions

README.md

Lines changed: 10 additions & 26 deletions
````diff
@@ -25,7 +25,7 @@ The fastest way to get running:
 That will use the
 [official image from the Docker index](https://registry.hub.docker.com/_/registry/).
 
-Here is another example that will launch a container on port 5000, and store images in an Amazon S3 bucket:
+Here is another example that will launch a container on port 5000, and store images in an Amazon S3 bucket:
 ```
 docker run \
 -e SETTINGS_FLAVOR=s3 \
````
```diff
@@ -62,6 +62,7 @@ In the `config_sample.yml` file, you'll see several sample flavors:
 1. `common`: used by all other flavors as base settings
 1. `local`: stores data on the local filesystem
 1. `s3`: stores data in an AWS S3 bucket
+1. `ceph-s3`: stores data in a Ceph cluster via a Ceph Object Gateway, using the S3 API
 1. `dev`: basic configuration using the `local` flavor
 1. `test`: used by unit tests
 1. `prod`: production configuration (basically a synonym for the `s3` flavor)
```
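With the `ceph-s3` flavor added above, the registry can be pointed at a Ceph Object Gateway much like the S3 example; a hypothetical invocation (the bucket name, endpoint, and credentials are placeholders, not values from this commit):

```
docker run \
 -e SETTINGS_FLAVOR=ceph-s3 \
 -e AWS_BUCKET=mybucket \
 -e STORAGE_PATH=/registry \
 -e AWS_KEY=myrgwkey \
 -e AWS_SECRET=myrgwsecret \
 -e AWS_HOST=radosgw.example.com \
 -e AWS_PORT=80 \
 -p 5000:5000 \
 registry
```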
```diff
@@ -156,9 +157,9 @@ When using the `config_sample.yml`, you can pass all options through as environment variables
 1. `boto_host`/`boto_port`: If you are using `storage: s3` the
 [standard boto config file locations](http://docs.pythonboto.org/en/latest/boto_config_tut.html#details)
 (`/etc/boto.cfg, ~/.boto`) will be used. If you are using a
-*non*-Amazon S3-compliant object store, in one of the boto config files'
+*non*-Amazon S3-compliant object store (such as Ceph), in one of the boto config files'
 `[Credentials]` section, set `boto_host`, `boto_port` as appropriate for the
-service you are using.
+service you are using. Alternatively, set `boto_host` and `boto_port` in the config file.
 1. `bugsnag`: The bugsnag API key (note that if you don't use the official docker container, you need to install the registry with bugsnag enabled: `pip install docker-registry[bugsnag]`)
 
 ### Authentication options
```
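For the boto-config-file route described in the hunk above, the relevant section of `~/.boto` (or `/etc/boto.cfg`) would look roughly like this; the hostname and port are illustrative placeholders:

```ini
[Credentials]
boto_host = radosgw.example.com
boto_port = 80
```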
```diff
@@ -184,7 +185,7 @@ When using the `config_sample.yml`, you can pass all options through as environment variables
 ##### Generating keys with `openssl`
 
 You will need to install the python-rsa package (`pip install rsa`) in addition to using `openssl`.
-Generating the public key using openssl will lead to producing a key in a format not supported by
+Generating the public key using openssl will lead to producing a key in a format not supported by
 the RSA library the registry is using.
 
 Generate private key:
```
```diff
@@ -204,7 +205,7 @@ can configure the backend with a configuration like:
 
 The `search_backend` setting selects the search backend to use. If
 `search_backend` is empty, no index is built, and the search endpoint always
-returns empty results.
+returns empty results.
 
 1. `search_backend`: The name of the search backend engine to use.
 Currently supported backends are:
```
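As context for the hunk above, enabling a search backend in the config file looks roughly like this; the `sqlalchemy_index_database` key and SQLite URL are illustrative (check `config_sample.yml` for the exact setting names):

```yaml
search_backend: sqlalchemy
sqlalchemy_index_database: sqlite:////tmp/docker-registry.db
```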
````diff
@@ -370,23 +371,6 @@ prod:
     s3_secret_key: xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T
 ```
 
-Example *non*-Amazon S3-compliant object store (e.g. Ceph and Riak CS):
-```yaml
-prod:
-    storage: s3
-    s3_bucket: acme-docker
-    s3_secure: false
-    s3_encrypt: false
-    storage_path: /registry
-    s3_access_key: AKIAHSHB43HS3J92MXZ
-    s3_secret_key: xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T
-    boto_host: myowns3.com
-    boto_port: 80
-    boto_debug: true
-    boto_calling_format: OrdinaryCallingFormat
-
-```
-
 Run the Registry
 ----------------
 
````
````diff
@@ -417,10 +401,10 @@ is already taken, find out which container is already using it by running `docker ps`
 ```
 docker run \
 -e SETTINGS_FLAVOR=s3 \
--e AWS_BUCKET=acme-docker \
+-e AWS_BUCKET=mybucket \
 -e STORAGE_PATH=/registry \
--e AWS_KEY=AKIAHSHB43HS3J92MXZ \
--e AWS_SECRET=xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T \
+-e AWS_KEY=myawskey \
+-e AWS_SECRET=myawssecret \
 -e SEARCH_BACKEND=sqlalchemy \
 -p 5000:5000 \
 -p AWS_HOST=myowns3.com \
````
```diff
@@ -495,7 +479,7 @@ behind a nginx server which supports chunked transfer-encoding (nginx >= 1.3.9).
 
 #### nginx
 
-[Here is an nginx configuration file example.](https://github.com/docker/docker-registry/blob/master/contrib/nginx/nginx.conf), which applies to versions < 1.3.9 which are compiled with the [HttpChunkinModule](http://wiki.nginx.org/HttpChunkinModule).
+[Here is an nginx configuration file example.](https://github.com/docker/docker-registry/blob/master/contrib/nginx/nginx.conf), which applies to versions < 1.3.9 which are compiled with the [HttpChunkinModule](http://wiki.nginx.org/HttpChunkinModule).
 
 [This is another example nginx configuration file](https://github.com/docker/docker-registry/blob/master/contrib/nginx/nginx_1-3-9.conf) that applies to versions of nginx greater than 1.3.9 that have support for the chunked_transfer_encoding directive.
 
```
config/config_sample.yml

Lines changed: 17 additions & 1 deletion
```diff
@@ -86,7 +86,23 @@ s3: &s3
     s3_secret_key: _env:AWS_SECRET
     boto_host: _env:AWS_HOST
     boto_port: _env:AWS_PORT
-    boto_debug: _env:AWS_DEBUG
+
+# Ceph Object Gateway Configuration
+# See http://ceph.com/docs/master/radosgw/ for details on installing this service.
+ceph-s3: &ceph-s3
+    <<: *common
+    storage: s3
+    s3_region: ~
+    s3_bucket: _env:AWS_BUCKET
+    s3_encrypt: _env:AWS_ENCRYPT:false
+    s3_secure: _env:AWS_SECURE:false
+    storage_path: _env:STORAGE_PATH:/registry
+    s3_access_key: _env:AWS_KEY
+    s3_secret_key: _env:AWS_SECRET
+    boto_bucket: _env:AWS_BUCKET
+    boto_host: _env:AWS_HOST
+    boto_port: _env:AWS_PORT
+    boto_debug: _env:AWS_DEBUG:0
     boto_calling_format: _env:AWS_CALLING_FORMAT
 
 # Google Cloud Storage Configuration
```
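The `_env:NAME` and `_env:NAME:default` placeholders above are resolved from the process environment when the registry loads its configuration. A minimal sketch of that substitution; `resolve` is a hypothetical helper, not the registry's actual loader:

```python
import os


def resolve(value, env=os.environ):
    """Resolve a config value of the form '_env:NAME' or '_env:NAME:default'.

    Values without the '_env:' prefix are returned unchanged.
    """
    if not isinstance(value, str) or not value.startswith("_env:"):
        return value
    _, _, rest = value.partition("_env:")
    name, sep, default = rest.partition(":")
    if sep:  # a default was supplied after the variable name
        return env.get(name, default)
    return env.get(name)  # None if the variable is unset and no default given


# STORAGE_PATH falls back to its default when unset, and is overridden when set
print(resolve("_env:STORAGE_PATH:/registry", env={}))                          # /registry
print(resolve("_env:STORAGE_PATH:/registry", env={"STORAGE_PATH": "/data"}))   # /data
```

Note that only the first colon after the name separates the default, so defaults that themselves contain colons (such as database URLs) survive intact.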
