Commit b1ff963

BookWyrm Bot authored and github-actions[bot] committed
deploy versions
1 parent 4f80066 commit b1ff963

File tree

177 files changed (+19991 −961 lines)


locale/de_DE/content/reference/environment.md

Lines changed: 602 additions & 0 deletions
Large diffs are not rendered by default.

locale/de_DE/content/running_bookwyrm/external-storage.md

Lines changed: 38 additions & 11 deletions
@@ -1,5 +1,5 @@
 ---
-Title: Externer Speicher Date: 2021-06-07 Order: 7
+Title: External Storage Date: 2021-06-07 Order: 8
 ---

 By default, BookWyrm uses local storage for static assets (favicon, default avatar, etc.) and media (user avatars, book cover images, etc.), but you can use an external storage service to serve these files. BookWyrm uses `django-storages` to connect to external storage such as S3-compatible services, Apache Libcloud, or SFTP.
@@ -36,12 +36,24 @@ Edit the `.env` file by uncommenting the following lines:
 - `AWS_ACCESS_KEY_ID`: your access key ID
 - `AWS_SECRET_ACCESS_KEY`: your secret access key
 - `AWS_STORAGE_BUCKET_NAME`: your bucket name
-- `AWS_S3_REGION_NAME`: e.g. `"eu-west-1"` for AWS, `"fr-par"` for Scaleway or `"nyc3"` for Digital Ocean
+- `AWS_S3_REGION_NAME`: e.g. `"eu-west-1"` for AWS, `"fr-par"` for Scaleway, `"nyc3"` for Digital Ocean or `"cluster-id"` for Linode

 If your S3-compatible service is Amazon AWS, you should be good to go. If not, you will also need to uncomment the following lines:

-- `AWS_S3_CUSTOM_DOMAIN`: the domain that will serve the assets, e.g. `"example-bucket-name.s3.fr-par.scw.cloud"` or `"${AWS_STORAGE_BUCKET_NAME}.${AWS_S3_REGION_NAME}.digitaloceanspaces.com"`
-- `AWS_S3_ENDPOINT_URL`: the S3 API endpoint, e.g. `"https://s3.fr-par.scw.cloud"` or `"https://${AWS_S3_REGION_NAME}.digitaloceanspaces.com"`
+- `AWS_S3_CUSTOM_DOMAIN`: the domain that will serve the assets:
+  - for Scaleway, e.g. `"example-bucket-name.s3.fr-par.scw.cloud"`
+  - for Digital Ocean, e.g. `"${AWS_STORAGE_BUCKET_NAME}.${AWS_S3_REGION_NAME}.digitaloceanspaces.com"`
+  - for Linode Object Storage, this should be set to the cluster domain, e.g. `"eu-central-1.linodeobjects.com"`
+- `AWS_S3_ENDPOINT_URL`: the S3 API endpoint:
+  - for Scaleway, e.g. `"https://s3.fr-par.scw.cloud"`
+  - for Digital Ocean, e.g. `"https://${AWS_S3_REGION_NAME}.digitaloceanspaces.com"`
+  - for Linode Object Storage, set this to the cluster domain, e.g. `"https://eu-central-1.linodeobjects.com"`
+
+For many S3-compatible services, the default `ACL` is `"public-read"`, and this is what BookWyrm defaults to. If you are using Backblaze (B2), you need to explicitly set the default ACL to be empty in your `.env` file:
+
+```
+AWS_DEFAULT_ACL=""
+```

 ### Copying local media to external storage

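Pulling these settings together, a completed `.env` section for a hypothetical Scaleway setup might look like the sketch below. Every value shown (keys, bucket name, region) is a placeholder for illustration, not something taken from this commit:

```shell
# Hypothetical .env fragment for an S3-compatible store on Scaleway.
# Replace every value with your own credentials, bucket and region.
USE_S3=true
AWS_ACCESS_KEY_ID=SCWEXAMPLEKEYID
AWS_SECRET_ACCESS_KEY=example-secret-access-key
AWS_STORAGE_BUCKET_NAME=example-bucket-name
AWS_S3_REGION_NAME=fr-par
# Only needed when the provider is not Amazon AWS:
AWS_S3_CUSTOM_DOMAIN=example-bucket-name.s3.fr-par.scw.cloud
AWS_S3_ENDPOINT_URL=https://s3.fr-par.scw.cloud
```

`AWS_DEFAULT_ACL` is deliberately left at its default here; only Backblaze (B2) users need to set it to an empty string.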
@@ -61,17 +73,14 @@ To enable S3-compatible external storage, you need to edit your `.env` file
 USE_S3=true
 ```

-If the external storage is served over HTTPS (which most are these days), you also need to make sure that `USE_HTTPS` is set to `true` so that images are loaded over HTTPS:
-
-```
-USE_HTTPS=true
-```
+**Note** that after `v0.7.5` all traffic is assumed to be HTTPS, so you need to ensure that your external storage is also served over HTTPS.

 #### Static assets

-After that, you need to run the following command to copy the static assets into your S3 bucket:
+Then, you will need to run the following commands to compile the themes and copy all static assets to your S3 bucket:

 ```bash
+./bw-dev compile_themes
 ./bw-dev collectstatic
 ```

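Before running these commands, it can be worth confirming that the variables from the previous section are actually uncommented. A minimal sketch, assuming the `.env` file sits in the current directory; the variable list is illustrative, not exhaustive:

```shell
# Sketch: check that key S3 variables are uncommented in .env.
# A commented-out line ("#AWS_...") will not match "^VAR=".
for var in USE_S3 AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_STORAGE_BUCKET_NAME; do
  if grep -q "^${var}=" .env; then
    echo "ok: ${var} is set"
  else
    echo "missing: ${var} (still commented out?)"
  fi
done
```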
@@ -109,7 +118,25 @@ Then run the following command:

 No output means it should be fine.

-If you are starting a new BookWyrm instance, you can go back to the setup instructions right away. If not, read on.
+### Additional Step for Linode Object Storage Users
+
+For Linode, you now need to alter the `.env` file to ensure that the generated links to your storage objects are correct. If you miss this step, all links to images and static files (such as CSS) will be broken. To fix this, insert the bucket name into `AWS_S3_CUSTOM_DOMAIN`; for example, if your `AWS_STORAGE_BUCKET_NAME` is `"my-bookwyrm-bucket"`, set it to:
+
+```
+AWS_S3_CUSTOM_DOMAIN=my-bookwyrm-bucket.cluster-id.linodeobjects.com
+```
+
+*Note*: From this point on, any `bw-dev` copy or sync commands will place objects into an incorrect location in your object store, so if you need to use them, revert to the previous setting, run the command, and then re-enable this setting.
+
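The Linode value is simply the bucket name prepended to the cluster domain. A small sketch of that assembly, using hypothetical placeholder values:

```shell
# Sketch: assemble the Linode AWS_S3_CUSTOM_DOMAIN value.
# Both inputs are hypothetical placeholders.
AWS_STORAGE_BUCKET_NAME="my-bookwyrm-bucket"
LINODE_CLUSTER_DOMAIN="eu-central-1.linodeobjects.com"
AWS_S3_CUSTOM_DOMAIN="${AWS_STORAGE_BUCKET_NAME}.${LINODE_CLUSTER_DOMAIN}"
echo "${AWS_S3_CUSTOM_DOMAIN}"
# prints my-bookwyrm-bucket.eu-central-1.linodeobjects.com
```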
+### User export and import files
+
+After `v0.7.5`, user export and import files are saved to local storage even if `USE_S3` is set to `true`. It is generally safer to use local storage for these files, and to keep your storage usage under control by setting up the task that periodically deletes old export and import files.
+
+If you are running a large instance, you may prefer to use S3 for these files as well. If so, you will need to set the environment variable `USE_S3_FOR_EXPORTS` to `true`.
+
+### New Instance
+
+If you are starting a new BookWyrm instance, you can go back to the setup instructions right now: [[Docker](install-prod.html)] [[Dockerless](install-prod-dockerless.html)]. If not, read on.

 ### Restart your instance

0 commit comments