
Configuring Magento 2 to start storing files in your bucket is done using a single command.

**Hypernode Object Storage**
```bash
something something?
```
**Contributor Author:** How does the command look if using OpenStack's object storage?

**Contributor:** Which command? `hypernode-object-storage --help`?

**Contributor Author:** No, this is the Magento 2 command to make it use object storage.
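
Going by the S3-compatible example further down, the Magento 2 command for Hypernode Object Storage would presumably follow the same `setup:config:set` pattern with the `aws-s3` driver; the bucket name, region, and endpoint below are hypothetical placeholders, not confirmed values:

```bash
# Sketch only: mirrors the S3-compatible setup:config:set example below.
# The bucket name, region, and endpoint URL are hypothetical placeholders.
bin/magento setup:config:set \
  --remote-storage-driver="aws-s3" \
  --remote-storage-bucket="my_bucket_name" \
  --remote-storage-region="us-east-1" \
  --remote-storage-endpoint="https://my-hypernode-object-storage.endpoint.com"
```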


**AWS S3**

```bash
bin/magento setup:config:set \
--remote-storage-endpoint="https://my-s3-compatible.endpoint.com"
```

## Syncing the files (efficiently)

Magento provides an official method for syncing files using the following command (not recommended):

```bash
bin/magento remote-storage:sync
```
However, for significantly improved performance, you can use the following alternative:

```bash
hypernode-object-storage objects sync pub/media/ s3://my_bucket_name/media/
hypernode-object-storage objects sync var/import_export s3://my_bucket_name/import_export
```

The `hypernode-object-storage objects sync` command runs the sync process in the background
and provides the Process ID (PID). You can monitor the sync progress using:

```bash
hypernode-object-storage objects show PID
```

Alternatively, you can use the AWS CLI directly:

```bash
aws s3 sync pub/media/ s3://my_bucket_name/media/
aws s3 sync var/import_export s3://my_bucket_name/import_export
```

Both methods are significantly faster than Magento's built-in sync, as `aws s3 sync` handles uploads concurrently.
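
That concurrency is also tunable on the AWS CLI side; the value of 20 below is only an illustrative example:

```bash
# Increase the number of parallel S3 transfers used by `aws s3 sync`
# (the AWS CLI default is 10; 20 is an example value).
aws configure set default.s3.max_concurrent_requests 20
```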

## The storage flag file in the bucket

Magento's S3 implementation creates a file called `storage.flag` purely to test whether the connection works; it is not a magic marker file ([source](https://github.com/magento/magento2/blob/6f4805f82bb7511f72935daa493d48ebda3d9039/app/code/Magento/AwsS3/Driver/AwsS3.php#L104)).
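
To verify that the connection test actually reached your bucket, a simple listing with the AWS CLI is enough (the bucket name is taken from the earlier examples, and no bucket prefix is assumed):

```bash
# List the connection-test file created by Magento's AwsS3 driver
aws s3 ls s3://my_bucket_name/storage.flag
```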

## Serving assets from your S3 bucket

To serve media assets directly from your S3 bucket, you need to adjust your Nginx configuration.
Fortunately, `hypernode-manage-vhosts` simplifies this process for you. If you're using Hypernode's object storage solution,
simply run the following command for the relevant vhosts:

```bash
hmv example.com --object-storage
```
### Using a custom object storage solution

If you're using a custom storage provider, such as Amazon S3, you'll need to specify the bucket name and URL manually:

```bash
hmv example.com --object-storage --object-storage-bucket mybucket --object-storage-url https://example_url.com
```

### Switching back to Hypernode defaults

If you previously set a custom bucket and URL but want to revert to Hypernode's default object storage, use the `--object-storage-defaults` flag:

```bash
hmv example.com --object-storage-defaults
```

### Configuring Amazon S3 bucket policies

If you’re using Amazon S3, ensure that your S3 bucket policies are properly configured so that only the `/media` directory is publicly accessible. For example:

```json
{