
Configuring Magento 2 to start storing files in your bucket is done using a single command.

**Hypernode Object Storage and other S3 compatible providers**

If you're using Hypernode Object Storage or a different provider than AWS S3, you need to specify the `--remote-storage-endpoint` option.

```bash
bin/magento setup:config:set \
--remote-storage-driver="aws-s3" \
--remote-storage-bucket="my_bucket_name" \
--remote-storage-region="provider-region" \
--remote-storage-key="abcd1234" \
--remote-storage-secret="abcd1234" \
--remote-storage-endpoint="https://my-s3-compatible.endpoint.com"
```
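
After running the command, the remote storage settings end up in `app/etc/env.php`. As a rough sketch (the exact generated layout may differ slightly, and the values below mirror the placeholders above), the relevant section looks like this:

```php
// app/etc/env.php (excerpt): remote storage section written by setup:config:set
'remote_storage' => [
    'driver' => 'aws-s3',
    'config' => [
        'bucket' => 'my_bucket_name',
        'region' => 'provider-region',
        'credentials' => [
            'key' => 'abcd1234',
            'secret' => 'abcd1234'
        ],
        'endpoint' => 'https://my-s3-compatible.endpoint.com'
    ]
]
```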

In the case of Hypernode Object Storage, you can retrieve the relevant information by running `hypernode-object-storage info` with the `--with-credentials` flag:

```console
app@testapp ~ # hypernode-object-storage info --with-credentials
+--------------------------------------+----------------+---------+------------+------------------------------------+------------+------------+
| UUID                                 | Name           | Plan    | Hypernodes | Management URL                     | Access Key | Secret Key |
+--------------------------------------+----------------+---------+------------+------------------------------------+------------+------------+
| 12345678-9012-3456-b7e3-19ab43df4a23 | testappbucket1 | OS200GB | testapp    | https://example.ams.objectstore.eu | abcd1234   | abcd1234   |
+--------------------------------------+----------------+---------+------------+------------------------------------+------------+------------+
```

**AWS S3**

```bash
bin/magento setup:config:set \
--remote-storage-driver="aws-s3" \
--remote-storage-bucket="my_bucket_name" \
--remote-storage-region="my-aws-region" \
--remote-storage-key="abcd1234" \
--remote-storage-secret="abcd1234"
```

## Syncing the files (efficiently)

Magento provides an official command for syncing the files, but it is not recommended because it is slow:

```bash
bin/magento remote-storage:sync
```

However, for better performance, you can use the following alternative:

```bash
hypernode-object-storage objects sync pub/media/ s3://my_bucket_name/media/
hypernode-object-storage objects sync var/import_export s3://my_bucket_name/import_export
```

The `hypernode-object-storage objects sync` command runs the sync process in the background
and provides the Process ID (PID). You can monitor the sync progress using:

```bash
hypernode-object-storage objects show PID
```

Alternatively, you can use the AWS CLI directly:

```bash
aws s3 sync pub/media/ s3://my_bucket_name/media/
aws s3 sync var/import_export s3://my_bucket_name/import_export
```

Both methods are significantly faster than Magento's built-in sync, as `aws s3 sync` handles uploads concurrently.
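
If you use the AWS CLI, the transfer concurrency can be tuned further via its S3 configuration; the values below are illustrative (the defaults are 10 concurrent requests and 8 MB chunks):

```shell
# Raise the number of concurrent S3 transfer requests
aws configure set default.s3.max_concurrent_requests 20
# Use larger multipart chunks for big media files
aws configure set default.s3.multipart_chunksize 16MB
```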

## The storage flag file in the bucket

Magento's S3 implementation creates a file called `storage.flag` purely to test whether the connection works; it is not a magic marker file ([source](https://github.com/magento/magento2/blob/6f4805f82bb7511f72935daa493d48ebda3d9039/app/code/Magento/AwsS3/Driver/AwsS3.php#L104)).

## Serving assets from your S3 bucket


To serve media assets directly from your S3 bucket, you need to adjust your Nginx configuration.
Fortunately, `hypernode-manage-vhosts` simplifies this process for you.
If you're using Hypernode's object storage solution, simply run the following command for the relevant vhosts:

```bash
hmv example.com --object-storage
```

### Using a custom object storage solution

If you're using a custom storage provider, such as Amazon S3, you'll need to specify the bucket name and URL manually:

```bash
hmv example.com --object-storage --object-storage-bucket mybucket --object-storage-url https://example_url.com
```

### Switching back to Hypernode defaults

If you previously set a custom bucket and URL but want to revert to Hypernode's default object storage, use the `--object-storage-defaults` flag:

```bash
hmv example.com --object-storage-defaults
```

### Configuring Amazon S3 bucket policies

If you're using Amazon S3, ensure that your S3 bucket policies are properly configured so that only the `/media` directory is publicly accessible. An illustrative policy (using the placeholder `my_bucket_name`) that grants public read access to `media/` only could look like this:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadMedia",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my_bucket_name/media/*"
        }
    ]
}
```

You can now use the credentials and the URL to configure remote storage for your application with the help of [this document](../../ecommerce-applications/magento-2/how-to-configure-remote-storage-for-magento-2-x.md).

### Managing objects in object storage

You can manage your objects using the `hypernode-object-storage objects` subcommand.
It supports all common operations (listing, copying, moving, and deleting files) and also lets you sync files in the background and monitor the progress of an ongoing sync.

```console
app@testhypernode ~ # hypernode-object-storage objects --help
usage: hypernode-object-storage objects [-h] {sync,cp,ls,mv,rm,show} ...

Manage objects in object storage

positional arguments:
  {sync,cp,ls,mv,rm,show}
    sync                Synchronize files between a local directory and an object storage location
    cp                  Copy a file or object from one location to another
    ls                  List objects in an S3 bucket or folder
    mv                  Move or rename a file or object
    rm                  Delete an object from S3
    show                Display the current status of an ongoing sync process

options:
  -h, --help            show this help message and exit
```

Note that `hypernode-object-storage objects` supports all optional flags available in `aws s3`, allowing you to customize its behavior for advanced configurations.
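
For example, the usual `aws s3 sync` flags can be passed straight through; the paths, bucket, and PID below are placeholders, and the output mirrors the format shown in the sync example further down:

```console
app@testhypernode ~ # hypernode-object-storage objects sync pub/media/ s3://my_bucket_name/media/ --delete --exclude "*.tmp"
Syncing objects from pub/media/ to s3://my_bucket_name/media/...
Sync process started with PID 1234 in the background.
```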

#### Syncing files and monitoring progress

Syncing files between your local directory and object storage is simple. Run the following command:

```console
app@testhypernode ~ # hypernode-object-storage objects sync /example/local/path/ s3://example/bucket/uri/
Syncing objects from /example/local/path/ to s3://example/bucket/uri/...
Sync process started with PID 1234 in the background.
```

The `sync` operation runs in the background, and you can monitor its progress by using the `show` command, for example:

```console
app@testhypernode ~ # hypernode-object-storage objects show 1234
Completed 9.7 GiB/~30.0 GiB (118.2 MiB/s) with ~5 file(s) remaining (calculating...)
```

If you run the `show` command after the sync operation has finished, you’ll see output like this:

```console
app@testhypernode ~ # hypernode-object-storage objects show 1234
Process 1234 does not exist anymore
```

## UI option - Control Panel

Coming soon