This guide walks you through setting up a self-hosted MinIO object storage solution on your Synology NAS using Container Manager. This provides a fully private, S3-compatible storage backend suitable for backend services like Flask, Node.js, or Python scripts.
- Open File Station on your NAS
- Create a folder:
  - Name: `your_name`
  - Location: `/volume1/minio_data`
  - Permissions: Grant read/write to the appropriate user
- Open Package Center
- Search for and install Container Manager
- Open Container Manager
- Go to the Registry tab and search for `minio`
- Download the official `minio/minio:latest` image
- Container Name: `minio`
- Image: `minio/minio:latest`
- Enable auto-restart: ✅
| Local Port | Container Port | Protocol |
|---|---|---|
| 9000 | 9000 | TCP |
| 9001 | 9001 | TCP |
| Host Path | Container Path | Read-Only |
|---|---|---|
| /volume1/minio_data | /data | ❌ No |
Set the environment variables:

```
MINIO_ROOT_USER=your-access-key
MINIO_ROOT_PASSWORD=your-secret-key
```

Set the execution command:

```
server /data --console-address :9001
```

- Network: Use default bridge mode
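If you prefer Container Manager's Project tab (or plain Docker Compose over SSH) to the step-by-step UI, the settings above translate into a single compose file. This is a sketch; the service name is arbitrary, and you should substitute your own paths and credentials:

```yaml
version: "3"
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    restart: unless-stopped          # mirrors "Enable auto-restart"
    ports:
      - "9000:9000"                  # S3 API
      - "9001:9001"                  # Web console
    volumes:
      - /volume1/minio_data:/data
    environment:
      MINIO_ROOT_USER: your-access-key
      MINIO_ROOT_PASSWORD: your-secret-key
    command: server /data --console-address ":9001"
```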
- Access the MinIO Console:
  - URL: `http://<your-nas-ip>:9001`
- Login using the access key and secret key defined earlier
- Create a bucket:
  - Example: `app-file`
- Create an Access Key:
  - Go to Access > Users and create a new user
  - Save the generated Access & Secret keys securely
- Assign a custom policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::app-file",
        "arn:aws:s3:::app-file/*"
      ]
    }
  ]
}
```

Connect to MinIO from Python with `boto3`:

```python
import boto3
from botocore.client import Config

# S3-compatible client pointed at the MinIO API port (9000)
s3 = boto3.client(
    's3',
    endpoint_url='http://<your-nas-ip-address>:9000',
    aws_access_key_id='your-access-key',
    aws_secret_access_key='your-secret-access-key',
    config=Config(signature_version='s3v4'),
    region_name='us-east-1'
)
```
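If you provision buckets from code or scripts, the custom policy shown earlier can be generated per bucket rather than hand-edited. A minimal sketch using only the standard library (the `bucket_policy` helper is illustrative, not part of any MinIO SDK):

```python
import json

def bucket_policy(bucket: str) -> str:
    """Render the read/write policy shown above for a given bucket name."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                f"arn:aws:s3:::{bucket}",   # bucket-level actions (ListBucket)
                f"arn:aws:s3:::{bucket}/*"  # object-level actions
            ]
        }]
    }, indent=2)

print(bucket_policy("app-file"))
```

The output can be pasted directly into the policy editor in the MinIO Console.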
Upload a file:

```python
s3.upload_file(
    Filename='test-image2.jpg',
    Bucket='app-file',
    Key='uploads/test-image2.jpg',
    ExtraArgs={'ContentType': 'image/jpeg'}
)
```

Generate a presigned URL so clients can fetch an object without credentials:

```python
def get_presigned_image_url(image_key: str, expires_in: int = 86400) -> str:
    return s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'app-file', 'Key': image_key},
        ExpiresIn=expires_in
    )

image_key = 'uploads/test-image.jpg'
url = get_presigned_image_url(image_key)
print("Image URL:", url)
```

Delete an object:

```python
def delete_image(image_key: str):
    response = s3.delete_object(Bucket='app-file', Key=image_key)
    return response

image_key = "uploads/test-image.jpg"
delete_image(image_key)
```

You can now:
- Store permanent image keys in your DB
- Upload, view, and delete images securely
- Avoid paying for AWS S3 or Azure Blob. Full private cloud on your NAS!