Neutrino CloudSync is an open-source tool for uploading entire file folders from any host to any cloud.
CloudSync currently integrates with the following cloud storage services:
- Amazon Simple Storage Service (S3)
with plans to add the following storage services in the near future:
- Google Cloud Storage
- Microsoft Azure Blob Storage
- Google Drive
CloudSync ships with a CLI tool to execute operations, using the cloud provider's Go SDK to interact with live
infrastructure.
- Go 1.18+
- Terraform
- AWS IAM user credentials configured with enough permissions to create/update:
- S3 bucket
- KMS key and alias
- IAM user
Optional:
- Make
- AWS CLI
NOTE: CloudSync can use pre-existing cloud infrastructure if desired.
If that is the case, please consider provisioning a new IAM user with enough roles/policies to interact with the blob storage.
This repository contains fully customizable Terraform code (IaC) to provision the required live infrastructure
in your own cloud account.
The code may be found here.
To run this code and provision your own infrastructure, you MUST have the Terraform CLI installed on your admin
host machine (not the actual nodes that will interact with the cloud storage). Furthermore, an S3 bucket and a DynamoDB
table are REQUIRED to persist Terraform state remotely (S3) and to provide a remote locking mechanism (DynamoDB),
enabling collaboration between multiple developers and hence multiple development machines.
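As a sketch only (the bucket, key, and table names below are placeholders, not values from this repository), such a remote backend is declared inside the `terraform` block like this:

```terraform
terraform {
  backend "s3" {
    bucket         = "my-terraform-states"                      # placeholder: pre-existing S3 bucket
    key            = "cloudsync/development/terraform.tfstate"  # placeholder: state object key
    region         = "us-east-1"                                # placeholder: bucket region
    dynamodb_table = "my-terraform-locks"                       # placeholder: pre-existing DynamoDB table
    encrypt        = true
  }
}
```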
If this functionality is not desired, remove the backend configuration from main.tf's `terraform` block, leaving it empty:

```terraform
terraform {
}
```

The following steps are specific to the Amazon Web Services (AWS) cloud provider:
- Go to `deployments/terraform/workspaces/development`.
- Add a `terraform.tfvars` file with the following variables (replace with your actual cloud account data):

```terraform
aws_account    = "0000"
aws_region     = "us-east-N"
aws_access_key = "XXXX"
aws_secret_key = "XXXX"
```

*The IAM user provided requires enough permissions to provision S3 buckets, IAM users and KMS keys with their aliases.

- OPTIONAL: Modify variables in the `variables.tf` file as desired to configure your infrastructure properties.
- Run `terraform plan` and verify that a blob bucket and an encryption key will be created.
- Run `terraform apply` and type `yes` after reviewing all resources to be created.
- OPTIONAL: Go to the cloud console GUI (or use the cloud CLI) and verify that all resources were created with their proper configurations.
NOTE: At this time, the deployed infrastructure is tagged and named using the development stage. This may be
removed through the Terraform files, more specifically in the main.tf file of the development workspace folder.
Neutrino CloudSync compiled binaries are available on the releases page (under the Assets dropdown).
Download the binary according to your machine's OS (Operating System, e.g. Windows, Mac/Darwin or Linux) and CPU architecture (amd64, arm64).
The binary is a CLI program, so it MUST be run from a terminal.
Example:

Linux/Darwin

```shell
user@machine:~ ./cloudsync -h
```

Windows (PowerShell)

```shell
PS C:\Users\aruizeac> .\cloudsync.exe -h
```

The binary may be used as a standalone executable. Nevertheless, it can also be installed so it runs from anywhere in a terminal.
To achieve this, move the binary file to the user's home path:

- Linux/Darwin: `/home/{USERNAME}/.cloudsync`
- Windows: `C:\Users\{USERNAME}\.cloudsync`

Finally, add the previous path to the PATH environment variable.
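On Linux/Darwin, for example, the path can be appended to PATH in the shell profile (the profile filename depends on your shell; `~/.bashrc` below is an assumption for bash):

```shell
# Make cloudsync available in the current session
export PATH="$PATH:$HOME/.cloudsync"

# Persist it for future sessions (bash; use ~/.zshrc for zsh)
echo 'export PATH="$PATH:$HOME/.cloudsync"' >> ~/.bashrc
```

On Windows, the equivalent is editing the PATH entry under System Properties → Environment Variables.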
After completing all previous steps, the CLI application may be run like this:

Linux/Darwin

```shell
user@machine:~ cloudsync -h
```

Windows (PowerShell)

```shell
PS C:\Users\aruizeac> cloudsync -h
```

Notice the program no longer requires the `.exe` suffix or the `./` prefix.
CloudSync will create a new configuration file when running an operation (e.g. the upload command).
This file is created under the user's home path inside a folder named `.cloudsync` (`/home/{USERNAME}/.cloudsync` on Linux/Darwin, `C:\Users\{USERNAME}\.cloudsync` on Windows).
*DO NOT forget to enable the show hidden files/folders feature in your file browser to see this folder.
| Field | Type | Description |
|---|---|---|
| cloud.region | string | Infrastructure region location (e.g. us-east-1, us-west-2, eu-central-1) |
| cloud.bucket | string | Blob storage bucket name |
| cloud.access_key | string | Cloud account access key used to interact with infrastructure |
| cloud.secret_key | string | Cloud account access secret key used to interact with infrastructure |
| scanner.partition_id | string | Identifier used to shard data within the blob storage (auto-generated using ULID and might represent a machine ID) |
| scanner.read_hidden | boolean | Enable scanning for hidden files |
| scanner.deep_traversing | boolean | Enable scanning for child paths |
| scanner.ignored_keys | string list | File or folder names to be ignored by scanner (accepts wildcard patterns, e.g. *.go, *.java) |
| scanner.log_errors | boolean | Enable error logging |
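Purely as an illustration of how the fields above nest (the actual file name, format, and defaults are determined by the tool and may differ; all values below are placeholders), a configuration could look like:

```yaml
cloud:
  region: us-east-1
  bucket: my-cloudsync-bucket
  access_key: XXXX
  secret_key: XXXX
scanner:
  partition_id: 01H8XGJWBWBAQ4ZF8A1C2D3E4F   # auto-generated ULID
  read_hidden: false
  deep_traversing: true
  ignored_keys:
    - "*.go"
    - "*.java"
  log_errors: true
```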
Run the upload command:

```shell
user@machine:~ cloudsync upload -d STORAGE_DRIVER -p DIRECTORY_TO_SYNC
```

For more information about the upload command, please run:

```shell
user@machine:~ cloudsync upload -h
```

Alternatively, run the CLI program using Go and execute the upload command:

```shell
user@machine:~ go run ./cmd/cli/main.go upload -d STORAGE_DRIVER -p DIRECTORY_TO_SYNC
```