SODA Projects Cluster Installation through Ansible
This document describes how to install a local SODA cluster, including the Hotpot, Gelato, Telemetry, Orchestration, and Dashboard components.
All of the installation steps have been tested on Ubuntu 16.04, so please make sure you are running that release. Root privileges are REQUIRED before the installation starts.
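As a quick pre-flight check, the OS release and root requirement above can be verified with a short snippet. This is only a sketch; it reads the standard `/etc/os-release` fields (`NAME`, `VERSION_ID`):

```shell
#!/bin/sh
# Pre-flight sketch: report the OS release and warn when not running as root.
. /etc/os-release
echo "Detected OS: $NAME $VERSION_ID"
if [ "$(id -u)" -ne 0 ]; then
    echo "WARNING: the installation requires root privileges."
fi
```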
- packages

Install the following packages:

```shell
apt-get update && apt-get install -y git make curl wget libltdl7 libseccomp2 libffi-dev gawk
```

- docker

Install docker:
```shell
wget https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_18.06.1~ce~3-0~ubuntu_amd64.deb
dpkg -i docker-ce_18.06.1~ce~3-0~ubuntu_amd64.deb
```

Install docker-compose:

```shell
curl -L "https://github.com/docker/compose/releases/download/1.23.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```

- golang
Check the golang version information (v1.12.x is required):

```shell
root@proxy:~# go version
go version go1.12.1 linux/amd64
```

You can install golang by executing the commands below:
```shell
wget https://storage.googleapis.com/golang/go1.12.1.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.12.1.linux-amd64.tar.gz
echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
echo 'export GOPATH=$HOME/gopath' >> /etc/profile
source /etc/profile
```

Download the installer:

```shell
git clone https://github.com/sodafoundation/installer.git
cd installer/ansible
```
```shell
# Checkout the latest stable release. Current stable release: stable/elba.
# If you want the master branch of all components, you can skip this step.
# (Attn: master may not be stable or fully tested.)
git checkout v0.12.0
```

To install ansible, run the commands below:
```shell
# This step upgrades ansible to version 2.4.2, which is required for the
# "include_tasks" ansible command.
chmod +x ./install_ansible.sh && ./install_ansible.sh
ansible --version # Ansible version 2.4.x is required.
```

First, modify host_ip in group_vars/common.yml; you can also specify which project (hotpot or gelato) to deploy:
```yaml
# This field indicates the local machine host ip
host_ip: 127.0.0.1

# This field indicates which project should be deployed
# ('hotpot', 'gelato' or 'all')
deploy_project: all
```

If you want to integrate SODA hotpot with k8s CSI, change nbp_plugin_type to csi in group_vars/sushi.yml:
```yaml
# 'hotpot_only' is the default integration mode, but you can change it to 'csi'
# or 'flexvolume'
nbp_plugin_type: hotpot_only
```

If lvm is chosen as the storage backend, modify group_vars/osdsdock.yml:
```yaml
enabled_backends: lvm
```

If nfs is chosen as the storage backend, modify group_vars/osdsdock.yml:

```yaml
enabled_backends: nfs
```

If ceph is chosen as the storage backend, modify group_vars/osdsdock.yml:
```yaml
enabled_backends: ceph # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
```

Configure group_vars/ceph/all.yml, for example:

```yaml
ceph_origin: repository
ceph_repository: community
ceph_stable_release: luminous # Choose luminous as the default version
public_network: "192.168.3.0/24" # Run 'ip -4 address' to check the ip address
cluster_network: "{{ public_network }}"
monitor_interface: eth1 # Change to the network interface on the target machine
devices: # For ceph devices, append ONE or MULTIPLE devices as below:
#- '/dev/sda' # Ensure this device exists and is available if ceph is chosen
#- '/dev/sdb' # Ensure this device exists and is available if ceph is chosen
osd_scenario: collocated
```

If cinder is chosen as the storage backend, modify group_vars/osdsdock.yml:
```yaml
enabled_backends: cinder # Change it according to the chosen backend. Supported backends include 'lvm', 'ceph', and 'cinder'.
# Use block-box install cinder_standalone if true, see details in:
use_cinder_standalone: true
```

Configure the auth and pool options for accessing cinder in group_vars/cinder/cinder.yaml. No additional configuration changes are needed when using cinder standalone.
NOTE: Please ensure that you are using OpenSDS version >= v0.6.1.
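The group_vars edits described in the sections above can also be made non-interactively. Below is a minimal sketch using sed; a temporary sample file stands in for the real group_vars/common.yml, and the 192.168.3.10 address is only an example:

```shell
# Sketch: set host_ip in a common.yml-style file without opening an editor.
# CONF is a temporary sample here; on the installer host, point it at
# installer/ansible/group_vars/common.yml instead.
CONF=$(mktemp)
printf 'host_ip: 127.0.0.1\ndeploy_project: all\n' > "$CONF"

HOST_IP=192.168.3.10   # example address; replace with your machine's real IP
sed -i "s/^host_ip:.*/host_ip: ${HOST_IP}/" "$CONF"
grep '^host_ip:' "$CONF"   # host_ip: 192.168.3.10
```

The same pattern works for the other single-key toggles (deploy_project, enable_telemetry_tools, enable_orchestration); always keep a backup of the file before editing it in place.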
Update the file ansible/group_vars/telemetry.yml and change the value of enable_telemetry_tools to true:

```yaml
# Do you need to install or clean up telemetry tools?
enable_telemetry_tools: false
```

Update the file ansible/group_vars/orchestration.yml and change the value of enable_orchestration to true:

```yaml
# Install Orchestration Manager (true/false)
enable_orchestration: false
```

The HOST_IP environment variable has to be set to your local machine's IP address:
```shell
export HOST_IP={your_real_host_ip}
echo $HOST_IP
```

Check that ansible can reach the host, then run the playbook:

```shell
ansible all -m ping -i local.hosts
ansible-playbook site.yml -i local.hosts
```
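To fill in {your_real_host_ip}, the ceph example above suggests running `ip -4 address`. One way to script the extraction is sketched below; the sample string stands in for real command output, and `eth1` is only an example interface name:

```shell
# Sketch: pull the IPv4 address out of `ip -4 address` style output.
# On a real host, replace the sample string with: IP_OUT=$(ip -4 address show eth1)
IP_OUT="2: eth1    inet 192.168.3.10/24 brd 192.168.3.255 scope global eth1"
HOST_IP=$(echo "$IP_OUT" | awk '{for (i = 1; i <= NF; i++) if ($i == "inet") print $(i + 1)}' | cut -d/ -f1)
export HOST_IP
echo "$HOST_IP"   # 192.168.3.10
```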
```shell
# You can use the -vvv option to enable verbose output and debug mode.
ansible-playbook site.yml -i local.hosts -vvv
```

First, configure the SODA projects CLI tool:
```shell
sudo cp /opt/opensds-hotpot-linux-amd64/bin/osdsctl /usr/local/bin/

export OPENSDS_ENDPOINT=http://{your_real_host_ip}:50040
export OPENSDS_AUTH_STRATEGY=keystone
export OS_AUTH_URL=http://{your_real_host_ip}/identity
export OS_USERNAME=admin
export OS_PASSWORD=opensds@123
export OS_TENANT_NAME=admin
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_ID=default

osdsctl pool list # Check if the pool resource is available
```

Then create a default profile:
```shell
osdsctl profile create '{"name": "default", "description": "default policy", "storageType": "block"}'
```

Create a volume:

```shell
osdsctl volume create 1 --name=test-001
```

List all volumes:

```shell
osdsctl volume list
```

Delete the volume:

```shell
osdsctl volume delete <your_volume_id>
```
Create a default fileshare profile:

```shell
osdsctl profile create '{"name":"default_fileshare", "description":"default policy for fileshare", "storageType":"file"}'
```

Create a fileshare:

```shell
osdsctl fileshare create 1 -n "test_fileshare" -p <profile_id>
```

List all fileshares:

```shell
osdsctl fileshare list
```

Delete the fileshare:

```shell
osdsctl fileshare delete <fileshare_id>
```
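The `<profile_id>` needed by `osdsctl fileshare create` can be extracted from the profile listing in a script. This is only a sketch: the sample table below is an assumed output layout, so pipe the real `osdsctl profile list` output and verify the columns before relying on it:

```shell
# Sketch: extract the id of the "default_fileshare" profile from tabular
# `osdsctl profile list` output. LIST_OUT is assumed sample output -- on a
# real host, use: LIST_OUT=$(osdsctl profile list)
LIST_OUT='Id                                    Name               StorageType
b8f71f0c-0000-4111-8222-333344445555  default_fileshare  file'
PROFILE_ID=$(echo "$LIST_OUT" | awk '$2 == "default_fileshare" {print $1}')
echo "$PROFILE_ID"   # b8f71f0c-0000-4111-8222-333344445555
```

The extracted id can then be passed directly, e.g. `osdsctl fileshare create 1 -n "test_fileshare" -p "$PROFILE_ID"`.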
The SODA Dashboard UI is available at http://{your_host_ip}:8088; please log in using the default admin credentials: admin/opensds@123. Create tenants, users, and profiles as admin. The Multi-Cloud service is also supported by the dashboard.
Log out of the dashboard as admin and log back in as a non-admin user to manage storage resources:
- Create volume
- Create snapshot
- Expand volume size
- Create volume from snapshot
- Create volume group
- Create fileshare
- Create snapshot
- Set access permission on fileshare (ip based access permissions are allowed)
- Register object storage backend
- Create bucket
- Upload object
- Download object
- Migrate objects across clouds at bucket granularity
- Create lifecycle for buckets
We would be grateful if you could report any bugs or issues you find.
To purge and clean the environment, run:

```shell
ansible-playbook clean.yml -i local.hosts

# You can use the -vvv option to enable verbose output and debug mode.
ansible-playbook clean.yml -i local.hosts -vvv
```

If ceph was deployed, purge the ceph cluster as well:

```shell
cd /opt/ceph-ansible
sudo ansible-playbook infrastructure-playbooks/purge-cluster.yml -i ceph.hosts
```

In addition, clean up the logical partitions on the physical block device used by ceph, using the fdisk tool.

```shell
sudo rm -rf /opt/ceph-ansible
```
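The partition cleanup mentioned above can also be scripted. Below is a destructive sketch that simply zeroes the start of the device; a scratch image file stands in for the real device here, and on a real host you would set DISK to the ceph data device (e.g. /dev/sdb) only after double-checking it:

```shell
# Sketch: zero the first 1 MiB of a ceph data device to clear its partition
# table. DISK is a scratch file here; pointing it at a real device destroys
# that device's partition table -- double-check before running.
DISK=$(mktemp)
dd if=/dev/zero of="$DISK" bs=512 count=2048 status=none
echo "cleared partition area of $DISK"
```

On a real device, `wipefs -a` or fdisk's delete-partition commands are the more targeted tools; note that zeroing only the start of a disk does not remove a GPT backup header at the end.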