Description
I'm following the tutorial from https://docs.opennebula.io/7.0/software/installation_process/automatic_installation_with_onedeploy/one_deploy_tutorial_local_ds/#one-deploy-local
I have one front-end and two hypervisors, and my example.yml file is the following:
# example.yml
---
all:
  vars:
    ansible_user: root
    ansible_private_key_file: /home/ubuntu/.ssh/id_ed25519
    ansible_python_interpreter: /usr/bin/python3
    one_version: '7.0'
    one_pass: opennebulapass
    ansible_remote_tmp: /tmp/ansible-tmp
    vn:
      admin_net:
        managed: true
        template:
          VN_MAD: 802.1Q
          PHYDEV: enp75s0f2 # ensure this exists on the nodes; otherwise use ens*, enp* etc.
          BRIDGE: br1091
          VLAN_ID: 1091
          AR:
            TYPE: IP4
            IP: 172.30.0.100
            SIZE: 48
          NETWORK_ADDRESS: 172.30.0.0
          NETWORK_MASK: 255.255.255.0
          GATEWAY: 172.30.0.1
          DNS: 1.1.1.1

frontend:
  hosts:
    f1:
      ansible_host: 192.168.123.47 # keep it if you like, not used by local
      ansible_connection: local
      ansible_python_interpreter: /usr/bin/python3

node:
  hosts:
    n1: { ansible_host: 172.27.13.247 }
    n2: { ansible_host: 172.27.13.246 }
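In case it helps, this is how I sanity-check that the inventory parses and that the hosts land in the expected groups (same example.yml as above; the expected output in the comments is my assumption):

ansible-inventory -i example.yml --graph
# should show @frontend with f1 and @node with n1, n2
ansible-inventory -i example.yml --host n1
# prints the variables resolved for n1 (ansible_user, key file, etc.)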
My ansible.cfg is the following:
[defaults]
inventory=./example.yml
gathering=explicit
host_key_checking=false
display_skipped_hosts=true
retry_files_enabled=false
any_errors_fatal=true
stdout_callback=yaml
timeout=30
collections_paths=/home/ubuntu/one-deploy/ansible_collections
[ssh_connection]
pipelining=true
ssh_args=-q -o ControlMaster=auto -o ControlPersist=60s
[privilege_escalation]
become = true
become_user = root
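To make sure this ansible.cfg is the one actually being picked up, I run the following from the same directory (just a quick check):

ansible-config dump --only-changed
# should list only the settings changed above: inventory, pipelining, become, ...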
After running the main ansible-playbook, the play recap shows that the front-end and both hypervisors are reachable:
PLAY RECAP
f1 : ok=75 changed=0 unreachable=0 failed=0 skipped=85 rescued=0 ignored=0
n1 : ok=37 changed=0 unreachable=0 failed=0 skipped=66 rescued=0 ignored=0
n2 : ok=36 changed=0 unreachable=0 failed=0 skipped=57 rescued=0 ignored=0
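An ad-hoc check with Ansible's ping module (an SSH round-trip test, not ICMP) also succeeds against all three hosts:

ansible all -m ansible.builtin.ping
# expected: f1, n1 and n2 each reply with "pong"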
But when I try to export a new image from the marketplace:
onemarketapp export -d default 'Alpine Linux 3.17' alpine
I get the following error:
Error executing image transfer script: copying opennebula-frontend:/var/lib/one/datastores/1/bfe4ce8... see more details in VM log
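For extra context, I also inspected the image and datastore state; take the comments as my own reading of it, but the IDs line up with the paths in the VM log below:

oneimage list
# the exported alpine image is registered in datastore 1 (default), so the download itself seems fine
onedatastore list
# 0 = system datastore, 1 = default (image) datastore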
I have checked the connectivity between the front-end and the hypervisors and it seems fine: I can SSH from the front-end to each hypervisor passwordlessly. I tried accessing the VM logs on the hypervisors, but they don't seem to exist there; this is what I found on the front-end:
oneadmin@opennebula-frontend:~$ onevm list
    ID USER     GROUP    NAME     STAT CPU  MEM HOST           TIME
     5 oneadmin oneadmin alpine-5 fail   1 128M 172.27.13.246  0d 00h22
     4 oneadmin oneadmin alpine-4 fail   1 128M 172.27.13.247  0d 00h29
oneadmin@opennebula-frontend:/var/log/one$ cat 5.log
Mon Sep 29 13:47:51 2025 [Z0][VM][I]: New state is ACTIVE
Mon Sep 29 13:47:51 2025 [Z0][VM][I]: New LCM state is PROLOG
Mon Sep 29 13:47:53 2025 [Z0][TrM][I]: Command execution failed (exit code: 2): /var/lib/one/remotes/tm/local/clone opennebula-frontend:/var/lib/one//datastores/1/bfe4ce81b1979b56bc11e455829f330f 172.27.13.246:/var/lib/one//datastores/0/5/disk.0 5 1
Mon Sep 29 13:47:53 2025 [Z0][TrM][I]: clone: Cloning /var/lib/one/datastores/1/bfe4ce81b1979b56bc11e455829f330f to /var/lib/one/datastores/0/5/disk.0
Mon Sep 29 13:47:53 2025 [Z0][TrM][I]: copying opennebula-frontend:/var/lib/one/datastores/1/bfe4ce81b1979b56bc11e455829f330f to 172.27.13.246:/var/lib/one/datastores/0/5/disk.0 from 172.27.13.246 (format: qcow2)
Mon Sep 29 13:47:53 2025 [Z0][TrM][E]: clone: copying opennebula-frontend:/var/lib/one/datastores/1/bfe4ce81b1979b56bc11e455829f330f to 172.27.13.246:/var/lib/one/datastores/0/5/disk.0 from 172.27.13.246 (format: qcow2)
Mon Sep 29 13:47:53 2025 [Z0][TrM][E]: clone: [STDOUT] "~/datastores/0/5 ~\n"
Mon Sep 29 13:47:53 2025 [Z0][TrM][E]: clone: [STDERR] "ssh: Could not resolve hostname opennebula-frontend: Name or service not known\ntar: This does not look like a tar archive\ntar: Exiting with failure status due to previous errors\n"
Mon Sep 29 13:47:53 2025 [Z0][TrM][E]: Error executing image transfer script: copying opennebula-frontend:/var/lib/one/datastores/1/bfe4ce81b1979b56bc11e455829f330f to 172.27.13.246:/var/lib/one/datastores/0/5/disk.0 from 172.27.13.246 (format: qcow2)
Mon Sep 29 13:47:53 2025 [Z0][VM][I]: New LCM state is PROLOG_FAILURE
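Reading the [STDERR] line above, my guess is that the node cannot resolve the front-end's hostname (opennebula-frontend), so the ssh/tar copy performed by the tm/local clone script receives nothing. This is a sketch of how I would verify that from n2 (hostname and user taken from the logs above):

# on n2 (172.27.13.246):
getent hosts opennebula-frontend
# no output here would mean the node cannot resolve the front-end's name
sudo -u oneadmin ssh -o BatchMode=yes opennebula-frontend true
# the TM drivers run as oneadmin and rely on passwordless ssh between hosts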
Could you give me a hand with this, please? Thanks a lot for your support.