Evenly distribute the data among many directories! #4560
gilbertoferreira started this conversation in General (1 comment, 1 reply)
-
You can continue without doing a rebalance as well. If you do rebalance, there might be a performance impact during the rebalance window. The disks won't be read-only.
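For reference, a minimal sketch of starting and watching such a rebalance, assuming the volume is named VOL (substitute the real volume name):

# Start a full rebalance; the volume stays online and writable while it runs.
gluster volume rebalance VOL start

# Check per-node progress: files scanned, files rebalanced, failures, status.
gluster volume rebalance VOL status

# If the load becomes a problem, stop it now and run start again later.
gluster volume rebalance VOL stop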
-
Hi there.
I have two servers with the following disk layout:
/dev/sdc1 1,8T 1,6T 178G 91% /data1
/dev/sdd1 1,8T 1,6T 175G 91% /data2
/dev/sdb1 1,8T 225G 1,6T 13% /data3
/dev/sdf1 1,8T 108G 1,7T 7% /data4
/dev/sdg1 1,8T 108G 1,7T 7% /data5
/dev/sdh1 1,8T 105G 1,7T 6% /data6
All these directories, data1 through data6, are used to create a Gluster distribute-replicate volume.
As you can see, /data1 and /data2 are almost full.
The question is:
Can I run gluster vol rebalance VOL start to fix this?
I mean, will it distribute the data evenly across all the other directories?
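For context, Gluster distinguishes a layout-only fix from a full data migration; a rough sketch, again with VOL standing in for the real volume name:

# fix-layout only recomputes the directory hash ranges so that NEW files
# start landing on the added bricks; existing files are not moved.
gluster volume rebalance VOL fix-layout start

# A plain start fixes the layout AND migrates existing files, which is
# what actually evens out usage across /data1 through /data6.
gluster volume rebalance VOL start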
Initially I had 3 directories on both servers, like:
server1|server2
/data1
/data2
/data3
Some time after that, about 6 months after creating the GlusterFS volume, I added 3 more disks to both servers (a rough sketch of that add-brick step follows the list below).
Now that's how it is:
server1|server2
/data1
/data2
/data3
/data4
/data5
/data6
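For illustration, that expansion probably looked something like the sketch below; the brick sub-paths and replica count are assumptions, since only the mount points are shown above, and a replica volume needs bricks added in multiples of the replica count:

# Hypothetical add-brick commands for a replica-2 distribute-replicate
# volume named VOL; server names and brick paths are illustrative.
gluster volume add-brick VOL server1:/data4/brick server2:/data4/brick
gluster volume add-brick VOL server1:/data5/brick server2:/data5/brick
gluster volume add-brick VOL server1:/data6/brick server2:/data6/brick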
I never performed any rebalance or fix-layout, because it was never needed.
Now that 2 directories are nearly full, this situation comes up!
Is there any impact if I take too long to perform a rebalance?
Honestly, I never know whether a rebalance is needed when adding new bricks.
This should be in the gluster add-brick command output, something like: "You'll need to rebalance or fix the layout of your volume after add-brick."
My worry is whether the VM disks in use will become read-only during the rebalance procedure.
I have more than 20 VMs with Linux and Windows, using Proxmox VE QEMU.
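As a hedged aside, one knob that may limit that impact: newer GlusterFS releases expose a cluster.rebal-throttle volume option (lazy, normal, aggressive) that controls how aggressively the rebalance migrates files in parallel:

# Throttle the rebalance to reduce I/O contention with running VMs;
# "lazy" uses the fewest migration threads (VOL is the volume name).
gluster volume set VOL cluster.rebal-throttle lazy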
Best regards.