Replies: 3 comments 1 reply
-
Any ideas on this? @scineram, I noticed the thumbs down and I'm curious why. Thanks, Steve
-
Hmm... I'm not sure how this was possible:
When I do the same thing with zfs-2.1.15, I get:
I can't even force it:
My only thoughts:
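(For reference, the kind of attempt described above, sketched with placeholder names; the actual commands and output are omitted here:)

```sh
# Sketch only; "tank" and the device names are placeholders.

# Take a healthy disk out of one raidz2 group:
zpool offline tank disk_A

# Try to use it to replace a failed disk in another raidz2 group.
# On zfs-2.1.15 this is refused, presumably because disk_A is still
# labeled as part of the active pool:
zpool replace tank failed_disk disk_A

# Forcing it does not help either:
zpool replace -f tank failed_disk disk_A
```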
-
This has been resolved by using "detach" a couple of times. I tried a number of things and finally tried:
This got rid of the "old" entry in the "replacing-9" section, so that only Slot_30 and Slot_46 remained in it, and Slot_30 was also still listed in raidz2-3. I then ran it again, and it got rid of the "replacing-9" section altogether; raidz2-2 was ONLINE with all 10 drives ONLINE, and Slot_30 was now listed just once, as OFFLINE, in raidz2-3. Finally, I replaced Slot_30 with itself with:
and it is resilvering. Everything looks to be back where it should be. Steve
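For anyone who runs into the same thing, the recovery was along these lines (just a sketch; "tank" stands in for the real pool name, and the detaches likely have to target the vdev GUIDs shown by "zpool status -g", since Slot_30 appeared twice in the pool):

```sh
# Sketch of the recovery described above; "tank" is a placeholder
# for the real pool name, and <guid-...> are placeholders for the
# vdev GUIDs shown by `zpool status -g`.

# 1. Detach the "old" placeholder from the replacing-9 vdev:
zpool detach tank <guid-of-old-entry>

# 2. Detach Slot_30's entry inside replacing-9, which collapses the
#    replacing vdev and leaves Slot_46 in raidz2-2:
zpool detach tank <guid-of-Slot_30-in-replacing-9>

# 3. Replace the still-OFFLINE Slot_30 in raidz2-3 with itself and let
#    it resilver (the one-argument form replaces a device with itself):
zpool replace tank Slot_30
```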
-
On a CentOS 7 system (I know... we are migrating...) with zfs-2.0.7-1, I have a pool made up of a number of 10-drive raidz2 groups. It is at a remote location, and I noticed that raidz2-2 had two bad drives. The system didn't have any spare drives available and the local technician wasn't going to be on site for a few days, so I decided to take one drive offline from another raidz2 group and use it to replace a drive in the group with two bad drives. What happened, though, was that the drive, "Slot_30", ended up being listed in both raidz2 groups.

Once I had more redundancy, I tried to fix it by offlining Slot_30 and replacing it with another drive, Slot_46. That's where it stands, but the layout is even more strange, with a "replacing-9" section that has three devices, two of which are OFFLINE. Resilvering has completed, and I scrubbed and rebooted hoping that would clear things up. It hasn't.
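Roughly, the sequence was along these lines (just a sketch; "tank" stands in for the real pool name, and the exact flags may have differed; the real commands are in the "zpool history" output below):

```sh
# Sketch only; "tank" is a placeholder for the actual pool name.

# Take a healthy drive (Slot_30) offline in raidz2-3 to free it up:
zpool offline tank Slot_30

# Use it to replace the failed Slot_29 in raidz2-2
# (this may have needed -f, since Slot_30 was still labeled as part of the pool):
zpool replace tank Slot_29 Slot_30

# Later, trying to back that out: offline Slot_30 and replace it with Slot_46:
zpool offline tank Slot_30
zpool replace tank Slot_30 Slot_46
```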
The original bad drive that I think is now labeled as "old" was Slot_29. Here is what I see in these groups:
The "**" marks are an attempt to highlight Slot_30 in both groups but it doesn't seem to work in a "code" section.
This is what I see in "zpool history":
I obviously did something wrong. Right now I'd like to get the pool back to where it should be, but I'd also like to understand what I did wrong so it doesn't happen again.
If anyone has any ideas, please let me know.
Thanks,
Steve