Replies: 5 comments 10 replies
-
That parameter doesn't exist on a ZFS version that ancient. Try a newer one.
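A quick way to confirm whether the loaded module knows the tunable at all is to look for it under sysfs (a hedged sketch; the path is the standard Linux module-parameter location):

```shell
# If this file is absent, the loaded ZFS module predates the
# zfs_max_missing_tvds tunable, and writing to it can never work.
PARAM=/sys/module/zfs/parameters/zfs_max_missing_tvds
if [ -e "$PARAM" ]; then
    MSG="zfs_max_missing_tvds = $(cat "$PARAM")"
else
    MSG="zfs_max_missing_tvds not present in this ZFS version"
fi
echo "$MSG"
```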
-
You may have to have a look at ZFS on disk format. This is the first link
in Google on this topic: http://www.giis.co.in/Zfs_ondiskformat.pdf
You may also want to put sparse images of the disks of the broken pool on
ZFS and snapshot them first before doing experiments on the data. You may
need a lot of extra space for that. Have a look at zdb command if you
didn't do that yet.
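The sparse-image idea can be sketched roughly like this (device names, destination path, and dataset name are all assumptions -- adjust to your layout):

```shell
# Copy each disk of the broken pool into a sparse image file, so zdb
# experiments run against the images and the originals stay untouched.
backup_disks() {
    dest=$1; shift
    mkdir -p "$dest"
    for dev in "$@"; do
        # conv=sparse seeks over all-zero input blocks instead of writing
        # them, so the image only consumes space for blocks holding data
        dd if="$dev" of="$dest/$(basename "$dev").img" bs=1M conv=sparse
    done
}
# Hypothetical usage: backup_disks /backup/images /dev/sdb /dev/sdc
# Then snapshot the backing ZFS dataset to get a rollback point:
#   zfs snapshot backup/images@pristine
```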
…On Sun, 15 Jan 2023, 13:13 Corey McGregor, ***@***.***> wrote:
Okay, could be a bit of a challenge, haha.
As I said, no data has been written to the accidentally added device -- at
least not outside of zpool interactions.
When a device is added to the pool, are all the blocks and their metadata
rewritten by the add (or attach?) subcommand to include the new vdev? Or is
the vdev only added to blocks with data on the new device?
If it is rewritten when the command runs to add the new device, I could
maybe reverse engineer that process from the source. Depending how complex
it is.
If it's not rewritten, perhaps there are far fewer affected blocks, but I'd still
need help identifying them.
Honestly, I may need help identifying a few things, like where in the
source the block and metadata format is specified, and where the CLI
command source links through to the code that actually writes to disk. I'm
not familiar at all with how kernel modules work (or the current state of C
development 😅), though I do have a bunch of time up my sleeve.
That being said, perhaps you're right and it's not worth the time versus
recreating the pool. Especially with only 3 people potentially running into
this scenario. I feel I've already taken too much of your time.
-
-e is for exported pools.
…On Sun, Jan 15, 2023 at 6:33 PM Corey McGregor ***@***.***> wrote:
Hi @IvanVolosyuk <https://github.com/IvanVolosyuk>, thanks for the
direction 🙏🏻
I may be wrong, but it appears zdb unfortunately only works with
imported/importable pools? 🤷🏻 I tried a few options (-C, -d, -S from
the manpage examples) and they all responded with: "zdb: can't open 'tank':
No such file or directory"
The on disk format document looks quite helpful though, thanks!
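For the record, the direct-scan invocation looks roughly like this (a sketch; "tank" is the pool name from this thread, and the image directory is hypothetical):

```shell
# -e tells zdb to scan devices for the pool configuration instead of
# consulting an imported pool / the zpool.cache file; -p points the
# scan at a directory of device nodes or image files.
if command -v zdb >/dev/null 2>&1; then
    zdb -e tank || echo "zdb could not open pool 'tank'"
    # or, against sparse image copies instead of /dev:
    # zdb -e -p /backup/images tank
else
    echo "zdb not installed here; commands above are illustrative"
fi
```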
-
You're aware zpool import -F -X is very destructive, right?
…On Sun, Jan 15, 2023 at 8:43 PM Corey McGregor ***@***.***> wrote:
Thanks, I'm currently waiting for zpool import -F -X to see if it can
tell me anything useful.
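There are gentler invocations worth trying first, since -X may rewind through and discard many recent transaction groups (a sketch, assuming the pool name "tank" from this thread):

```shell
# Safer options to try before `zpool import -F -X`:
if command -v zpool >/dev/null 2>&1; then
    # -n makes -F a dry run: report whether a rewind could recover the
    # pool, without actually rewinding or writing anything
    zpool import -F -n tank || true
    # import read-only so nothing on the vdevs is modified
    zpool import -o readonly=on -F tank || true
else
    echo "zpool not installed here; commands above are illustrative"
fi
```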
-
I was surprised to learn one cannot add a device to an existing raidz configuration, though. Not to worry. Thanks for all the help 🙏🏻
-
Hello,
I've fallen victim to this mistake, and before researching more carefully how to rectify it, I made things worse: the pool is now unimportable, with an unavailable device. Worse still, I had already erased the new device's partition table in preparation to move data around and re-create the pool, before realising my first mistake.
Is it at all possible to revert the configuration to its state prior to the device being incorrectly added? Nothing was ever written to the new device, so I'm hoping there might be a way to roll back the configuration, or manually hack it by hand?
This is the current state of the pool:
Any attempt to import it with any combination of flags yields the same:
From the 2 discussions I referenced above, I could only try changing zfs_max_missing_tvds with
echo 1 > /sys/module/zfs/parameters/zfs_max_missing_tvds
but for some reason I cannot write to it 🤔
I'm currently using a standard install of [email protected], installed via apt on [email protected], which I see is significantly behind the current release. If there is anything to be gained by upgrading to help resolve this, please let me know.
Thanks in advance 🙏🏻