Conversation

djurny commented Jun 9, 2025

If database.db cannot be identified as JSON content, rename the file out of the way and start without a database.db.

I have been using this for a while now, as I suffered a lot from database.db becoming corrupted due to either sudden power loss or kernel hang-ups.

I'm not sure how I can spin up a Docker image based on this PR, though...

Nerivec (Collaborator) commented Jun 10, 2025

database.db is not a JSON file; it's newline-separated rows of JSON data.
Database handling lives in herdsman:
https://github.com/Koenkk/zigbee-herdsman/blob/2da206da4b4376c22b192d4d15a6e4bef389e996/src/controller/database.ts#L19-L43
Sanity checks should be handled in that function (it already has some).
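To illustrate the distinction: the file as a whole is not a valid JSON document, but each individual row is. A minimal sketch with made-up entries (the real parsing lives in herdsman's database.ts, not in this snippet):

```typescript
// database.db holds newline-separated rows of JSON, so parsing the whole
// file as a single JSON document fails, while each row parses on its own.
// (Illustrative data only, not real herdsman entries.)
const db = '{"id":1,"type":"Coordinator"}\n{"id":2,"type":"Router"}\n';

let wholeFileIsJson = true;
try {
    JSON.parse(db); // throws: two top-level values in one string
} catch {
    wholeFileIsJson = false;
}
console.log(wholeFileIsJson); // false

// Per-row parsing succeeds, skipping the trailing empty line.
const rows = db
    .split("\n")
    .filter((line) => line.trim())
    .map((line) => JSON.parse(line));
console.log(rows.length); // 2
```

This is why generic "is it JSON?" content detection gets tricked: every line looks like JSON, but the file itself is NDJSON.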

djurny (Author) commented Jun 10, 2025

Hi @Nerivec,
let me rephrase in that case:

In case database.db cannot be identified as New Line Delimited JSON text data, rename the file 'out of the way' and start without a database.db.

/app $ file data/database.db
data/database.db: New Line Delimited JSON text data
/app $

I'll close this PR and check out herdsman to see if it might benefit from this check (as I certainly benefited from it!).
Groetjes,

djurny closed this Jun 10, 2025
djurny (Author) commented Jun 10, 2025

Hi @Nerivec,
This whole thing might not be needed anymore, after seeing fix: Improve loops performance PR#1130, where the entire database.db reading & parsing is wrapped in a try/catch construct.
Groetjes,

djurny deleted the feat/database-sanity-check branch June 10, 2025 03:31
Nerivec (Collaborator) commented Jun 10, 2025

I was mostly worried about the mimetype detection you were using (since the file is not supposed to be detected as JSON; it looks like the detection gets tricked because the content is close enough).

Do you happen to have a corrupted database.db lying around to confirm that the per-row JSON parsing try/catch solves the problem? I'm not sure what your corruption scenario produces; I've never hit the case myself (I only added the try/catch because it's safer). Currently, a bad row gets ignored, but we might be able to "fix" the row instead (or at least handle some well-defined cases).

djurny (Author) commented Jun 10, 2025

Hi @Nerivec,
I checked my backups and could only find a zero-size database.db, which I might have used myself to check that the mechanism works. Most of my troubles went away near the end of 2023 when I implemented some workarounds (a different filesystem, a move from SD card to USB drive, a move from local storage to NFS to GlusterFS), but due to my backup clear-out schedule I no longer have any proper faulty files at hand. From what I can recall, the pattern was very similar if not identical to a 'regular' ext power-cut effect: lines of NULs in the file, or just an empty file altogether.
Sorry about that.
Groetjes,

Nerivec (Collaborator) commented Jun 10, 2025

having lines of NULs in the file or just an empty file altogether

Sounds like the try/catch should take care of that, then. It won't load those lines, so the next save will clear them out of the .db. 👍
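The clean-up effect described above can be sketched as follows: an unparsable row (e.g. a NUL-filled line after a power cut) is dropped at load time, so re-serializing the in-memory rows writes a clean file. These are hypothetical helpers for illustration, not the actual herdsman code:

```typescript
// Sketch: skip unparsable rows at load; the next save then rewrites only
// the valid rows, clearing the corruption out of database.db.
// (Illustrative helpers and data, not the actual herdsman implementation.)
function load(raw: string): object[] {
    const rows: object[] = [];
    for (const line of raw.split("\n")) {
        try {
            if (line.trim()) rows.push(JSON.parse(line));
        } catch {
            // Corrupted row (e.g. NULs): ignore rather than abort the load.
        }
    }
    return rows;
}

function save(rows: object[]): string {
    return rows.map((r) => JSON.stringify(r)).join("\n") + "\n";
}

// A NUL-filled middle row, as after a power cut:
const corrupted = '{"id":1}\n\u0000\u0000\n{"id":2}\n';
console.log(save(load(corrupted))); // '{"id":1}\n{"id":2}\n'
```

Note that `line.trim()` does not strip NUL characters (they are not JSON/JS whitespace), so a NUL-filled line reaches `JSON.parse`, throws, and is swallowed by the catch.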
