Conversation

@Moin2002-tech
Collaborator

So, the reason the app is crashing is actually a classic race condition. The best way to handle this is to put the lock inside the main importFromFile_Json function itself, rather than just in the async one. Even though asyncImportFromFile_Json is what kicks off the new thread, it’s really just a wrapper. If we only put the lock there, we’d still be in trouble if we ever called the regular import directly while an async task was already running in the background.

By adding std::lock_guard<std::mutex> lock(mutex_); inside importFromFile_Json itself, we make sure that no matter how it gets called, it always waits its turn before touching shared data like the items map or the undoHistory.

Looking at the logs, it seems like the crash happened because one thread was trying to clear the items while another was right in the middle of adding "beta" or "alpha." They ended up stepping on each other's toes in memory, which is a total recipe for a segfault. If we place the lock right after the JSON file is read but before we do items.clear(), we're golden. This way, we don't block other threads while the computer is doing the slow work of reading from the disk, but we're fully protected the second we start changing the actual manager state. It’s the same logic used in addItem, so it keeps everything consistent and stops those random crashes during the threaded tests!
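
For concreteness, here is a minimal sketch of the locking pattern described above, not the project's actual code. The names importFromFile_Json, asyncImportFromFile_Json, items, undoHistory, and mutex_ come from this thread; the class name ItemManager, the use of nlohmann::json, and the member types are assumptions made purely for illustration.

```cpp
#include <fstream>
#include <future>
#include <map>
#include <mutex>
#include <string>
#include <vector>

#include <nlohmann/json.hpp>  // assumption: the thread doesn't say which JSON library the project uses

class ItemManager {
public:
    // Synchronous import: do the slow file I/O and parsing first, without the lock,
    // then take the lock only once we start mutating shared state.
    bool importFromFile_Json(const std::string& path) {
        std::ifstream in(path);
        if (!in) return false;

        // Parsing happens outside the critical section, so other threads
        // are not blocked while the disk is being read.
        nlohmann::json parsed = nlohmann::json::parse(in, nullptr, /*allow_exceptions=*/false);
        if (parsed.is_discarded() || !parsed.is_object()) return false;

        // Same pattern as addItem: lock before touching items or undoHistory.
        std::lock_guard<std::mutex> lock(mutex_);
        items.clear();
        for (const auto& el : parsed.items()) {
            items[el.key()] = el.value().dump();     // assumed element representation
        }
        undoHistory.push_back("import:" + path);
        return true;
    }

    // Async wrapper: as noted above, it only launches the locked synchronous
    // import on another thread, so it needs no extra locking of its own.
    std::future<bool> asyncImportFromFile_Json(const std::string& path) {
        return std::async(std::launch::async,
                          [this, path] { return importFromFile_Json(path); });
    }

private:
    std::mutex mutex_;
    std::map<std::string, std::string> items;        // assumed container and value type
    std::vector<std::string> undoHistory;            // assumed history representation
};
```
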
@gem870
Owner

gem870 commented Jan 10, 2026

Okay, but on my side everything is running smoothly, even with a heavy test suite running against it. I think you may be accessing it incorrectly. I would like to have screenshots of the steps you took.

And I hope you are working with the updated file.
Please, it is important that if you find a fault, you raise it as an issue so that, once approved, you can go ahead and fix it.

Sorry for taking so long to reply; I was really busy.

@Moin2002-tech
Collaborator Author

Moin2002-tech commented Jan 10, 2026

The reason the app crashes only occasionally is that we are dealing with a "race condition," which is essentially a game of digital musical chairs. When you run the test, multiple threads are rushing to update the same list of items or register types at the exact same microsecond. Most of the time, the CPU manages to sequence them just far enough apart that they don't collide, and the program runs fine. Every so often, though, two threads try to overwrite the exact same piece of memory at once, or one thread moves a list in memory while the other is still trying to read it. That collision corrupts the program's state, and it gets shut down with a segmentation fault to prevent further damage.
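
To make that concrete outside of the project code, here is a tiny self-contained C++ sketch (nothing here is from the repo): two threads insert into the same std::map, and with kUseLock set to false that is exactly the kind of unsynchronized collision described above. It may run fine, corrupt the map, or segfault, depending on how the threads happen to be scheduled.

```cpp
#include <map>
#include <mutex>
#include <string>
#include <thread>

std::map<int, std::string> items;   // shared state, analogous to the manager's items
std::mutex items_mutex;

// Flip this to false to turn the program into a data race: even inserts with
// distinct keys can collide while the map rebalances its internal tree.
constexpr bool kUseLock = true;

void writer(int offset, const std::string& label) {
    for (int i = 0; i < 10000; ++i) {
        if (kUseLock) {
            std::lock_guard<std::mutex> lock(items_mutex);
            items[offset + i] = label;
        } else {
            items[offset + i] = label;   // unsynchronized write: undefined behaviour
        }
    }
}

int main() {
    std::thread t1(writer, 0, "alpha");        // "alpha"/"beta" echo the names from the logs
    std::thread t2(writer, 100000, "beta");
    t1.join();
    t2.join();
}
```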

@Moin2002-tech
Collaborator Author

Moin2002-tech commented Jan 10, 2026

Screenshot from 2026-01-10 13-41-00. I got this error many times while testing the project.

@Moin2002-tech
Collaborator Author

Please merge this into the main repository once testing is complete. I have confirmed that the issue was not with the Google Test Suite, but rather a race condition caused by two threads accessing the same memory location simultaneously. The inconsistency in previous runs was due to thread scheduling; the segmentation fault only triggers when the threads overlap.
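
If it helps to reproduce the overlap deterministically, something along the lines of the sketch below could be added to the suite. This is a hypothetical GoogleTest case, not one that exists in the repo: importFromFile_Json comes from the earlier discussion, while ItemManager and the fixture file name are assumptions for illustration.

```cpp
#include <string>
#include <thread>

#include <gtest/gtest.h>

// Hypothetical stress test: two threads run the JSON import at the same time.
// Without the lock_guard inside importFromFile_Json this crashes intermittently,
// depending on how the scheduler happens to interleave the two threads.
TEST(ItemManagerThreading, ConcurrentJsonImportDoesNotCrash) {
    ItemManager manager;                          // assumed class name
    const std::string path = "test_items.json";   // assumed test fixture file

    auto import = [&] {
        for (int i = 0; i < 100; ++i) {
            manager.importFromFile_Json(path);
        }
    };

    std::thread t1(import);
    std::thread t2(import);
    t1.join();
    t2.join();

    SUCCEED();  // getting here without a segfault is the point of the test
}
```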

@gem870
Owner

gem870 commented Jan 13, 2026

Oh dear! I was able to see the error you are talking about.
In the test suite, some tests are deliberately written to provoke errors yet still count as passing. I use them to check edge cases in the programme. When you see some of those errors displayed, it's not that the tests failed; triggering the error is part of the test, to make sure there are no memory leaks that would break down the system.
If you keep scrolling, you will see several of those tests printing errors. What you should look out for is whether the tests pass at the end of the run; if one fails, the specific test suite will be listed so you know the exact test that failed.
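
In other words, those edge-case tests look roughly like the hypothetical sketch below; the removeItem call and its return value are placeholders for illustration, not the real API of the suite.

```cpp
#include <gtest/gtest.h>

// Hypothetical sketch of an edge-case test that provokes an error on purpose.
// Any error text it prints in the log is expected output; the test only fails
// if the error is NOT handled in the way the assertion describes.
TEST(ItemManagerEdgeCases, RemovingMissingItemIsHandledGracefully) {
    ItemManager manager;                                  // assumed class name
    EXPECT_FALSE(manager.removeItem("does-not-exist"));   // assumed API: reports failure instead of crashing
}
```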

In my own case, all of them passed.
(Screenshot: testing)

I would like to know which branch you're working with, because you might be having issues on one of the branches rather than on main.
The branches are currently being updated, so you will see plenty of issues there. We will have to handle them carefully so that we don't destroy the system with stubborn, hard-to-fix bugs.

@Moin2002-tech
Collaborator Author

Moin2002-tech commented Jan 13, 2026

If you run ./TestApp multiple times, you'll start to see it crash.
By the way, that screenshot I shared isn't from my local machine; it’s actually a build error from the GitHub Actions CI on the main branch. You can see the failure details at the top of the logs.

@gem870
Owner

gem870 commented Jan 13, 2026

(Screenshots: multi1, Singleplatfm)

I didn't set up the platform that failed.
I want to know the exact branch you are working with; that will help to solve the problem.

@Moin2002-tech
Collaborator Author

The main branch

@gem870
Owner

gem870 commented Jan 13, 2026

The main branch is working, and it is the first release of this project.
From the shots I provided, you can see that all test cases pass, so I don't know where your error is coming from.

I would advise you to clone the repo and rebuild the system, following the steps provided in the README, and see the outcome.
