- Troubleshooting
- Submitting a git issue
- Clean reset
- Bad tracking performances
- Non-determinism and run-to-run variability
- Limitations of `main_vo.py` on General Datasets
- Terminal Errors message
- Aborted (core dumped) or Segmentation fault (core dumped)
- RED ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts
- Cannot import DelfFeature2D from pyslam.local_features.feature_delf
- ModuleNotFoundError: No module named hamming/color_utils/pyslam_utils/glutils/sim3solver/trajectory_tools/ not found
- ImportError: /home/.../anaconda3/envs/pyslam/bin/../lib/libgcc_s.so.1: version `GCC_12.0.0' not found (required by /lib/-linux-gnu/libhwy.so.1)
- RuntimeError: The detected CUDA version (11.8) mismatches the version that was used to compile
- Gtk-ERROR **: ... GTK+ 2.x symbols detected. Using GTK+ 2.x and GTK+ 3 in the same process is not supported
- SURF error
- g2o Errors
- When loading a neural network with CUDA everything gets stuck
- Manual install instead of using the provided install scripts
This page contains a small collection of issues/errors that may be experienced, along with their fixes.
FIRST OF ALL:
- Did you read the main README page?
- Did you use the provided INSTALL SCRIPTS?
If not, go back to the README page, read the few lines in the install section, and launch the required install script. The install scripts were created to perform all the required install operations for you and to make the install process as smooth and painless as possible.
If you work under Ubuntu or MacOS, check the specific installation procedures reported in the main README page.
For faster support when opening a new git issue, please provide the following information:
- Git commit (ensure you are on the latest `master` commit).
- Dataset used (if not a standard one, it is expected that the dataset is shared through a link).
- Code configuration:
  - Modified parameters in `config.yaml`
  - Modified parameters in `pyslam/config_parameters.py`
  - Any other changes made to the codebase. You can easily provide this information by running:
    ```bash
    git fetch origin
    git diff --name-status origin/master
    ```
- System configuration:
- OS, CUDA version, etc.
- Is your OS native, or are you using it within Windows, VirtualBox, or a similar virtualization tool?
- Python environment: pyenv, conda, pixi?
- Full console log of your test (including the error messages). In the case of missing libraries, a full console log of the install process is paramount.
- Pictures of plots and 3D viewer
Providing this information is crucial for reproducing and debugging the issue efficiently.
If you want to perform a clean reset and launch a fresh new install (rebuilding everything from scratch), run the following commands:
```bash
./clean.sh --hard    # clean build folders
./pyenv-delete.sh    # delete the "pyslam" virtual environment
```
This is ideal when you pull new updates and need a fresh build and install.
Due to the multi-threaded design (tracking thread + local mapping thread) and the limited speed of the Python implementation (CPython threads do not truly run in parallel), bad tracking performances may occur and vary depending on your machine's computational capabilities. In a few words, it may happen that the local mapping thread is not fast enough to spawn new map points in time for the tracking thread. Newly spawned map points are necessary for the tracking thread to find enough {keypoint}-{map point} correspondences, and hence to stably hold on to the map and proceed along its estimated trajectory. Simply put, the local mapping thread continuously builds/unrolls the fundamental 'carpet' of points (the map) on which the tracking thread 'walks': no 'carpet', no party!
If you experience bad tracking performances, open `pyslam/config_parameters.py` and try setting `kTrackingWaitForLocalMappingToGetIdle=True`.
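For reference, the flag is a plain module-level boolean in `pyslam/config_parameters.py` (sketched below; check the file for the exact comment and surrounding defaults):

```python
# pyslam/config_parameters.py (sketch; check the file for the exact context)
# Make the tracking thread wait for the local mapping thread to get idle,
# trading some frame rate for more reliably spawned map points.
kTrackingWaitForLocalMappingToGetIdle = True
```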
Some amount of run-to-run variability is expected in SLAM/VO pipelines, and it can be amplified in low-structure outdoor scenes (like soil/weed texture) where small early differences can flip later decisions (keyframe insertion, map point culling, relocalization, etc.).
AFAIK, the main sources of non-determinism include:
- Multi-threading / scheduling effects: Several components run concurrently (and OpenCV itself may spawn internal worker threads). Even if you run sequentially, OS scheduling + thread timing can change the order of events (e.g., which measurements make it into a local map update before a decision is taken).
- Feature extraction/matching “order sensitivity”: Some local feature pipelines can be non-deterministic due to internal parallelism and/or order-dependent steps. ORB-SLAM2-style ORB (“ORB2”) is a known example; I added a deterministic variant you can test and enable here:
  - test: `thirdparty/orbslam2_features/test_orb_determinism.py`
  - enable: `pyslam/local_features/feature_tracker_configs.py`
- RANSAC / robust estimation randomness: RANSAC-family methods and several OpenCV routines are inherently stochastic. Setting a single Python seed is generally insufficient for full reproducibility, as randomness may arise from multiple layers (Python, NumPy, OpenCV’s internal C++ RNG, and per-thread execution in parallel code).
- Floating-point reduction order / parallel math (often overlooked): Even without “explicit randomness”, multi-threaded reductions (OpenMP / BLAS / TBB) can yield tiny numeric differences due to floating-point non-associativity; those small differences can cascade in SLAM.
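The last point can be reproduced in a few lines of plain Python: summing the same three numbers in a different order yields two different results, and in a long SLAM pipeline such a tiny difference can later flip a threshold decision:

```python
# Floating-point addition is not associative: the reduction order matters.
a, b, c = 0.1, 0.2, 0.3

left_to_right = (a + b) + c   # 0.6000000000000001
right_to_left = a + (b + c)   # 0.6

# A multi-threaded reduction may use either order (or any other grouping),
# so bitwise-identical results are not guaranteed across runs.
print(left_to_right == right_to_left)  # False
```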
See the related discussion here.
As explained in the main README, main_vo.py implements a minimal monocular visual odometry (VO) pipeline, focusing on the core geometric building blocks rather than a full SLAM system. In particular, it does not include:
- point triangulation and persistent map management
- multi-view optimization (e.g., local bundle adjustment)
- motion priors (e.g., IMU)
At each time step, it estimates the relative camera motion between consecutive frames from two-view epipolar geometry (essential matrix estimation and decomposition).
The pipeline performs reasonably well on KITTI mainly because:
- the camera undergoes dominant forward motion (non-zero baseline)
- parallax is sufficient between consecutive frames
- pure rotations are rare
- the scene contains a good range of depths
These conditions make the essential matrix estimation decently/well-constrained, allowing a sufficiently stable recovery of both rotation and translation direction. On datasets like EuRoC or TUM, several fundamental limitations of this minimal approach become evident:
The essential matrix constrains translation only up to scale and degenerates when translation vanishes:
- In the case of pure rotation, translation is not observable at all
- The decomposition of $E$ yields an arbitrary translation direction
- This leads to scale drift and trajectory instability
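The scale ambiguity is easy to verify numerically: since $E = [t]_\times R$, scaling the translation scales $E$ by the same factor, so the epipolar constraint $x_2^\top E x_1 = 0$ cannot distinguish a short baseline from a long one. A minimal NumPy sketch (illustrative only, not pySLAM code):

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Arbitrary rotation (about the y axis) and translation.
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.5])

E_short = skew(t) @ R        # essential matrix for baseline |t|
E_long = skew(3.0 * t) @ R   # same motion direction, 3x longer baseline

# E_long is just a scalar multiple of E_short, so both satisfy the same
# epipolar constraint: the translation magnitude is unobservable.
print(np.allclose(E_long, 3.0 * E_short))  # True
```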
When inter-frame motion is small:
- triangulation angles are very narrow
- depth becomes highly uncertain
- translation direction is poorly constrained
As a result:
- even small noise in feature matching leads to large pose errors
- scale estimation (even if injected externally) becomes inconsistent
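The effect of a narrow triangulation angle can be illustrated with a small two-view linear (DLT) triangulation experiment: with identical measurement noise, a tiny baseline inflates the depth error dramatically. This is a self-contained NumPy sketch with made-up numbers, not pySLAM code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views."""
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def mean_depth_error(baseline, noise=1e-3, trials=200, seed=0):
    """Average absolute depth error for a given stereo baseline."""
    rng = np.random.default_rng(seed)
    X = np.array([0.5, 0.3, 10.0])                 # 3D point at depth 10
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])  # camera 1 at the origin
    P2 = np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])
    errs = []
    for _ in range(trials):
        # Noisy normalized-plane projections in both views.
        x1 = X[:2] / X[2] + rng.normal(0.0, noise, 2)
        x2 = (X[:2] - [baseline, 0.0]) / X[2] + rng.normal(0.0, noise, 2)
        errs.append(abs(triangulate(P1, P2, x1, x2)[2] - X[2]))
    return float(np.mean(errs))

small = mean_depth_error(baseline=0.01)  # near-degenerate baseline
large = mean_depth_error(baseline=1.0)   # comfortable baseline
print(small > 10 * large)  # tiny baseline -> much larger depth error
```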
Since the system:
- only uses two-view geometry
- does not refine poses jointly over a window
it cannot:
- correct drift over time
- enforce multi-view geometric consistency
This makes it very sensitive to:
- noisy matches
- outliers
- local degeneracies
Without persistent 3D points:
- there is no re-use of past structure
- each frame pair is treated independently
This prevents:
- stabilizing pose estimates using long-term constraints
- filtering inconsistent geometry over time
Datasets like EuRoC are specifically designed for visual-inertial and stereo odometry:
- IMU measurements and/or stereo vision provide metric scale observability and enable robust motion estimation, even under low-parallax or rotational motion
- without these additional sources of information, pure monocular VO becomes ill-posed or poorly conditioned in many segments (e.g., pure rotation, very small baseline, or low-texture scenes)
Similarly, TUM RGB-D provides depth, which:
- removes scale ambiguity
- avoids degeneracy in translation estimation
In summary, main_vo.py is intentionally a minimal and educational implementation of monocular VO.
It works under favorable motion conditions (e.g., KITTI), but it is expected to fail when:
- motion lacks sufficient translation
- parallax is small
- rotations dominate
- no multi-view optimization is available
For more challenging datasets (EuRoC, TUM) with degenerate or near-degenerate motions, a minimal two-view VO approach is insufficient. More complete methods are required, such as:
- visual-inertial odometry (VIO)
- stereo or RGB-D odometry
- or SLAM systems with multi-view optimization (e.g., local bundle adjustment)
These approaches provide additional constraints that make motion and scale estimation well-conditioned.
It is very likely this issue is related to the matplotlib and Qt installation. See this issue for a related discussion. The quickest fix should be to reinstall matplotlib and possibly downgrade its version:
```bash
pip uninstall matplotlib
pip install "matplotlib<3.8"
```
In another pySLAM issue, the problem was generated (under conda) by a conflicting version of pyarrow, and it was fixed by running:
```bash
pip install "pyarrow<19"
```
If the problem persists, you can gather more details by running your script under gdb:
```bash
gdb -ex run --args python main_slam.py
# when it aborts:
(gdb) bt
```
Please consider opening an issue and attaching the logs (including the gdb backtrace), so we can investigate further.
I got a segmentation fault due to a tensorflow import problem. I solved it with:
```bash
pip uninstall tensorflow
pip install "tensorflow==2.15.*"
```
Under macOS, a first step is to run:
```bash
python3 -I -X dev -X faulthandler main_slam.py
```
and check the output.
RED ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts
I have verified through numerous installation tests that the errors reported by pip's dependency resolver are typically warnings rather than critical issues. These messages do not prevent the main and test scripts from running successfully.
If you are hitting the following warning:
```
WARNING: cannot import DelfFeature2D from pyslam.local_features.feature_delf, check the file docs/TROUBLESHOOTING.md
```
then run the following commands:
```bash
cd <pyslam_root>/thirdparty/tensorflow_models/research/delf
protoc -I=. --python_out=. delf/protos/*.proto
```
ModuleNotFoundError: No module named hamming / color_utils / pyslam_utils / glutils / sim3solver / trajectory_tools / not found
If you encounter an error like the following one after installation:
```
ModuleNotFoundError: No module named 'hamming'
```
or, similarly, one of the other modules built inside the cpp folder (hamming, glutils, sim3solver, trajectory_tools, etc.) is missing, follow these steps.
- Verify the C++ build outputs: check that the compiled `.so` libraries exist under `<pyslam>/cpp/lib`:
  ```
  cpp
  ├── lib
  │   ├── color_utils.cpython-311-<your-arch>-linux-gnu.so
  │   ├── glutils.cpython-311-<your-arch>-linux-gnu.so
  │   ├── hamming.cpython-311-<your-arch>-linux-gnu.so
  │   ├── pnpsolver.cpython-311-<your-arch>-linux-gnu.so
  │   ├── pyslam_utils.cpython-311-<your-arch>-linux-gnu.so
  │   ├── sim3solver.cpython-311-<your-arch>-linux-gnu.so
  │   ├── trajectory_tools.cpython-311-<your-arch>-linux-gnu.so
  │   └── volumetric.cpython-311-x86_64-linux-gnu.so
  ```
  If any of these files are missing, the build did not complete successfully.
- Confirm the custom OpenCV build is installed under `thirdparty/opencv` and that all libraries exist in `<pyslam-working-directory>/thirdparty/opencv/install/lib/`. You can also run:
  ```bash
  cd <pyslam-working-directory>
  . pyenv-activate.sh   # activate the Python environment
  ./scripts/opencv_check.py
  ```
- Rebuild the C++ pybind11 libraries: run the following commands from your working copy of `pyslam`:
  ```bash
  cd <pyslam-working-directory>
  . pyenv-activate.sh   # activate the Python environment
  cd cpp
  ./clean.sh   # remove previous build artifacts
  ./build.sh   # rebuild pybind11 libraries
  ```
- Debug further if issues persist: make sure you activated the `<pyslam>` Python environment. Inspect the build logs from `./build.sh` for possible missing dependencies or compiler errors.
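To quickly check which of the compiled modules are importable from the active environment, you can use a throwaway helper like the following (a sketch, not part of pySLAM; module names are taken from the `cpp/lib` listing above):

```python
import importlib

def find_missing_modules(names):
    """Return the subset of module names that cannot be imported."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing

# Module names taken from the cpp/lib listing above.
modules = ["hamming", "color_utils", "pyslam_utils", "glutils",
           "pnpsolver", "sim3solver", "trajectory_tools", "volumetric"]
print(find_missing_modules(modules))  # expect [] after a successful build
```

Run it from the activated pySLAM environment; any name it prints corresponds to a library that is missing or not on the Python path.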
If you are still stuck, feel free to open a git issue and include the terminal output you got with the previous commands.
ImportError: /home/.../anaconda3/envs/pyslam/bin/../lib/libgcc_s.so.1: version `GCC_12.0.0' not found (required by /lib/-linux-gnu/libhwy.so.1)
If you hit this issue, please refer to this discussion.
If you get the following error (or a similar one):
```
RuntimeError:
The detected CUDA version (11.8) mismatches the version that was used to compile
PyTorch (12.1). Please make sure to use the same CUDA versions.
```
then your detected CUDA version should actually be 11.8 and the following command might help:
```bash
pip3 install torch==2.2.0+cu118 torchvision==0.17+cu118 --index-url https://download.pytorch.org/whl/cu118
```
Another solution is to install CUDA 12.1, as suggested by the error message.
To know which torch version you have installed and which CUDA version it was built against, you can run:
```bash
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```
In case your detected CUDA version is a different one, you can easily adjust the above command by changing the cuXYZ version code. See this reference for further details:
https://pytorch.org/get-started/previous-versions/
Gtk-ERROR **: ... GTK+ 2.x symbols detected. Using GTK+ 2.x and GTK+ 3 in the same process is not supported
If you hit such an error, then uncomment (or add) the following code in both `main_vo.py` and `main_slam.py`:
```python
# fix from https://github.com/tum-vision/fastfusion/issues/21
import gi
gi.require_version('Gtk', '2.0')
```
This will solve the problem.
In order to use non-free OpenCV features (e.g., SURF), you need to install the module opencv-contrib-python built with the option OPENCV_ENABLE_NONFREE enabled.
The provided install scripts will install a recent OpenCV version (>=4.10) with non-free modules enabled (see the provided scripts `scripts/install_pip3_packages.sh` and `scripts/install_opencv_python.sh`). To quickly verify your installed OpenCV version, run:
```bash
$ . pyenv-activate.sh
$ ./scripts/opencv_check.py
```
or use the following command:
```bash
$ python3 -c "import cv2; print(cv2.__version__)"
```
How to check if you have non-free OpenCV module support (no errors imply success):
```bash
$ python3 -c "import cv2; detector = cv2.xfeatures2d.SURF_create()"
```
If you run into the following error:
```
File ".../pyslam/optimizer_g2o.py", line 96, in optimization
    edge.fx = f.fx
AttributeError: 'g2o.EdgeSE3ProjectXYZ' object has no attribute 'fx'
```
that is because you did not run the script
```bash
$ ./scripts/install_thirdparty.sh
```
as explained in the main README file. That is required in order to properly build and install the required third-party libs. Please, follow these steps:
- check you are on the correct pyslam branch according to your OS
- use the pyslam install scripts
- open a terminal in the root folder of the repo and clean with `./clean.sh`
- launch a new install with `./scripts/install_all_venv.sh`
- please, double check that you have a file like `thirdparty/g2opy/lib/g2o.cpython-36m-x86_64-linux-gnu.so`
If you get an error message like:
```
import g2o
ModuleNotFoundError: No module named 'g2o'
```
First of all, check whether you have a compiled `thirdparty/g2opy/lib/g2o.cpython-*-linux-gnu.so`. If not, did you use one of the `install_all` scripts? Depending on your selected working environment (native, conda, python3-venv), you need to launch its companion `install_all` script in order to actually install all the required libraries (including g2o). Please, read the install instructions in the main README file.
On the other hand, if you already have a compiled `thirdparty/g2opy/lib/g2o.cpython-*-linux-gnu.so`, it is very likely you have libraries compiled in a 'mixed' way. In that case, try to clean everything with the script `clean.sh`, and follow the installation procedure again (see the main README file).
Last but not least, please recall that you need to activate your pyenv/conda environment before launching any pySLAM script.
I got this issue with a new NVIDIA GPU while loading the SuperPoint neural network. The load hangs because the CUDA code was not compiled for your GPU architecture. Two options:
- Quick workaround: disable CUDA (for instance, in `SuperPointFeature2D()` set the default class option `do_cuda=False`). The computations will run on the CPU.
- Proper fix: install a PyTorch build that matches your CUDA toolkit and GPU architecture. See these links:
  - https://stackoverflow.com/questions/75682385/runtimeerror-cuda-error-no-kernel-image-is-available-for-execution-on-the-devi
  - https://discuss.pytorch.org/t/failed-to-load-image-python-extension-could-not-find-module/140278

  The install scripts should have taken care of this, though.
If you really want to install things manually instead of using the install scripts, follow the same steps as the install scripts, and good luck!