cd /home/$USER/workspace/
git clone [email protected]:behnamasadi/robotic_notes.git
vcpkg is configured as a git submodule. Initialize it:
cd /home/$USER/workspace/robotic_notes
git submodule update --init --recursive
If you need to add vcpkg as a submodule (if not already configured):
cd /home/$USER/workspace/robotic_notes
git submodule add https://github.com/microsoft/vcpkg.git vcpkg
Then run the bootstrap script. On Windows:
.\vcpkg\bootstrap-vcpkg.bat
On Linux/macOS:
./vcpkg/bootstrap-vcpkg.sh
The bootstrap script performs prerequisite checks and downloads the vcpkg executable.
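To confirm the bootstrap succeeded, you can check for the downloaded binary; this sketch assumes the submodule lives at ./vcpkg relative to the repo root:

```shell
# Sanity check after bootstrapping: the vcpkg binary should now sit inside
# the submodule directory.
if [ -x ./vcpkg/vcpkg ]; then
  ./vcpkg/vcpkg version
else
  echo "vcpkg binary not found - run the bootstrap script first"
fi
```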
To update vcpkg to the latest version (when using as a submodule):
cd /home/$USER/workspace/robotic_notes
git submodule update --remote vcpkg
This will fetch the latest changes from the vcpkg repository and update the submodule to the latest commit.
Important: After updating vcpkg, it will show as modified in git status because the submodule is now pointing to a different commit than what's recorded in the parent repository. To persist this update:
git add vcpkg
git commit -m "Update vcpkg submodule to latest version"
To check the status of your submodules:
git submodule status
If you see a + prefix, it means the submodule has new commits that aren't recorded in the parent repository yet.
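As an illustration of that prefix, one status line can be classified like this (the hash and branch in the example line are made up, not real output from this repo):

```shell
# Classify a (made-up) `git submodule status` line by its prefix:
#   '+'       = checked-out commit differs from the one recorded in the parent
#   '-'       = submodule not initialized
#   no prefix = submodule matches the recorded commit
line="+a1b2c3d vcpkg (heads/master)"
case "$line" in
  +*) echo "vcpkg has unrecorded commits" ;;
  -*) echo "vcpkg is not initialized" ;;
  *)  echo "vcpkg matches the recorded commit" ;;
esac
```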
To reset vcpkg back to the commit recorded in the parent repository (if you don't want the update):
git submodule update vcpkg
Note: When using vcpkg in manifest mode (with vcpkg.json), you don't need to run vcpkg update. Instead, modify your vcpkg.json file and run cmake again, which will automatically install the updated packages. However, updating the vcpkg submodule itself can bring bug fixes and new features to the vcpkg tool.
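For reference, a minimal manifest might look like the following; the dependency names here are illustrative placeholders, not the actual contents of this repo's vcpkg.json, and the file is written to /tmp purely as a demo:

```shell
# Illustrative vcpkg.json manifest (placeholder name and dependencies).
cat > /tmp/vcpkg.json <<'EOF'
{
  "name": "robotic-notes-demo",
  "version": "0.1.0",
  "dependencies": [ "opencv", "eigen3" ]
}
EOF
cat /tmp/vcpkg.json
```

Editing the `dependencies` array and re-running cmake is what triggers vcpkg to install the new packages in manifest mode.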
set the path:
export VCPKG_ROOT=$PWD/vcpkg
export PATH=$VCPKG_ROOT:$PATH
Setting VCPKG_ROOT tells vcpkg where your vcpkg instance is located.
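These exports only last for the current shell. To persist them, append them to your shell rc file; in this sketch /tmp/demo_bashrc stands in for ~/.bashrc (or ~/.zshrc), and the vcpkg path is an assumed example:

```shell
# Persist the environment variables across shells. The rc path and vcpkg
# location below are stand-ins; adjust both for your setup.
rc=/tmp/demo_bashrc
echo 'export VCPKG_ROOT=$HOME/workspace/robotic_notes/vcpkg' >> "$rc"
echo 'export PATH=$VCPKG_ROOT:$PATH' >> "$rc"
grep VCPKG_ROOT "$rc"
```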
Install required system dependencies for vcpkg (on Linux):
sudo apt-get install -y bison flex build-essential cmake autoconf autoconf-archive automake libtool libltdl-dev libx11-dev libxft-dev libxext-dev libxtst-dev libxrandr-dev ninja-build pkg-config
These dependencies are needed for vcpkg to build packages like gettext, gperf, cairo (with x11 feature), at-spi2-core, gtk3, libxcrypt, and other C++ libraries. The libltdl-dev package provides libtool development files required by libxcrypt. The ninja-build and pkg-config packages are required for meson-based builds.
Note: The CMakeLists.txt is configured to build only release versions of vcpkg packages (not debug) to reduce build time and disk usage. This means you'll only see -- Configuring x64-linux-rel and not -- Configuring x64-linux-dbg when building. If you need debug builds, you can override this by setting -DVCPKG_TARGET_TRIPLET=x64-linux when running cmake.
Now you can run:
cmake -S . -B build \
-DCMAKE_TOOLCHAIN_FILE=./vcpkg/scripts/buildsystems/vcpkg.cmake \
-DCMAKE_BUILD_TYPE=Release \
-DVCPKG_TARGET_TRIPLET=x64-linux-release
The VCPKG_TARGET_TRIPLET=x64-linux-release option ensures vcpkg only builds release packages, which significantly reduces build time (especially for large packages like OpenCV) and disk space usage. This is already configured in CMakeLists.txt, but you can explicitly set it as shown above.
cmake --build build --parallel
You can test the GitHub Actions workflow locally before pushing using act, which runs GitHub Actions workflows in Docker containers.
Prerequisites:
- Docker installed and running
- `act` installed (check with `which act`)
Install act (if not already installed):
# On Linux/macOS
curl https://raw.githubusercontent.com/nektos/act/master/install.sh | sudo bash
# Or using package managers
# Ubuntu/Debian
sudo apt-get install act
# macOS
brew install act
Quick test (CMake configuration only):
cd /home/$USER/workspace/robotic_notes
# Test just the CMake configuration step (dry run)
act -j build -s GITHUB_TOKEN=dummy --dryrun
Full workflow test (takes 30+ minutes):
cd /home/$USER/workspace/robotic_notes
# Run the full workflow
act workflow_dispatch \
-W . \
--container-architecture linux/amd64 \
-P ubuntu-latest=catthehacker/ubuntu:act-latest \
--verbose
Test specific job:
# Run just the build job
act -j build -W . \
--container-architecture linux/amd64 \
-P ubuntu-latest=catthehacker/ubuntu:act-latest
Notes:
- First run will download Docker images (can be large, ~10GB+)
- Full build takes 30+ minutes as it builds all dependencies from scratch
- Uses your local files, so you can test changes immediately
- Some steps may behave slightly differently than on GitHub Actions
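Before committing to a long run, it can help to confirm which jobs act detects in the repository; this sketch is guarded so it is a no-op on machines where act is not installed:

```shell
# Optional sanity check: list the workflow jobs act can see (e.g. the
# 'build' job used above). Falls through cleanly if act is absent.
if command -v act >/dev/null 2>&1; then
  act -l 2>/dev/null || echo "no workflows found in this directory"
else
  echo "act not installed - skipping"
fi
```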
Faster alternative - test CMake configuration manually:
cd /home/$USER/workspace/robotic_notes
# Clean build directory
rm -rf build
# Run the same CMake command as CI/CD
cmake -S . -B build \
-DCMAKE_TOOLCHAIN_FILE=./vcpkg/scripts/buildsystems/vcpkg.cmake \
-DCMAKE_BUILD_TYPE=Release \
-DVCPKG_TARGET_TRIPLET=x64-linux-release
To build the rerun SDK on its own, comment out everything in the CMakeLists.txt and keep only this part:
include(FetchContent)
FetchContent_Declare(rerun_sdk URL https://github.com/rerun-io/rerun/releases/latest/download/rerun_cpp_sdk.zip)
FetchContent_MakeAvailable(rerun_sdk)
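For reference, a minimal standalone CMakeLists.txt built around only those lines might look like the sketch below; the project name, target name, and main.cpp are placeholders, while `rerun_sdk` is the link target the SDK archive exports:

```cmake
cmake_minimum_required(VERSION 3.16)
project(rerun_minimal)

include(FetchContent)
FetchContent_Declare(rerun_sdk URL https://github.com/rerun-io/rerun/releases/latest/download/rerun_cpp_sdk.zip)
FetchContent_MakeAvailable(rerun_sdk)

# main.cpp is a placeholder source file for your own logging code.
add_executable(rerun_demo main.cpp)
target_link_libraries(rerun_demo PRIVATE rerun_sdk)
```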
Note: Building arrow_cpp (which is a dependency of rerun_sdk) can take a very long time and may fail with compilation errors (e.g., abseil build failures with newer GCC versions like GCC 13). If you encounter build errors with arrow_cpp, comment out all other code in CMakeLists.txt and only keep the rerun SDK part above. Also install the rerun server via:
conda create -n robotic_notes
conda activate robotic_notes
conda install python=3.13
cd /home/$USER/anaconda3/envs/robotic_notes/
Create this soft link:
ln -s /home/$USER/workspace/robotic_notes /home/$USER/anaconda3/envs/robotic_notes/src
Install the python packages:
pip3 install rerun-sdk
conda install -c conda-forge opencv
pip install graphslam
conda install conda-forge::gtsam
conda install conda-forge::matplotlib
conda install conda-forge::plotly
conda install -c conda-forge jupyterlab
pip install gradio_rerun
pip install ahrs
pip install pyceres
pip install liegroups
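After installing, a quick import check inside the activated environment catches missing packages early; the module names below are the import names of a few of the packages above (rerun-sdk imports as `rerun`, opencv as `cv2`):

```shell
# Sanity-check a subset of the installed Python packages by importing them.
# Missing packages are reported rather than aborting the loop.
for pkg in rerun cv2 gtsam matplotlib; do
  python3 -c "import $pkg" 2>/dev/null \
    && echo "$pkg OK" || echo "$pkg missing"
done
```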
A Dockerfile is provided for a complete development environment with ROS2 Humble and Gazebo Fortress pre-installed. This is the recommended way to work with ROS2 and Gazebo integration.
Build the Docker image:
cd /home/$USER/workspace/robotic_notes
docker build -t ros2-humble-gazebo-fortress .
Run the container:
# Basic run
docker run -it --rm ros2-humble-gazebo-fortress
# With X11 forwarding for GUI applications (rviz2, rqt, Gazebo)
docker run -it --rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
ros2-humble-gazebo-fortress
# With GPU support (for NVIDIA GPUs)
docker run -it --rm \
--gpus all \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
ros2-humble-gazebo-fortress
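For the X11 examples above, the host X server usually also has to accept connections from the container; a common approach is to grant (and afterwards revoke) access for local Docker clients, guarded here so it is safe on headless machines:

```shell
# Allow the container's GUI apps (rviz2, Gazebo, ...) to reach the host
# X server. Guarded: a no-op where there is no X session or no xhost.
if [ -n "${DISPLAY:-}" ] && command -v xhost >/dev/null 2>&1; then
  xhost +local:docker    # grant access to local Docker clients
  # ... run one of the docker run commands above ...
  xhost -local:docker    # revoke access when done
else
  echo "no X session detected - skipping xhost setup"
fi
```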
# With workspace mount
docker run -it --rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $(pwd):/workspace \
ros2-humble-gazebo-fortress
Delete existing containers:
# List all containers (including stopped ones)
docker ps -a
# Stop a running container (replace CONTAINER_ID or NAME with your container)
docker stop CONTAINER_ID_OR_NAME
# Delete a stopped container
docker rm CONTAINER_ID_OR_NAME
# Force delete a running container
docker rm -f CONTAINER_ID_OR_NAME
# Delete all stopped containers
docker container prune
# Delete all containers (stopped and running) - use with caution!
docker rm -f $(docker ps -aq)
The Dockerfile includes:
- ROS2 Humble Desktop (full installation)
- Gazebo Fortress (Gazebo Sim v6)
- ROS2-Gazebo integration packages (ros-humble-ros-gz, ros-humble-ros-gz-bridge, etc.)
- Navigation2, SLAM Toolbox, RViz2, and other common ROS2 packages
- Development tools and dependencies
- `gz` command wrapper for backward compatibility (maps `gz sim` → `ign gazebo`)
For detailed installation instructions, usage examples, and troubleshooting, see ROS2 Gazebo Integration Documentation.
- Configuration of Robot
- Configuration Space (C-space)
- Degrees of freedom
- Task Space
- Work Space
- Dexterous space
- dof
- Topology
- Algebraic topology (playlist)
- Non-Holonomic Constraints, Pfaffian Constraints and Holonomic Constraints
- Kinematics of Differential Drive Robots and Wheel odometry
- Velocity-based (dead reckoning)
- Nonlinear uncertainty model associated with a robot's position over time (The Banana Distribution is Gaussian)
- 1. Global References
- 2. Accelerometer Model
- 3. Gyroscope Model
- 4. Attitude from gravity (Tilt)
- Expressing IMU reading with Quaternion
- 5. Quaternion from Accelerometer
- 6. Quaternion Integration
- 7.1 Quaternion Derivative
- Relationship Between Euler-Angle Rates and Body-Axis Rates
- Complementary Filter
- Quaternion-Based Complementary Filter
- Accelerometer-Based Correction
- Attitude from angular rate (Attitude propagation)
- IMU Integration
- Noise Spectral Density
- Signal-to-noise Ratio
- Allan Variance curve
- Autoregressive model
- Madgwick Orientation Filter
- Mahony Orientation Filter
- Simulating IMU Measurements
- IMU Propagation Derivations
- IMU Noise Model
- The standard deviation of the discrete-time noise process
- Datasets and Calibration Targets
- Supported Camera Models and Distortion
- Camera Calibration
- Camera IMU Calibration
- 1. Names
- 2. Remapping Arguments
- NodeHandles
- Roslaunch
- URDF
- Publishing the State
- ROS best practices
- move_base
- ROS Odometry Model
- ROS State Estimation
- EKF Implementations
- Differential Drive Wheel Systems
- Installation
- Configuration
- Colcon
- Using Xacro
- Creating a launch file
- Nav2 - ROS 2 Navigation Stack
- teleop_twist_keyboard
- Gazebo Versions
- Installation
- Building a model
- Building world
- Moving the robot
- Sensors
- Spawn URDF
- ROS 2 integration
- Bayes filter
- Extended Kalman Filter
- EKF Implementations
- EKF for Differential Drive Robot
- EKF
- Invariant Extended Kalman Filter (IEKF)
- Multi-State Constraint Kalman Filter (MSCKF)
- STATE ESTIMATION FOR ROBOTICS
- FilterPy
- Quaternion kinematics for the error-state Kalman filter
- Error State Kalman Filter (ESKF)
- Error State Extended Kalman Filter (ES-EKF)
- Error-State Extended Kalman Filter (IMU, GNSS, and LiDAR)
- Active Exposure Control for Robust Visual Odometry in HDR Environments
- Pose Graph SLAM
- nano-pgo
- g2o
- Factor Graph GTSAM iSAM2
- Resilient Autonomy in Perceptually-degraded Environments
- HBA Large-Scale LiDAR Mapping Module
- Hierarchical, multi-resolution volumetric mapping (wavemap)
- kiss-icp
- TagSLAM SLAM with tags
- OpenDroneMap
- Interactive SLAM
- Volumetric TSDF Fusion of Multiple Depth Maps
- Euclidean Signed Distance Field (ESDF)
- Lidar odometry smoothing using ES EKF and KissICP for Ouster sensors with IMUs
- Multisensor-aided Inertial Navigation System (MINS)
- GLOMAP explained
- Zero-Shot Point Cloud Registration
- Open Keyframe-based Visual-Inertial SLAM okvis
- HybVIO
- SVO Pro
- OpenVINS
- Error State Kalman Filter VIO (ESKF-VIO)
- Kimera-VIO
- 3D Mapping Library For Autonomous Robots
- Benchmark Comparison of Monocular Visual-Inertial Odometry Algorithms for Flying Robots
- A Comparison of Modern General-Purpose Visual SLAM Approaches
- ETH3D
- rvp group
- A Stereo Event Camera Dataset for Driving Scenarios DSEC
- FAST-LIO (Fast LiDAR-Inertial Odometry)
- incremental Generalized Iterative Closest Point (GICP) based tightly-coupled LiDAR-inertial odometry (LIO), iG-LIO
- Direct LiDAR-Inertial Odometry: Lightweight LIO with Continuous-Time Motion Correction
- Robust Real-time LiDAR-inertial Initialization
- CT-LIO: Continuous-Time LiDAR-Inertial Odometry
- Lidar SLAM for Automated Driving (MATLAB learning)
- LIO-SAM
- GLIM
- Lidar-Monocular Visual Odometry
- Robust Rotation Averaging
- Bundler
- Noah Snavely Reprojection Error
- Global Structure-from-Motion Revisited
- LightGlue
- DenseSFM
- Pixel-Perfect Structure-from-Motion
- image-matching-webui
- Gaussian Splatting
- GANeRF
- DSAC*
- Tracking Any Point (TAP)
- image-matching-benchmark
- Local Feature Matching at Light Speed
- Hierarchical Localization
- instant-ngp
- NeRF-SLAM
- DROID-SLAM
- ACE0
- A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets
- DoubleTake: Geometry Guided Depth Estimation
- Mitigating Motion Blur in Neural Radiance Fields with Events and Frames
- LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry
- MegaScenes: Scene-Level View Synthesis at Scale
- Intrinsic Image Diffusion for Indoor Single-view Material Estimation
- Vidu4D: Single Generated Video to High-Fidelity 4D Reconstruction with Dynamic Gaussian Surfels
- Detector-Free Structure from Motion
- Continuous 3D Perception Model with Persistent State
- MegaSaM: Accurate, Fast and Robust Structure and Motion from Casual Dynamic Videos
- Mast3r Slam with Rerun
- Stereo Any Video: Temporally Consistent Stereo Matching
- Multi-view Reconstruction via SfM-guided Monocular Depth Estimation
- UniK3D: Universal Camera Monocular 3D Estimation
- Depth Any Camera: Zero-Shot Metric Depth Estimation from Any Camera
- Fast3R: Towards 3D Reconstruction of 1000+ Images in One Forward Pass
- VGGT: Visual Geometry Grounded Transformer
- MAGiC-SLAM: Multi-Agent Gaussian Globally Consistent SLAM
- Speedy MASt3R
- CURL-MAP
- Procrustes Analysis
- Wahba's Problem
- Quaternion Estimator Algorithm (QUEST)
- Kabsch Algorithm
- Umeyama Algorithm
- Iterative Closest Point (ICP)
- KISS-ICP
- Modern Robotics Mechanics, Planning, and Control (Kevin M. Lynch, Frank C. Park)
- Modern Robotics Mechanics, Planning, and Control (Instructor Solution Manual, Solutions )
- MODERN ROBOTICS MECHANICS, PLANNING, AND CONTROL (Practice Exercises)
- Basic Knowledge on Visual SLAM: From Theory to Practice, by Xiang Gao, Tao Zhang, Qinrui Yan and Yi Liu
- STATE ESTIMATION FOR ROBOTICS (Timothy D. Barfoot)
- SLAM for Dummies
- VSLAM Handbook
- SLAM Handbook
- Matrix Calculus (for Machine Learning and Beyond)
- Reinforcement Learning: A Comprehensive Overview