This repo contains my snippets and tutorials on Lie Groups and Lie Algebras, Topology and Configuration Spaces of Robots, IMUs, ROS2 Gazebo Integration, State Estimation, VIO, LIO, and Deep Learning-based SLAM.


behnamasadi/robotic_notes


Installation and Requirements


C++ Dependencies

cd /home/$USER/workspace/
git clone git@github.com:behnamasadi/robotic_notes.git

vcpkg is configured as a git submodule. Initialize it:

cd /home/$USER/workspace/robotic_notes
git submodule update --init --recursive

If you need to add vcpkg as a submodule (if not already configured):

cd /home/$USER/workspace/robotic_notes
git submodule add https://github.com/microsoft/vcpkg.git vcpkg

Then run the bootstrap script. On Windows:

.\vcpkg\bootstrap-vcpkg.bat

On Linux/macOS:

./vcpkg/bootstrap-vcpkg.sh

The bootstrap script performs prerequisite checks and downloads the vcpkg executable.
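A quick way to confirm the bootstrap succeeded is to check for the resulting executable (the path assumes the submodule layout used above):

```shell
# Check that bootstrap-vcpkg produced the vcpkg executable in the
# submodule directory; prints a status line either way.
if [ -x ./vcpkg/vcpkg ]; then
  echo "bootstrap OK"
else
  echo "bootstrap incomplete: ./vcpkg/vcpkg not found"
fi
```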

To update vcpkg to the latest version (when using as a submodule):

cd /home/$USER/workspace/robotic_notes
git submodule update --remote vcpkg

This will fetch the latest changes from the vcpkg repository and update the submodule to the latest commit.

Important: After updating vcpkg, it will show as modified in git status because the submodule is now pointing to a different commit than what's recorded in the parent repository. To persist this update:

git add vcpkg
git commit -m "Update vcpkg submodule to latest version"

To check the status of your submodules:

git submodule status

If you see a + prefix, it means the submodule has new commits that aren't recorded in the parent repository yet.

To reset vcpkg back to the commit recorded in the parent repository (if you don't want the update):

git submodule update vcpkg

Note: When using vcpkg in manifest mode (with vcpkg.json), you don't need to run vcpkg update. Instead, modify your vcpkg.json file and run cmake again, which will automatically install the updated packages. However, updating the vcpkg submodule itself can bring bug fixes and new features to the vcpkg tool.
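For reference, a manifest-mode setup looks like the sketch below. The dependency names (`eigen3`, `opencv`) are illustrative placeholders, not necessarily what this repo's vcpkg.json actually lists; in manifest mode, vcpkg installs whatever appears under "dependencies" during the cmake configure step.

```shell
# Write an illustrative vcpkg.json manifest (to /tmp so nothing real is
# overwritten), then print it.
cat > /tmp/vcpkg_json_example <<'EOF'
{
  "name": "robotic-notes",
  "version-string": "0.1.0",
  "dependencies": [
    "eigen3",
    "opencv"
  ]
}
EOF
cat /tmp/vcpkg_json_example
```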

Set the environment variables:

export VCPKG_ROOT=$PWD/vcpkg
export PATH=$VCPKG_ROOT:$PATH

Setting VCPKG_ROOT tells vcpkg where your vcpkg instance is located.

Install required system dependencies for vcpkg (on Linux):

sudo apt-get install -y bison flex build-essential cmake autoconf autoconf-archive automake libtool libltdl-dev libx11-dev libxft-dev libxext-dev libxtst-dev libxrandr-dev ninja-build pkg-config

These dependencies are needed for vcpkg to build packages like gettext, gperf, cairo (with x11 feature), at-spi2-core, gtk3, libxcrypt, and other C++ libraries. The libltdl-dev package provides libtool development files required by libxcrypt. The ninja-build and pkg-config packages are required for meson-based builds.

Note: The CMakeLists.txt is configured to build only release versions of vcpkg packages (not debug) to reduce build time and disk usage. This means you'll only see -- Configuring x64-linux-rel and not -- Configuring x64-linux-dbg when building. If you need debug builds, you can override this by setting -DVCPKG_TARGET_TRIPLET=x64-linux when running cmake.

Now you can run:

cmake -S . -B build \
  -DCMAKE_TOOLCHAIN_FILE=./vcpkg/scripts/buildsystems/vcpkg.cmake \
  -DCMAKE_BUILD_TYPE=Release \
  -DVCPKG_TARGET_TRIPLET=x64-linux-release

The VCPKG_TARGET_TRIPLET=x64-linux-release option ensures vcpkg only builds release packages, which significantly reduces build time (especially for large packages like OpenCV) and disk space usage. This is already configured in CMakeLists.txt, but you can explicitly set it as shown above.

cmake --build build --parallel

Testing CI/CD Locally

You can test the GitHub Actions workflow locally before pushing using act, which runs GitHub Actions workflows in Docker containers.

Prerequisites:

  • Docker installed and running
  • act installed (check with which act)

Install act (if not already installed):

# On Linux/macOS
curl https://raw.githubusercontent.com/nektos/act/master/install.sh | sudo bash

# Or using package managers
# Ubuntu/Debian (note: act may not be in the default repositories;
# the install script above is the more reliable route)
sudo apt-get install act

# macOS
brew install act

Quick test (CMake configuration only):

cd /home/$USER/workspace/robotic_notes
# Test just the CMake configuration step (dry run)
act -j build -s GITHUB_TOKEN=dummy --dryrun

Full workflow test (takes 30+ minutes):

cd /home/$USER/workspace/robotic_notes
# Run the full workflow
act workflow_dispatch \
  -W . \
  --container-architecture linux/amd64 \
  -P ubuntu-latest=catthehacker/ubuntu:act-latest \
  --verbose

Test specific job:

# Run just the build job
act -j build -W . \
  --container-architecture linux/amd64 \
  -P ubuntu-latest=catthehacker/ubuntu:act-latest

Notes:

  • First run will download Docker images (can be large, ~10GB+)
  • Full build takes 30+ minutes as it builds all dependencies from scratch
  • Uses your local files, so you can test changes immediately
  • Some steps may behave slightly differently than on GitHub Actions

Faster alternative - test CMake configuration manually:

cd /home/$USER/workspace/robotic_notes
# Clean build directory
rm -rf build
# Run the same CMake command as CI/CD
cmake -S . -B build \
  -DCMAKE_TOOLCHAIN_FILE=./vcpkg/scripts/buildsystems/vcpkg.cmake \
  -DCMAKE_BUILD_TYPE=Release \
  -DVCPKG_TARGET_TRIPLET=x64-linux-release

To build only the rerun examples, comment out everything in the CMakeLists.txt and leave just this part:

include(FetchContent)
FetchContent_Declare(rerun_sdk URL https://github.com/rerun-io/rerun/releases/latest/download/rerun_cpp_sdk.zip)
FetchContent_MakeAvailable(rerun_sdk)
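Once FetchContent has made the SDK available, an executable links against it roughly as follows. The target and source names here (`rerun_example`, `main.cpp`) are placeholders; `rerun_sdk` is the target name the rerun C++ SDK exports.

```cmake
# Hypothetical example target; replace main.cpp with your own source file.
add_executable(rerun_example main.cpp)
target_link_libraries(rerun_example PRIVATE rerun_sdk)
```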

Note: Building arrow_cpp (a dependency of rerun_sdk) can take a very long time and may fail with compilation errors (e.g., abseil build failures with newer GCC versions such as GCC 13). If you encounter build errors with arrow_cpp, comment out all other code in CMakeLists.txt and keep only the rerun SDK part above. The rerun viewer/server itself is installed via pip (see pip3 install rerun-sdk under Python Dependencies below).

Python Dependencies

conda create -n robotic_notes
conda activate robotic_notes
conda install python=3.13
cd /home/$USER/anaconda3/envs/robotic_notes/

Create this soft link:

ln -s /home/$USER/workspace/robotic_notes /home/$USER/anaconda3/envs/robotic_notes/src

Install the Python packages:

pip3 install rerun-sdk
conda install -c conda-forge opencv
pip install graphslam
conda install conda-forge::gtsam
conda install conda-forge::matplotlib
conda install conda-forge::plotly
conda install -c conda-forge jupyterlab
pip install gradio_rerun
pip install ahrs
pip install pyceres
pip install liegroups
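After installing, a quick smoke test confirms that each package imports inside the activated environment. The module names below follow the installs above (note that opencv imports as `cv2` and rerun-sdk as `rerun`):

```shell
# Report which of the installed packages actually import; prints one
# status line per module and never aborts on a missing one.
python3 - <<'EOF'
import importlib
mods = ["rerun", "cv2", "graphslam", "gtsam",
        "matplotlib", "plotly", "ahrs", "pyceres", "liegroups"]
for m in mods:
    try:
        importlib.import_module(m)
        print(f"{m}: OK")
    except Exception as e:  # ImportError or native-library load failures
        print(f"{m}: MISSING ({type(e).__name__})")
EOF
```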

Docker Setup (ROS2 Humble + Gazebo Fortress)

A Dockerfile is provided for a complete development environment with ROS2 Humble and Gazebo Fortress pre-installed. This is the recommended way to work with ROS2 and Gazebo integration.

Build the Docker image:

cd /home/$USER/workspace/robotic_notes
docker build -t ros2-humble-gazebo-fortress .

Run the container:

# Basic run
docker run -it --rm ros2-humble-gazebo-fortress

# With X11 forwarding for GUI applications (rviz2, rqt, Gazebo)
docker run -it --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  ros2-humble-gazebo-fortress

# With GPU support (for NVIDIA GPUs)
docker run -it --rm \
  --gpus all \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  ros2-humble-gazebo-fortress

# With workspace mount
docker run -it --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v $(pwd):/workspace \
  ros2-humble-gazebo-fortress
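One caveat with the X11-forwarding variants above: the host X server must also accept connections from the container. A common (if permissive) fix is to grant access to local Docker clients before running the container; `xhost` and the `local:docker` syntax are standard X11 tooling, but check your distribution's security guidance before using this on a shared machine.

```shell
# Allow local Docker containers to talk to the host X server.
# Revoke afterwards with: xhost -local:docker
if command -v xhost >/dev/null 2>&1; then
  xhost +local:docker || true   # may fail on a headless session
else
  echo "xhost not found (headless session?)"
fi
```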

Delete existing containers:

# List all containers (including stopped ones)
docker ps -a

# Stop a running container (replace CONTAINER_ID or NAME with your container)
docker stop CONTAINER_ID_OR_NAME

# Delete a stopped container
docker rm CONTAINER_ID_OR_NAME

# Force delete a running container
docker rm -f CONTAINER_ID_OR_NAME

# Delete all stopped containers
docker container prune

# Delete all containers (stopped and running) - use with caution!
docker rm -f $(docker ps -aq)

The Dockerfile includes:

  • ROS2 Humble Desktop (full installation)
  • Gazebo Fortress (Gazebo Sim v6)
  • ROS2-Gazebo integration packages (ros-humble-ros-gz, ros-humble-ros-gz-bridge, etc.)
  • Navigation2, SLAM Toolbox, RViz2, and other common ROS2 packages
  • Development tools and dependencies
  • gz command wrapper for backward compatibility (maps gz sim → ign gazebo)

For detailed installation instructions, usage examples, and troubleshooting, see ROS2 Gazebo Integration Documentation.

Ceres Non-linear Least Squares

Lidar and IMU LIO

Radar SLAM

Add Apriltag to loop closure

Refs: 1
