Isaac ROS Nvblox

Nvblox ROS 2 integration for local 3D scene reconstruction and mapping.
Nvblox is used in conjunction with ROS 2 to reconstruct and map local 3D scenes.
The integrated system takes sensor data (such as images or laser scans) from an environment and uses it to build a three-dimensional model of that area. This is a crucial task in robotics for understanding and navigating the surrounding space.

Features

- Contains ROS 2 packages for 3D reconstruction and cost maps for navigation.
3D reconstruction: Building three-dimensional models of the environment using sensor data.
Cost maps for navigation: Creating maps that help robots determine the best paths to take while avoiding obstacles.
- "isaac_ros_nvblox" processes depth and pose to reconstruct a 3D scene in real time and outputs a 2D costmap for Nav2.
Processes depth and pose:
Uses information about how far away objects are (depth) and the position/orientation (pose) of the robot or camera to understand the environment.
Reconstructs a 3D scene in real time:
Quickly creates a 3D model of the surroundings as the robot moves.
Outputs a 2D costmap for Nav2:
Produces a 2D map that shows where obstacles are, which is used for navigation (Nav2 is the part of ROS 2 for robot navigation).
- The costmap is used in planning during navigation as a vision-based solution to avoid obstacles.
The 2D map (costmap) is essential for the robot's planning process. It helps the robot see and avoid obstacles, ensuring safe and efficient movement.
- isaac_ros_nvblox is designed to work with depth cameras and/or 3D LiDAR. The package uses GPU acceleration to compute a 3D reconstruction and 2D costmaps using nvblox, the underlying framework-independent C++ library.
GPU acceleration: Uses the processing power of a Graphics Processing Unit (GPU) to speed up calculations.
nvblox: A core library written in C++ that doesn't depend on any specific framework, used for these calculations.
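With a working nvblox graph running, the outputs described above can be inspected from the command line. The topic names below are assumptions based on the public nvblox examples and may differ in your launch configuration:

```shell
# List the topics the nvblox node publishes (names here are assumptions;
# verify against your own launch files)
ros2 topic list | grep nvblox

# Inspect one message of the 2D costmap slice consumed by Nav2
ros2 topic echo /nvblox_node/map_slice --once

# Check publishers/subscribers of the colorized mesh streamed to RViz
ros2 topic info /nvblox_node/mesh
```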

How NvBlox Operates

Nvblox1: Graph that uses isaac_ros_nvblox
(1) Takes a depth image, a color image, and a pose as input
Depth image:
A type of image where each pixel value represents the distance from the camera to the object in the scene.
Depth images are often used in 3D modeling and robotics because they provide crucial information about the physical layout of the environment.
Color image:
A standard photographic image that captures the visual appearance of the scene in color. It helps in identifying objects and their features based on their colors and textures.
Pose:
The position and orientation of the camera (or the robot carrying the camera) in the environment.
Includes data like where the camera is located, which direction it's facing, and its tilt or angle.
This information is vital for understanding how the camera is situated relative to the objects it's capturing in its images.
(2) Computes a 3D scene reconstruction on the GPU
- VSLAM: The pose is computed using visual_slam, or some other pose estimation node.
(3) Plugins
Cost map plugin:
The reconstruction is sliced into an output cost map, which is provided through a cost map plugin into Nav2.
Footnote
Reconstruction is sliced into an output cost map:
After NvBlox completes the 3D reconstruction of the environment (using depth, color images, and pose), this reconstructed model is then processed to create a “cost map.” A cost map is a special type of map used in robotics. It represents the environment in a way that helps a robot determine where it can and cannot, or should and should not, go. The term “sliced” here means that the 3D model is converted into a format suitable for navigation, usually a 2D map.
Provided through a cost map plugin into Nav2:
This cost map is then made available to Nav2 (the navigation system in ROS 2) through a specific software component, known as a “plugin.” A plugin is like an add-on that provides extra features or functionality. In this case, the cost map plugin allows Nav2 to access and use the cost map for navigation purposes.
Mesh visualization plugin:
An optional colorized 3D reconstruction is delivered into RViz using the mesh visualization plugin.
Colorized 3D Reconstruction:
This is essentially a 3D model of the environment that includes color information, making it visually detailed and realistic.
Delivered into rviz:
Rviz is a visualization tool used in ROS (Robot Operating System) to visualize different types of data from robots, like sensor data, robot model, etc.
Using the mesh visualization plugin:
The 3D model is integrated into rviz using another plugin, specifically designed for this purpose. This plugin is called a “mesh visualization plugin,” and it allows users to see the 3D colorized reconstruction directly within the rviz environment.
(4) Nvblox streams mesh updates to RViz to update the reconstruction in real time as it is built.
Optional colorized 3D reconstruction is delivered into rviz using the mesh visualization plugin:
Alongside the creation of a cost map for navigation, NvBlox also has the capability to generate a visual representation of the 3D reconstruction.

Quick Start

(1) Complete the Developer Environment Setup
The development flow currently supported by Isaac ROS is to build on your target platform.
Set up ROS 2 Humble on your host machine with the Isaac ROS Buildfarm.
Set up dependencies with rosdep, OR use the Isaac ROS Dev Docker-based development environment through run_dev.sh.
(It is recommended that you set up your developer environment with Isaac ROS Dev Docker.)
Footnote
ROS 2 Humble: Version of the Robot Operating System 2 (ROS 2), a popular software framework used in robotics
Host Machine: Your own computer, where you do your development work.
Isaac ROS Buildfarm: Specific setup or tool provided by Isaac ROS to help you install and configure ROS 2 Humble on your computer.
Setup dependencies with rosdep: “rosdep” is a tool in ROS that helps you install software dependencies. Dependencies are additional software packages that Isaac ROS needs to function properly.
Docker-based development environment:
Docker is a tool that allows you to create isolated environments, called containers, where you can run software without affecting the rest of your system. This sentence suggests that Isaac ROS provides a Docker container where you can develop your robotics software.
run_dev.sh:
A script (a set of automated commands) that you can run to set up or start this Docker-based development environment. By running this script, you can work in a controlled, consistent environment that's specifically configured for Isaac ROS development.
On x86_64 platforms:
Reference: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html

Install nvidia-container-toolkit

Installing with apt
(1) Configure the production repository
Nvblox6
These commands securely add the NVIDIA Container Toolkit repository to the system's package manager. After they are executed, the system can install and update the NVIDIA Container Toolkit using apt commands such as sudo apt update and sudo apt install nvidia-container-toolkit.
Download the NVIDIA GPG key for the NVIDIA container repository -> Add the NVIDIA GPG key -> Download the NVIDIA Container Toolkit repository list -> Modify and add the repository -> Save the repository list
Optionally, configure the repository to use experimental packages:
Nvblox7
Uncomment the lines in the nvidia-container-toolkit.list file that contain the word "experimental". In the context of software repositories, uncommenting a line activates the repository specified on that line, allowing you to install packages from it.
(2) Update the package list from the repository
Nvblox8
This updates the list of available packages and their versions; it does not install or upgrade any packages.
(3) Install the NVIDIA Container Toolkit packages
Nvblox9
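The three steps above can be sketched as follows; these commands follow NVIDIA's published installation guide for the Container Toolkit, but check the current docs for the exact repository URL:

```shell
# (1) Add the NVIDIA Container Toolkit repository and its GPG key
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# (2) Refresh the package list
sudo apt-get update

# (3) Install the toolkit
sudo apt-get install -y nvidia-container-toolkit
```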

Configure nvidia-container-toolkit for Docker

Prerequisites
You installed a supported container engine (Docker, Containerd, CRI-O, Podman)
You installed the NVIDIA Container Toolkit.
Docker
(1) Uninstall old versions
Uninstall all conflicting packages
Nvblox10
(2) Uninstall Docker
Nvblox11
Uninstall the Docker Engine, CLI, containerd, and Docker Compose packages:
Nvblox12
Images, containers, volumes, or custom configuration files on your host aren’t automatically removed. Delete all images, containers, and volumes.
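A sketch of the uninstall steps above, per the official Docker uninstall instructions for Ubuntu:

```shell
# Remove distro-packaged versions that conflict with Docker's official packages
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do
  sudo apt-get remove -y $pkg
done

# Remove Docker Engine, CLI, containerd, and Docker Compose packages
sudo apt-get purge -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Images, containers, and volumes are not removed automatically; delete them manually
sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd
```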
(3) Install using the apt repository
(3)-1 Set up Docker's apt repository
Nvblox13
Nvblox14
Nvblox15
Nvblox16
Nvblox17
Add Docker’s official GPG key
Nvblox18
Nvblox19
Add the repository to Apt sources
(3)-2 Install the Docker packages
Nvblox20
(3)-3 Verify that the Docker Engine installation is successful by running the hello-world image.
Nvblox21
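Steps (3)-1 through (3)-3 above can be sketched with the commands from the official Docker installation docs for Ubuntu:

```shell
# (3)-1 Set up Docker's apt repository and add Docker's official GPG key
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# (3)-2 Install the Docker packages
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# (3)-3 Verify the installation
sudo docker run hello-world
```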
Reference: https://docs.docker.com/engine/install/ubuntu/
containerd
Nvblox22
Nvblox23
Download the latest version of containerd from GitHub and extract the files to the /usr/local/ directory
Nvblox24
Nvblox25
Then, download the systemd service file and set it up so that you can manage the service via systemd.
Nvblox26
Finally, start the containerd service using the below command.
Nvblox27
Then, check the status of the containerd service.
Nvblox28
If there are conflicting packages at this point, remove them:
Nvblox28
Nvblox29
Remove the configuration files associated with the uninstalled containerd package.
Reference:
https://www.itzgeek.com/how-tos/linux/ubuntu-how-tos/install-containerd-on-ubuntu-22-04.html
cri-o
(1) Update the system
Nvblox30
(2) Add the CRI-O Kubic repository
Nvblox32
Add the Kubic repository, which hosts binary packages for Debian-based systems.
(3) Install CRI-O on Ubuntu 20.04
Nvblox33
Nvblox34
Update the package list and then install CRI-O.
Nvblox35
Enable the CRI-O service so that it starts when Ubuntu does.
Nvblox36
Start the CRI-O service.
Nvblox37
Check the service status.
Reference:
https://computingforgeeks.com/install-cri-o-container-runtime-on-ubuntu-linux/
podman
(1) Update the package index on your system
Nvblox38
(2) Install the Podman dependencies
Nvblox39
(3) Add the libcontainers repository
Nvblox40
(4) Run sudo apt-get update
Nvblox41
(5) Install Podman
Nvblox42
(6) Verify that Podman is installed correctly
Nvblox43
Reference:
https://rameshponnusamy.medium.com/install-podman-in-ubuntu-20-04-442649400b3f

Configuring Docker
Configure the container runtime by using the nvidia-ctk command, then restart the Docker daemon.
Nvblox44
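The step above likely corresponds to the following nvidia-ctk commands from the NVIDIA Container Toolkit docs:

```shell
# Register the NVIDIA runtime in /etc/docker/daemon.json, then restart Docker
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```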
Configuring containerd
Configure the container runtime by using the nvidia-ctk command, then restart containerd.
Nvblox45
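For containerd, the equivalent sketch (again per the NVIDIA Container Toolkit docs):

```shell
# Update /etc/containerd/config.toml to use the NVIDIA runtime, then restart containerd
sudo nvidia-ctk runtime configure --runtime=containerd
sudo systemctl restart containerd
```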
Configuring CRI-O
Nvblox46
Configuring Podman
NVIDIA recommends using CDI for accessing NVIDIA devices in containers.
Reference: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html
Support for Container Device Interface
About the Container Device Interface
CDI is an open specification for container runtimes that abstracts what access to a device, such as an NVIDIA GPU, means, and standardizes access across container runtimes. Popular container runtimes can read and process the specification to ensure that a device is available in a container. CDI simplifies adding support for devices such as NVIDIA GPUs because the specification is applicable to all container runtimes that support CDI.
CDI also improves the compatibility of the NVIDIA container stack with certain features such as rootless containers.
Generating a CDI specification
Prerequisites
You installed either the NVIDIA Container Toolkit, or you installed the nvidia-container-toolkit-base package.
You installed an NVIDIA GPU driver.
Confirm the installed GPU driver:
Nvblox47
Procedure
(1) Generate the CDI specification file
Nvblox48
(2) Check the names of the generated devices
Nvblox49
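Per the NVIDIA CDI support docs, the two steps above likely amount to:

```shell
# (1) Generate the CDI specification for the GPUs on this host
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# (2) List the device names the specification defines
#     (e.g. nvidia.com/gpu=0, nvidia.com/gpu=all)
nvidia-ctk cdi list
```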
Running a Workload with CDI
Potential conflict that can arise when using CDI in conjunction with the NVIDIA Container Runtime hook
Potential Conflict with NVIDIA Container Runtime Hook:
The NVIDIA Container Runtime includes a hook (oci-nvidia-hook) that automatically sets up the container environment to access NVIDIA GPUs.
If you’re using CDI to inject NVIDIA devices into your containers, and the oci-nvidia-hook is also present, they can interfere with each other because they are trying to manage the same devices.
Resolving the Conflict:
To avoid this conflict, you should remove or disable the oci-nvidia-hook.json file if you plan to use CDI.
Additionally, you should not set the NVIDIA_VISIBLE_DEVICES environment variable in your containers, as this is used by the NVIDIA hook to control GPU visibility.
Container Engine or CLI Support:
Your container engine or command-line interface (CLI) tool must support CDI for you to use it. As of version 4.1.0, Podman has included support for specifying CDI devices directly in the --device argument.
Running a Container with CDI:
With a CDI specification generated, you can run a container with access to NVIDIA GPUs by specifying the devices using the --device flag with the appropriate CDI device identifier.
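A sketch of such a Podman invocation, following the NVIDIA CDI docs (the security option is only needed on SELinux-enabled systems):

```shell
# Run a container with all GPUs injected via CDI (requires Podman >= 4.1.0)
podman run --rm \
  --device nvidia.com/gpu=all \
  --security-opt label=disable \
  ubuntu nvidia-smi -L
```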
Ignore CDI => https://forums.developer.nvidia.com/t/ubuntu-20-04-installation-nvidia-tao-platform-error/245708/8
Using CDI with Non-CDI-Enabled Runtimes:
To support runtimes that do not natively support CDI, you can configure the NVIDIA Container Runtime in a cdi mode. In this mode, the NVIDIA Container Runtime does not inject the NVIDIA Container Runtime Hook into the incoming OCI runtime specification. Instead, the runtime performs the injection of the requested CDI devices.
The NVIDIA Container Runtime automatically uses cdi mode if you request devices by their CDI device names.
Using Docker as an example of a non-CDI-enabled runtime, the following command uses CDI to inject the requested devices into the container:
Nvblox50
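The Docker command referenced above likely looks like the following, per the NVIDIA Container Toolkit CDI docs:

```shell
# Requesting devices by their CDI names makes the NVIDIA Container Runtime
# switch to cdi mode automatically
docker run --rm -ti --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=all \
  ubuntu nvidia-smi -L
```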
Setting the CDI Mode Explicitly
Nvblox51
Force CDI mode by explicitly setting the nvidia-container-runtime.mode option in the NVIDIA Container Runtime config to cdi.
In this case, the NVIDIA_VISIBLE_DEVICES environment variable is still used to select the devices to inject into the container, but the nvidia-container-runtime.modes.cdi.default-kind (with a default value of nvidia.com/gpu) is used to construct a fully-qualified CDI device name only when you specify a device index such as all, 0, or 1, and so on.
Nvblox52
If CDI mode is explicitly enabled, the following sample command has the same effect as specifying NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=all.
Next Step
Install an NVIDIA GPU driver if you do not already have one installed.
The driver can be installed by using the package manager for your distribution, but other installation methods, such as downloading a .run file installer, are available.
=> Already have one installed
Reference:
https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/index.html

Restart Docker

Nvblox53

Install Git LFS to pull down all large files

Nvblox54
Nvblox55
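The Git LFS setup above typically amounts to:

```shell
# Install Git LFS and enable it for the current user
# (--skip-repo avoids touching any particular repository's hooks)
sudo apt-get install -y git-lfs
git lfs install --skip-repo
```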

Create a ROS 2 workspace for experimenting with Isaac ROS:

Nvblox56
The ISAAC_ROS_WS environment variable is expected to refer to this ROS 2 workspace directory in later steps.
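A sketch of the workspace creation, using the path from the Isaac ROS getting-started guide (adjust the location to taste):

```shell
# Create the workspace and export ISAAC_ROS_WS for later steps
mkdir -p ~/workspaces/isaac_ros-dev/src
echo "export ISAAC_ROS_WS=${HOME}/workspaces/isaac_ros-dev/" >> ~/.bashrc
source ~/.bashrc
```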
2. Clone isaac_ros_common and this repository under ${ISAAC_ROS_WS}/src
Nvblox57
Nvblox58
Remove the isaac_ros_common directory, and then attempt the clone again.
Nvblox59
Remove the isaac_ros_nvblox directory first, and then run the clone command again.
Nvblox60
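The clone step likely looks like the following (the --recurse-submodules flag is an assumption, since isaac_ros_nvblox vendors the nvblox core library as a submodule):

```shell
cd ${ISAAC_ROS_WS}/src
git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
git clone --recurse-submodules https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox.git
```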
3. Pull down a ROS bag of sample data
Nvblox61
4. Launch the Docker container using the run_dev.sh script:
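Per the Isaac ROS developer environment guide, the launch step above typically amounts to (adjust the path if your checkout lives elsewhere):

```shell
cd ${ISAAC_ROS_WS}/src/isaac_ros_common
./scripts/run_dev.sh
```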
Error:
Nvblox62
Solution
Nvblox63
Create the docker group and add your user to the docker group.
Nvblox64
Log out and log back in so that your group membership is re-evaluated, or run the following command to activate the changes to groups:
Nvblox65
Verify that you can run docker commands without sudo:
Nvblox66
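A sketch of the docker-group fix above, following Docker's Linux post-install docs:

```shell
# Create the docker group (may already exist) and add the current user
sudo groupadd docker
sudo usermod -aG docker $USER

# Apply the new group membership in the current shell
# (alternatively, log out and back in)
newgrp docker

# Verify docker works without sudo
docker run hello-world
```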
Reference:
https://github.com/git-lfs/git-lfs/issues/3964
Manage Docker as a non-root user:
https://docs.docker.com/engine/install/linux-postinstall/
5. Inside the container, install package-specific dependencies via rosdep
Nvblox67
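The rosdep step likely resembles the following; the workspace path inside the container is an assumption based on the run_dev.sh defaults:

```shell
# Inside the container, at the workspace root
cd /workspaces/isaac_ros-dev
rosdep update
rosdep install -i -r --from-paths src --rosdistro humble -y
```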
6. Build and source the workspace:
Nvblox68
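The build-and-source step is typically the standard colcon workflow (workspace path assumed as above):

```shell
cd /workspaces/isaac_ros-dev
colcon build --symlink-install
source install/setup.bash
```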
Error:
Nvblox69
Solution:
Nvblox71
Nvblox72
Nvblox70
https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam/issues/53
https://forums.developer.nvidia.com/t/nvblox-colcon-build-error/247962/2
Correct Version:
Nvblox78
7. (Optional) Run tests to verify complete and correct installation:
Nvblox79
8. In a terminal inside the Docker container, run the launch file for Nvblox with Nav2:
Nvblox81
9. Open a second terminal inside the Docker container:
Nvblox82
10. In the second terminal, play the ROS bag:
Nvblox80
Result
Nvblox77
Nvblox83

Reference:
https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox?tab=readme-ov-file
https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nitros
https://nvidia-isaac-ros.github.io/getting_started/dev_env_setup.html