Nvblox ROS 2 integration for local 3D scene reconstruction and mapping
Nvblox is used in conjunction with ROS 2 to reconstruct and map local 3D scenes.
The integrated system takes sensor data (such as depth images or laser scans) from an environment and uses it to build a three-dimensional model of that area. This is a crucial task in robotics for understanding and navigating the surrounding space.
-Contains ROS 2 packages for 3D reconstruction and cost maps for navigation
3D reconstruction: Building three-dimensional models of the environment using sensor data.
cost maps for navigation: Creating maps that help robots determine the best paths to take while avoiding obstacles.
-“isaac_ros_nvblox” processes depth and pose to reconstruct a 3D scene in real-time and outputs a 2D costmap for Nav2
Processes depth and pose:
Uses information about how far away objects are (depth) and the position/orientation (pose) of the robot or camera to understand the environment.
Reconstruct a 3D scene in real-time:
Quickly creates a 3D model of the surroundings as the robot moves
Outputs a 2D costmap for Nav2:
Produces a 2D map that shows where obstacles are, which is used for navigation (Nav2 is the part of ROS 2 responsible for robot navigation).
-The costmap is used in planning during navigation as a vision-based solution to avoid obstacles
The 2D map (costmap) is essential for the robot’s planning process. It helps the robot see and avoid obstacles, ensuring safe and efficient movement.
-isaac_ros_nvblox is designed to work with depth cameras and/or 3D LiDAR. The package uses GPU acceleration to compute a 3D reconstruction and 2D costmaps using nvblox, the underlying framework-independent C++ library.
GPU acceleration: Uses the processing power of a Graphics Processing Unit (GPU) to speed up calculations.
nvblox: A core library written in C++ that doesn’t depend on any specific framework, used for making these calculations.
Graph that uses isaac_ros_nvblox
(1) Takes Depth image, Color image, and Pose as input
Depth Image:
Type of image where each pixel value represents the distance from the camera to the object in the scene.
Often used in 3D modeling and robotics because they provide crucial information about the physical layout of the environment.
Color Image:
Standard photographic image that captures the visual appearance of the scene in color. It helps in identifying objects and their features based on their colors and textures.
Pose:
Position and orientation of the camera (or the robot carrying the camera) in the environment.
Includes data like where the camera is located, which direction it’s facing, and its tilt or angle.
This information is vital for understanding how the camera is situated relative to the objects it’s capturing in its images.
(2) Computes a 3D scene reconstruction on the GPU
-VSLAM: Pose is computed using visual_slam, or some other pose estimation node
(3) Plugins
Cost map plugin:
Reconstruction is sliced into an output cost map, which is provided through a cost map plugin into Nav2
Footnote
Reconstruction is sliced into an output cost map:
After NvBlox completes the 3D reconstruction of the environment (using depth, color images, and pose), this reconstructed model is then processed to create a “cost map.” A cost map is a special type of map used in robotics. It represents the environment in a way that helps a robot determine where it can and cannot, or should and should not, go. The term “sliced” here means that the 3D model is converted into a format suitable for navigation, usually a 2D map.
Provided through a cost map plugin into Nav2:
This cost map is then made available to Nav2 (the navigation system in ROS 2) through a specific software component, known as a “plugin.” A plugin is like an add-on that provides extra features or functionality. In this case, the cost map plugin allows Nav2 to access and use the cost map for navigation purposes.
Mesh Visualization plugin:
An optional colorized 3D reconstruction is delivered into rviz using the mesh visualization plugin
Colorized 3D Reconstruction:
This is essentially a 3D model of the environment that includes color information, making it visually detailed and realistic.
Delivered into rviz:
Rviz is a visualization tool used in ROS (Robot Operating System) to visualize different types of data from robots, like sensor data, robot model, etc.
Using the mesh visualization plugin:
The 3D model is integrated into rviz using another plugin, specifically designed for this purpose. This plugin is called a “mesh visualization plugin,” and it allows users to see the 3D colorized reconstruction directly within the rviz environment.
(4) Nvblox streams mesh updates to RViz to update the reconstruction in real-time as it is built.
Optional colorized 3D reconstruction is delivered into rviz using the mesh visualization plugin:
Alongside the creation of a cost map for navigation, NvBlox also has the capability to generate a visual representation of the 3D reconstruction.
1. Complete the Developer Environment Setup
The development flow currently supported by Isaac ROS is to build on your target platform.
Set up ROS 2 Humble on your host machine with the Isaac ROS Buildfarm.
Set up dependencies with rosdep, OR use the Isaac ROS Dev Docker-based development environment through run_dev.sh.
(It is recommended that you set up your developer environment with Isaac ROS Dev Docker.)
Footnote
ROS 2 Humble: Version of the Robot Operating System 2 (ROS 2), a popular software framework used in robotics
Host Machine: Your own computer, where you do your development work
Isaac ROS Buildfarm: A package repository provided by Isaac ROS that helps you install and configure ROS 2 Humble on your computer.
Setup dependencies with rosdep: “rosdep” is a tool in ROS that helps you install software dependencies. Dependencies are additional software packages that Isaac ROS needs to function properly.
Docker-based development environment:
Docker is a tool that allows you to create isolated environments, called containers, where you can run software without affecting the rest of your system. This sentence suggests that Isaac ROS provides a Docker container where you can develop your robotics software.
run_dev.sh:
A script (a set of automated commands) that you can run to set up or start this Docker-based development environment. By running this script, you can work in a controlled, consistent environment that is specifically configured for Isaac ROS development.
On x86_64 platforms:
Reference:https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html
Installing with apt
(1) Configure the production repository
These commands add the NVIDIA Container Toolkit repository securely to the system's package manager. After they are executed, the system will be able to install and update the NVIDIA Container Toolkit using apt commands like sudo apt update and sudo apt install nvidia-container-toolkit.
Download NVIDIA GPG key for NVIDIA container repository -> Add NVIDIA GPG Key -> Download NVIDIA Container Toolkit Repository List -> Modify and Add the Repository -> Save the Repository List
Optionally, configure the repository to use experimental packages:
Uncomment lines in the nvidia-container-toolkit.list file that contain the word “experimental”. In the context of software repositories, uncommenting a line would activate the repository specified on that line, allowing you to install packages from it.
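A sketch of the corresponding shell commands, taken from the NVIDIA Container Toolkit documentation (the key and list URLs may change between toolkit releases):
  curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
    sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
  curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
  # Optional: enable experimental packages by uncommenting the "experimental" lines
  sudo sed -i -e '/experimental/ s/^#//g' /etc/apt/sources.list.d/nvidia-container-toolkit.list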
(2) Update the package list from the repository
Update the list of available packages and their versions, but it does not install or upgrade any packages.
(3) Install the NVIDIA Container Toolkit packages
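The corresponding apt commands:
  sudo apt-get update
  sudo apt-get install -y nvidia-container-toolkit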
Prerequisites
You installed a supported container engine (Docker, Containerd, CRI-O, Podman)
You installed the NVIDIA Container Toolkit.
Docker
(1) Uninstall old versions
Uninstall all conflicting packages
(2) Uninstall Docker
Uninstall the Docker Engine, CLI, containerd, and Docker Compose packages:
Images, containers, volumes, or custom configuration files on your host aren’t automatically removed. Delete all images, containers, and volumes.
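A sketch of the uninstall commands from the Docker documentation (the exact package list depends on what was previously installed):
  # Remove conflicting/unofficial packages
  for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done
  # Remove an existing Docker Engine installation
  sudo apt-get purge docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  # Optionally delete all images, containers, and volumes
  sudo rm -rf /var/lib/docker
  sudo rm -rf /var/lib/containerd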
(3) Install using the apt repository
(3)-1 Set up Docker’s apt repository
Add Docker’s official GPG key
Add the repository to Apt sources
(3)-2 Install the Docker packages
(3)-3 Verify that the Docker Engine installation is successful by running the hello-world image.
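The commands, roughly as given in the Docker documentation for Ubuntu:
  # (3)-1 Add Docker's official GPG key and the apt repository
  sudo apt-get update
  sudo apt-get install ca-certificates curl gnupg
  sudo install -m 0755 -d /etc/apt/keyrings
  curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  sudo chmod a+r /etc/apt/keyrings/docker.gpg
  echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
    $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
    sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  sudo apt-get update
  # (3)-2 Install the Docker packages
  sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  # (3)-3 Verify the installation
  sudo docker run hello-world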
Reference: https://docs.docker.com/engine/install/ubuntu/
containerd
Download the latest version of containerd from GitHub and extract the files to the /usr/local/ directory
Then, download the systemd service file and set it up so that you can manage the service via systemd.
Finally, start the containerd service using the below command.
Then, check the status of the containerd service.
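A sketch of these steps (the release version below is only an example; check the containerd releases page for the current one, and note that runc and the CNI plugins are installed separately):
  wget https://github.com/containerd/containerd/releases/download/v1.7.13/containerd-1.7.13-linux-amd64.tar.gz
  sudo tar Cxzvf /usr/local containerd-1.7.13-linux-amd64.tar.gz
  sudo wget -O /etc/systemd/system/containerd.service \
    https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
  sudo systemctl daemon-reload
  sudo systemctl enable --now containerd
  systemctl status containerd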
Conflicting packages (troubleshooting):
Remove the configuration files associated with the previously uninstalled containerd package.
Reference:
https://www.itzgeek.com/how-tos/linux/ubuntu-how-tos/install-containerd-on-ubuntu-22-04.html
cri-o
(1) Update the system
(2) Add the CRI-O Kubic repository
Add the Kubic repository, which hosts binary packages for Debian-based systems.
(3) Install CRI-O on Ubuntu 20.04
Update the package list and then install CRI-O.
Enable the CRI-O service so that it starts when Ubuntu does.
Start the CRI-O service.
Check the CRI-O service status.
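A sketch of the commands, following the cited guide (the Kubic repository layout and the CRI-O version are examples and may have changed since the guide was written):
  sudo apt update && sudo apt -y upgrade
  export OS=xUbuntu_20.04
  export CRIO_VERSION=1.24   # example version
  echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" | \
    sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
  echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/ /" | \
    sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list
  curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | sudo apt-key add -
  curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS/Release.key | sudo apt-key add -
  sudo apt update
  sudo apt install -y cri-o cri-o-runc
  sudo systemctl enable crio
  sudo systemctl start crio
  systemctl status crio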
Reference:
https://computingforgeeks.com/install-cri-o-container-runtime-on-ubuntu-linux/
podman
(1) Update the package index on your system
(2) Install the Podman dependencies
(3) Add the libcontainers repository
(4) Run sudo apt-get update
(5) Install Podman
(6) Verify that Podman is installed correctly
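A sketch following the cited article (the libcontainers/Kubic repository details are examples; on newer Ubuntu releases Podman is available directly via apt):
  sudo apt-get update -y
  sudo apt-get install -y curl wget gnupg2
  source /etc/os-release
  echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /" | \
    sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
  curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key" | sudo apt-key add -
  sudo apt-get update
  sudo apt-get install -y podman
  podman --version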
Reference:
https://rameshponnusamy.medium.com/install-podman-in-ubuntu-20-04-442649400b3f
Configuring Docker
Configure the container runtime by using the nvidia-ctk command & Restart the Docker daemon
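The commands from the NVIDIA Container Toolkit documentation:
  sudo nvidia-ctk runtime configure --runtime=docker
  sudo systemctl restart docker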
Configuring containerd
Configure the container runtime by using the nvidia-ctk command & Restart containerd
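The analogous commands for containerd:
  sudo nvidia-ctk runtime configure --runtime=containerd
  sudo systemctl restart containerd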
Configuring CRI-O
Configuring Podman
NVIDIA recommends using CDI for accessing NVIDIA devices in containers
Reference: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html
Support for Container Device Interface
About the Container Device Interface
CDI is an open specification for container runtimes that abstracts what access to a device, such as an NVIDIA GPU, means, and standardizes access across container runtimes. Popular container runtimes can read and process the specification to ensure that a device is available in a container. CDI simplifies adding support for devices such as NVIDIA GPUs because the specification is applicable to all container runtimes that support CDI.
CDI also improves the compatibility of the NVIDIA container stack with certain features such as rootless containers.
Generating a CDI specification
Prerequisites
You installed either the NVIDIA Container Toolkit or the nvidia-container-toolkit-base package
You installed an NVIDIA GPU Driver
Confirmed that a GPU driver is already installed (e.g., via nvidia-smi)
Procedure
(1) Generate the CDI specification file
(2) Check the names of the generated devices
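The corresponding nvidia-ctk commands:
  sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
  nvidia-ctk cdi list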
Running a Workload with CDI
Potential conflict that can arise when using CDI in conjunction with the NVIDIA Container Runtime hook
Potential Conflict with NVIDIA Container Runtime Hook:
The NVIDIA Container Runtime includes a hook (oci-nvidia-hook) that automatically sets up the container environment to access NVIDIA GPUs.
If you’re using CDI to inject NVIDIA devices into your containers, and the oci-nvidia-hook is also present, they can interfere with each other because they are trying to manage the same devices.
Resolving the Conflict:
To avoid this conflict, you should remove or disable the oci-nvidia-hook.json file if you plan to use CDI.
Additionally, you should not set the NVIDIA_VISIBLE_DEVICES environment variable in your containers, as this is used by the NVIDIA hook to control GPU visibility.
Container Engine or CLI Support:
Your container engine or command-line interface (CLI) tool must support CDI for you to use it. As of version 4.1.0, Podman has included support for specifying CDI devices directly in the --device argument.
Running a Container with CDI:
With a CDI specification generated, you can run a container with access to NVIDIA GPUs by specifying the devices using the --device flag with the appropriate CDI device identifier.
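For example, with Podman (assuming the nvidia.com/gpu=all device name reported by nvidia-ctk cdi list):
  podman run --rm --device nvidia.com/gpu=all --security-opt=label=disable ubuntu nvidia-smi -L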
Ignore CDI => https://forums.developer.nvidia.com/t/ubuntu-20-04-installation-nvidia-tao-platform-error/245708/8
Using CDI with Non-CDI-Enabled Runtimes:
To support runtimes that do not natively support CDI, you can configure the NVIDIA Container Runtime in a cdi mode.
In this mode, the NVIDIA Container Runtime does not inject the NVIDIA Container Runtime Hook into the incoming OCI runtime specification. Instead, the runtime performs the injection of the requested CDI devices.
The NVIDIA Container Runtime automatically uses cdi mode if you request devices by their CDI device names.
Using Docker as an example of a non-CDI-enabled runtime, the following command uses CDI to inject the requested devices into the container:
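(Example from the NVIDIA CDI documentation:)
  docker run --rm -ti --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=all \
    ubuntu nvidia-smi -L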
Setting the CDI Mode Explicitly
Force CDI mode by explicitly setting the nvidia-container-runtime.mode option in the NVIDIA Container Runtime config to cdi
In this case, the NVIDIA_VISIBLE_DEVICES environment variable is still used to select the devices to inject into the container, but the nvidia-container-runtime.modes.cdi.default-kind (with a default value of nvidia.com/gpu) is used to construct a fully-qualified CDI device name only when you specify a device index such as all, 0, or 1, and so on.
If CDI mode is explicitly enabled, the following sample command has the same effect as specifying NVIDIA_VISIBLE_DEVICES=nvidia.com/gpu=all.
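A sketch of forcing cdi mode and the equivalent run command (the nvidia-ctk config subcommand assumes a recent toolkit version; you can also edit /etc/nvidia-container-runtime/config.toml and set mode = "cdi" by hand):
  sudo nvidia-ctk config --in-place --set nvidia-container-runtime.mode=cdi
  docker run --rm -ti --runtime=nvidia \
    -e NVIDIA_VISIBLE_DEVICES=all \
    ubuntu nvidia-smi -L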
Next Step
Install an NVIDIA GPU Driver if you do not already have one installed
The driver can be installed by using the package manager for your distribution, but other installation methods, such as downloading a .run file installer, are available.
=>Already have one installed
Reference:
https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/index.html
The ISAAC_ROS_WS environment variable is expected to refer to this ROS 2 workspace directory in the steps that follow.
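A sketch of creating the workspace and exporting the variable, following the Isaac ROS getting-started docs (the workspace path is the conventional one; adjust it to your setup):
  mkdir -p ~/workspaces/isaac_ros-dev/src
  echo "export ISAAC_ROS_WS=${HOME}/workspaces/isaac_ros-dev/" >> ~/.bashrc
  source ~/.bashrc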
2. Clone isaac_ros_common and this repository under ${ISAAC_ROS_WS}/src
Remove the isaac_ros_common directory, and then attempt the clone again
Remove the isaac_ros_nvblox directory first and then run the clone command again
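For example (repository URLs as on the NVIDIA-ISAAC-ROS GitHub organization; --recurse-submodules pulls the vendored nvblox core library in releases that ship it as a submodule):
  cd ${ISAAC_ROS_WS}/src
  git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common.git
  git clone --recurse-submodules https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox.git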
3. Pull down a ROS Bag of sample data
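A hedged sketch, assuming the sample bag ships with the repository via Git LFS at the path below (the path may differ between releases; check the quickstart):
  cd ${ISAAC_ROS_WS}/src/isaac_ros_nvblox
  git lfs pull -X "" -I "nvblox_ros/test/test_cases/rosbags/nvblox_pol"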
4. Launch the Docker container using the run_dev.sh script:
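For example:
  cd ${ISAAC_ROS_WS}/src/isaac_ros_common && ./scripts/run_dev.sh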
Error:
Solution
Create the docker group and add your user to the docker group.
Log out and log back in so that your group membership is re-evaluated, or run the newgrp command shown below to activate the changes to groups.
Verify that you can run docker commands without sudo.
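The commands from the Docker post-installation guide:
  sudo groupadd docker
  sudo usermod -aG docker $USER
  newgrp docker           # or log out and back in
  docker run hello-world  # verify docker works without sudo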
Reference:
https://github.com/git-lfs/git-lfs/issues/3964
Manage Docker as a non-root user:
https://docs.docker.com/engine/install/linux-postinstall/
5. Inside the container, install package-specific dependencies via rosdep
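A minimal sketch, assuming the container's default workspace path (the official quickstart may add --skip-keys for packages already provided in the image):
  cd /workspaces/isaac_ros-dev
  rosdep update
  rosdep install -i -r --from-paths src --rosdistro humble -y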
6. Build and source the workspace:
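For example:
  cd /workspaces/isaac_ros-dev
  colcon build --symlink-install
  source install/setup.bash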
Error:
Solution:
https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_visual_slam/issues/53
https://forums.developer.nvidia.com/t/nvblox-colcon-build-error/247962/2
Correct Version:
7. (Optional) Run tests to verify complete and correct installation:
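A minimal sketch using colcon's test tooling:
  colcon test
  colcon test-result --verbose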
8. In the current terminal inside the Docker container, run the launch file for Nvblox with Nav2
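A hedged sketch; the package and launch-file names below are illustrative and not confirmed against a specific release, so check the isaac_ros_nvblox quickstart for the exact launch file:
  source /workspaces/isaac_ros-dev/install/setup.bash
  ros2 launch nvblox_examples_bringup nvblox_base.launch.py   # illustrative name; verify against the quickstart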
9. Open a second terminal inside the Docker container
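Attaching a second terminal reuses the same run_dev.sh script, which connects to the already-running container:
  cd ${ISAAC_ROS_WS}/src/isaac_ros_common && ./scripts/run_dev.sh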
10. In the second terminal, play the ROS Bag
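A hedged sketch, assuming the sample bag pulled in step 3 sits at the path below (adjust to wherever your bag actually is):
  source /workspaces/isaac_ros-dev/install/setup.bash
  ros2 bag play ./src/isaac_ros_nvblox/nvblox_ros/test/test_cases/rosbags/nvblox_pol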
Result
Reference:
https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nvblox?tab=readme-ov-file
https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_nitros
https://nvidia-isaac-ros.github.io/getting_started/dev_env_setup.html