Lesson 13: Vision system - Xtion + OpenNI

Machine vision today generally relies on either a depth camera or a binocular (stereo) camera, with depth cameras being the more mature solution. The most popular depth cameras at the moment are:

  • Microsoft Kinect
  • Intel RealSense
  • ASUS Xtion pro live

In Diego 1# we use the ASUS Xtion as the vision hardware. The main reason is that the Xtion uses a USB 2.0 interface while the other two products require USB 3.0, and the Raspberry Pi only supports USB 2.0. In addition, the Xtion is relatively compact and needs no external power supply.

For software we use OpenNI, an open source 3D machine vision library that provides a convenient and powerful API. Importantly, it supports the Xtion Pro, so under Linux we can drive the Xtion depth camera through OpenNI.

1. Install OpenNI

1.1. Install the OpenNI ROS packages

sudo apt-get install ros-kinetic-openni-camera
sudo apt-get install ros-kinetic-openni-launch
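
If the installation succeeded, ROS should be able to locate the packages:

rospack find openni_camera
rospack find openni_launch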

1.2. Install the Xtion driver

sudo apt-get install libopenni-sensor-primesense0
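
With the driver installed and the Xtion plugged in, the camera should show up as a PrimeSense/ASUS device on the USB bus (the exact ID string varies by model):

lsusb | grep -i -e primesense -e asus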

1.3. Start the OpenNI node

roslaunch openni_launch openni.launch

After a successful start, the terminal displays the node's startup log.
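
You can verify that the driver is publishing by listing the camera topics; with the default camera namespace, topics such as /camera/rgb/image_raw and /camera/depth/points should appear:

rostopic list | grep /camera
rostopic hz /camera/depth/points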

2. Convert OpenNI point cloud data to laser data

OpenNI provides 3D point cloud data, which we can also transform into 2D laser scan data that can stand in for a laser radar. Conceptually, every point whose height lies between a configured minimum and maximum is projected onto the horizontal plane, the points are binned by bearing angle, and the nearest range in each bin becomes that angle's laser reading. We will use the pointcloud_to_laserscan package (https://github.com/ros-perception/pointcloud_to_laserscan) to perform the conversion.

2.1. Install pointcloud_to_laserscan

Clone the package into ~/catkin_ws/src:

cd ~/catkin_ws/src
git clone https://github.com/ros-perception/pointcloud_to_laserscan

Execute the following command to install the ros-kinetic-tf2-sensor-msgs dependency:

sudo apt-get install ros-kinetic-tf2-sensor-msgs

Compile the workspace:

cd ~/catkin_ws/

catkin_make
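
After compiling, source the workspace so that ROS can find the newly built package (assuming the default devel space):

source ~/catkin_ws/devel/setup.bash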

Modify sample_node.launch as follows:

<?xml version="1.0"?>
<launch>

    <arg name="camera" default="camera" />

    <!-- start the sensor -->
    <include file="$(find openni_launch)/launch/openni.launch">
        <arg name="camera" value="$(arg camera)"/>
    </include>

    <!-- run the pointcloud_to_laserscan node -->
    <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">

        <!-- subscribe to the depth point cloud that openni.launch publishes by default -->
        <remap from="cloud_in" to="$(arg camera)/depth/points"/>
        <remap from="scan" to="$(arg camera)/scan"/>
        <rosparam>
            target_frame: camera_link # Leave disabled to output scan in pointcloud frame
            transform_tolerance: 0.01
            min_height: 0.0
            max_height: 1.0

            angle_min: -1.5708 # -M_PI/2
            angle_max: 1.5708 # M_PI/2
            angle_increment: 0.0087 # M_PI/360.0
            scan_time: 0.3333
            range_min: 0.45
            range_max: 4.0
            use_inf: true

            # Concurrency level: affects the number of point clouds queued for processing and the number of threads used
            # 0 : Detect number of cores
            # 1 : Single threaded
            # 2->inf : Parallelism level
            concurrency_level: 1
        </rosparam>

    </node>

</launch>

2.2. Start the sample node

roslaunch pointcloud_to_laserscan sample_node.launch

After a successful start, the node's log output appears in the terminal.
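
You can also confirm that the conversion node is wired to the camera's point cloud (the topic name follows the remap in the launch file above):

rostopic info /camera/depth/points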

Execute the following command to view the laser data

rostopic echo /camera/scan
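
Each message on /camera/scan is of type sensor_msgs/LaserScan; the parameters set in the launch file (angle limits, increment, range limits) appear directly in its fields. To inspect the message definition:

rosmsg show sensor_msgs/LaserScan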


You can also use rviz to view the laser data; after rviz starts, set the Fixed Frame to camera_link and add a LaserScan display subscribed to the /camera/scan topic.

rosrun rviz rviz


Figure: the depth image.

The above covers the depth camera, its installation, and a simple application that converts its output to laser data. Subsequent lessons will gradually develop other robot applications based on this vision system.
