Lab 2.1: Setup Kinect and Point Cloud Library
The goal of this lab is to initialize the Kinect module and use the Point Cloud Library (PCL).
Setup Point Cloud Library (PCL)
When ROS Indigo is installed on Ubuntu, it ships with an old version of PCL, so we have to install the newest one. The details can be found in "How to install PCL".
sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl
sudo apt-get update
sudo apt-get install libpcl-all
Now that the new PCL has been installed, you can find the pcl folder in /usr/include/pcl-1.7.
The original PCL folder in Indigo is called pcl_ros and is located at /opt/ros/indigo/include. We have to replace the old one (/opt/ros/indigo/include/pcl_ros) with the new one (/usr/include/pcl-1.7/pcl).
Open a terminal and type the following command to rename the old folder to pcl_ros_original as a backup:
sudo mv /opt/ros/indigo/include/pcl_ros /opt/ros/indigo/include/pcl_ros_original
To copy the new PCL into the Indigo folder, first cd to the new folder's location (/usr/include/pcl-1.7), then:
sudo cp -r pcl /opt/ros/indigo/include/
Rename the pcl folder to pcl_ros:
sudo mv /opt/ros/indigo/include/pcl /opt/ros/indigo/include/pcl_ros
By doing this, we do not have to modify the compile environment too much.
Setup Kinect using OpenNI
By default, OpenNI, which can be seen as the driver for the Kinect, is installed with Indigo, so we can use it to activate the Kinect module. Open a terminal and type:
roslaunch openni_launch openni.launch
If everything goes well, you should see the following:
process[camera_base_link2-21]: started with pid 
process[camera_base_link3-22]: started with pid 
[ INFO] [1439008450.485436108]: rgb_frame_id = '/camera_rgb_optical_frame'
[ INFO] [1439008450.485506096]: depth_frame_id = '/camera_depth_optical_frame'
[ WARN] [1439008450.492858021]: Camera calibration file /home/stevas/.ros/camera_info/rgb_A00366A13964129A.yaml not found.
[ WARN] [1439008450.492936659]: Using default parameters for RGB camera calibration.
[ WARN] [1439008450.492982691]: Camera calibration file /home/stevas/.ros/camera_info/depth_A00366A13964129A.yaml not found.
[ WARN] [1439008450.493017911]: Using default parameters for IR camera calibration.
If the Kinect is connected but the output still shows:
[ INFO] [1439008653.350860050]: No devices connected…. waiting for devices to be connected
[ INFO] [1439008656.351515535]: No devices connected…. waiting for devices to be connected
then please visit here for how to fix this problem.
Checking the depth view image
To visualize the depth camera output, open a new terminal and run the following command, which starts a node subscribed to the depth topic:
rosrun image_view image_view image:=/camera/depth/image
You should be able to view the depth output published by the robot.
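Conceptually, a raw depth frame is just an array of distances in metres, and a viewer has to scale those distances into grey levels before drawing them. The sketch below (plain C++, no ROS required; the 0.5 m to 5 m range is an assumption, roughly the Kinect's working range) illustrates that scaling step:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Map raw depth readings (in metres) into 0-255 grey levels for display.
// Values outside [min_m, max_m] are clamped to the range first.
std::vector<uint8_t> depth_to_grey(const std::vector<float>& depth,
                                   float min_m, float max_m)
{
  std::vector<uint8_t> grey;
  grey.reserve(depth.size());
  for (float d : depth)
  {
    float clamped = std::min(std::max(d, min_m), max_m);
    grey.push_back(static_cast<uint8_t>(
        255.0f * (clamped - min_m) / (max_m - min_m)));
  }
  return grey;
}
```

A point at the near limit maps to black (0) and one at the far limit to white (255), which matches the bright-far, dark-near look of the depth window.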
Checking the colored view image
To visualize the color camera output, open a new terminal and run the following command, which starts a node subscribed to the color topic:
rosrun image_view image_view image:=/camera/rgb/image_color
You should be able to view the image output published by the robot.
Visualize the Point Cloud of Kinect using rviz
After you launch openni.launch, the depth data and image data are published from the Kinect. To make sure everything works, we can use rviz to visualize those data.
Open a new terminal and type the following command:
rosrun rviz rviz
A GUI will then pop up. We are going to add a display for the data we want to visualize; see the following image for instructions. First click the Add button, then choose the PointCloud2 display.
Next we have to assign the topic and choose the frame id for the data; see the following image for instructions. First, expand the tab for PointCloud2 in the left panel and choose /camera/depth/points as the topic for this display. Second, set the Fixed Frame field under Global Options to camera_depth_frame. We can now see the point cloud from the Kinect in real time.
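For reference, the PointCloud2 message that rviz displays stores its points as one packed byte array: each point occupies point_step bytes, and the x, y, z coordinates are float32 values at byte offsets given by the message's fields array. The sketch below is plain C++ with no ROS dependency; the 16-byte point_step and the 0/4/8 offsets are assumptions matching a common Kinect XYZ layout, not read from a real message:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// A PointCloud2-style packed buffer: each point occupies point_step bytes,
// with float32 x, y, z assumed at byte offsets 0, 4 and 8.
struct PackedCloud
{
  uint32_t point_step = 16;  // xyz (12 bytes) plus padding
  std::vector<uint8_t> data;
};

// Append one point to the buffer, zero-filling the padding bytes.
void append_point(PackedCloud& cloud, float x, float y, float z)
{
  std::vector<uint8_t> pt(cloud.point_step, 0);
  std::memcpy(&pt[0], &x, 4);
  std::memcpy(&pt[4], &y, 4);
  std::memcpy(&pt[8], &z, 4);
  cloud.data.insert(cloud.data.end(), pt.begin(), pt.end());
}

// Read back the z coordinate of the i-th point.
float read_z(const PackedCloud& cloud, std::size_t i)
{
  float z;
  std::memcpy(&z, &cloud.data[i * cloud.point_step + 8], 4);
  return z;
}
```

Keeping this layout in mind helps when a later lab iterates over the raw data field of a cloud instead of converting it to a PCL type.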
Point Cloud manipulation basics
For many reasons, we often need to modify the data we get from the Kinect. To do that, we create a node that subscribes to the data we want to modify and publishes the modified data on a topic we define.
Below is a simple program that only reads the data and publishes it unchanged.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

ros::Publisher pub;

void cloud_cb (const sensor_msgs::PointCloud2ConstPtr& input)
{
  // Do data processing here...
  sensor_msgs::PointCloud2 output = *input;
  // Publish the data.
  pub.publish (output);
}

int main (int argc, char** argv)
{
  // Initialize ROS
  ros::init (argc, argv, "kinect_sub");
  ros::NodeHandle nh;
  // Create a ROS subscriber for the input point cloud
  ros::Subscriber sub = nh.subscribe ("/camera/depth/points", 1, cloud_cb);
  // Create a ROS publisher for the output point cloud
  pub = nh.advertise<sensor_msgs::PointCloud2> ("output", 1);
  // Process callbacks until the node is shut down
  ros::spin ();
}
After running this code, a new topic will be available that publishes the same data.
Run rviz and check the data you are publishing.
Using the previous example, carry out a data transformation of your choice on the Kinect data and visualize the transformed data.
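As a starting point, the transformation can be as simple as a rigid translation applied to every point inside cloud_cb before republishing. The sketch below shows the idea on a plain std::vector of points so it compiles standalone; in the real node you would apply the same per-point update to the cloud's data buffer, or convert the message to a PCL cloud first. The Point struct and the translate helper are illustrative names, not part of ROS or PCL:

```cpp
#include <vector>

// Minimal stand-in for one XYZ point from the Kinect cloud.
struct Point { float x, y, z; };

// Shift every point by (dx, dy, dz). The same loop body is what you
// would run over the subscribed cloud before publishing the result.
std::vector<Point> translate(std::vector<Point> cloud,
                             float dx, float dy, float dz)
{
  for (Point& p : cloud)
  {
    p.x += dx;
    p.y += dy;
    p.z += dz;
  }
  return cloud;
}
```

After wiring a transformation like this into the node, the shifted cloud should appear displaced in rviz relative to the original /camera/depth/points display.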