Lesson 1: Hardware BOM

The Diego 3-axis robotic arm uses a three-axis palletizer structure, with all three axes driven by stepper motors. The assembled arm is shown in the figure below.

The following table gives the detailed bill of materials for the Diego 3-axis arm, including each part's photo, name, quantity, features, and its role in the arm.

Name: Arduino UNO
Count: 1
Features:
  • Processor: ATmega328
  • Operating voltage: 5 V
  • Input voltage (recommended): 7-12 V
  • Input voltage (limit): 6-20 V
  • Digital I/O pins: 14 (6 of which provide PWM output)
  • Analog input pins: 6
  • DC current per I/O pin: 40 mA
  • DC current for the 3.3 V pin: 50 mA
  • Flash memory: 32 KB (ATmega328), of which 0.5 KB is used by the bootloader
  • SRAM: 2 KB (ATmega328)
  • EEPROM: 1 KB (ATmega328)
  • Clock speed: 16 MHz
Role in the arm: main control board

Name: Arduino CNC shield
Count: 1
Features:
  • Stepper motor driver expansion board
Role in the arm: drives the arm's stepper motors

Name: A4988
Count: 3
Features:
  • Stepper motor driver
Role in the arm: drives the arm's stepper motors

Name: 42 mm (NEMA 17) stepper motor
Count: 3
Features:
  • Gearbox reduction, 15:1 ratio
  • Step angle: 1.8°
Role in the arm: drive motor for each of the arm's three axes

Name: 12 V 18650 battery pack
Count: 1
Features:
  • Nominal voltage: 12 V
  • Discharge current: 6.0 A
  • Charge current: 0.5 A
  • Internal resistance: ≤60 mΩ
  • Maximum charge voltage: 12.6 V
  • Minimum discharge voltage: 9.0 V
Role in the arm: powers the 6-DOF arm servos and the gripper servo

Tools used during assembly, such as screwdrivers, pliers, and an electric drill, are not listed; prepare them according to your own needs.

Lesson 21: MoveIt - the driver for Diego 1# plus

In the previous lesson we completed the MoveIt configuration for Diego 1# plus with the MoveIt Setup Assistant, but to actually make the arm move we still need to write driver code. The driver works as follows:

First, MoveIt sends its planning results to the driver as a ROS action; the driver then calls the servo_write service of ros_arduino_bridge to send control commands for each joint servo to the Arduino UNO board.
Second, the driver also calls the servo_read service of ros_arduino_bridge to read the state of each joint servo and publishes it to MoveIt as joint_state messages, which MoveIt uses for path planning.
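The second direction (publishing joint states back to MoveIt) is not covered by the follow controller below, so here is a minimal sketch of what such a feedback publisher could look like. It assumes a /arduino/servo_read service that mirrors /arduino/servo_write; the service name, its response field value, and the joint list are assumptions, not tested code.

#!/usr/bin/env python
# Minimal sketch of the feedback path: read each joint servo through the assumed
# /arduino/servo_read service of ros_arduino_bridge and republish the values as a
# sensor_msgs/JointState message for MoveIt.
import rospy
from sensor_msgs.msg import JointState
from ros_arduino_msgs.srv import ServoRead

# Subset of the joint names used later in this lesson
JOINT_NAMES = ['arm_base_to_arm_round_joint_stevo0',
               'shoulder_2_to_arm_joint_stevo1',
               'big_arm_round_to_joint_stevo2']

def publish_joint_states():
    rospy.init_node('servo_state_publisher')
    pub = rospy.Publisher('joint_states', JointState, queue_size=5)
    rospy.wait_for_service('/arduino/servo_read')
    servo_read = rospy.ServiceProxy('/arduino/servo_read', ServoRead)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = JOINT_NAMES
        # The response field name 'value' is an assumption
        msg.position = [servo_read(i).value for i in range(len(JOINT_NAMES))]
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    try:
        publish_joint_states()
    except rospy.ROSInterruptException:
        pass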

Next, let's build the driver step by step.

1. Create the controller configuration file diego1_controllers.yaml

In the config subdirectory of the package generated by the MoveIt Setup Assistant, create the Diego 1# plus configuration file diego1_controllers.yaml. This file tells MoveIt which actions to communicate with.

The content of diego1_controllers.yaml is as follows:

controller_list:
  - name: arm_controller
    action_ns: follow_joint_trajectory
    type: FollowJointTrajectory
    default: true
    joints:
      - arm_base_to_arm_round_joint_stevo0
      - shoulder_2_to_arm_joint_stevo1
      - big_arm_round_to_joint_stevo2
      - arm_joint_stevo2_to_arm_joint_stevo3
      - wrist_to_arm_joint_stevo4
      - arm_joint_stevo4_to_arm_joint_stevo5
  - name: gripper_controller
    action_ns: gripper_action
    type: GripperCommand
    default: true
    joints:
      - hand_to_grip_joint_stevo6
      - hand_to_grip_joint_stevo7

Two controllers are defined here, arm_controller for the arm and gripper_controller for the gripper; they tell MoveIt that the arm communicates on the topic arm_controller/follow_joint_trajectory and the gripper on gripper_controller/gripper_action.

2. The FollowController for the arm

We extend ros_arduino_bridge to implement the FollowController: create joints.py and diego1_follow_controller.py in the ros_arduino_python directory.

2.1 joints.py

This file encapsulates the joint servo control code. Its main functions are:

  • __init__: class initialization
  • setCurrentPosition(self): calls the /arduino/servo_write service to send a command that moves the servo to self.position
  • setCurrentPosition(self, position): calls the /arduino/servo_write service to send a command that moves the servo to the position given by MoveIt
  • mapToServoPosition(self, position): maps the position given by MoveIt to the actual position of the arm's servo; this mapping must be calibrated before the arm's position can be controlled accurately
#!/usr/bin/env python

# Copyright (c) 2017-2018 diego. 
# All right reserved.
#

## @file joints.py the joint class support functions for joints.

## @brief Joints hold current values.
import roslib
import rospy
from ros_arduino_msgs.srv import *
from math import pi as PI, degrees, radians

class Joint:

    ## @brief Constructs a Joint instance.
    ##
    ## @param name The joint name.
    ##
    ## @param servoNum The servo id for this joint.
    ##
    ## @param max, min The joint limits given by MoveIt (radians).
    ##
    ## @param servo_max, servo_min The servo control range (degrees).
    ##
    ## @param inverse Whether the servo direction is inverted.
    def __init__(self, name, servoNum, max, min, servo_max, servo_min, inverse):
        self.name = name
        self.servoNum = servoNum
        self.max = max
        self.min = min
        self.servo_max = servo_max
        self.servo_min = servo_min
        self.inverse = inverse

        self.position = 0.0
        self.velocity = 0.0

    ## @brief Send the stored position (self.position) to the servo.
    def setCurrentPosition(self):
        rospy.wait_for_service('/arduino/servo_write')
        try:
            servo_write = rospy.ServiceProxy('/arduino/servo_write', ServoWrite)
            servo_write(self.servoNum, self.position)
        except rospy.ServiceException, e:
            print "Service call failed: %s" % e

    ## @brief Set the current position.
    ##
    ## @param position The position given by MoveIt (radians).
    def setCurrentPosition(self, position):
        rospy.wait_for_service('/arduino/servo_write')
        try:
            servo_write = rospy.ServiceProxy('/arduino/servo_write', ServoWrite)
            self.mapToServoPosition(position)
            servo_write(self.servoNum, radians(self.position))
        except rospy.ServiceException, e:
            print "Service call failed: %s" % e

    ## @brief Map a MoveIt position to the corresponding servo position.
    ##
    ## @param position The position given by MoveIt (radians).
    def mapToServoPosition(self, position):
        per = (position - self.min) / (self.max - self.min)
        if not self.inverse:
            self.position = self.servo_min + per * (self.servo_max - self.servo_min)
        else:
            self.position = self.servo_max - per * (self.servo_max - self.servo_min)

2.2 diego1_follow_controller.py

This is the Diego arm controller class: it receives control commands from MoveIt, converts them into actual control commands for the joint servos, and drives the arm's motion.

#!/usr/bin/env python

# Copyright (c) 2017-2018 williamar. 
# All right reserved.

import rospy, actionlib

from control_msgs.msg import FollowJointTrajectoryAction
from trajectory_msgs.msg import JointTrajectory
from diagnostic_msgs.msg import *
from math import pi as PI, degrees, radians
from joints import Joint

class FollowController:

    def __init__(self, name):
        self.name = name

        rospy.init_node(self.name)

        # rates
        self.rate = 10.0

        # Arm joints
        self.arm_base_to_arm_round_joint_stevo0=Joint('arm_base_to_arm_round_joint_stevo0',0,1.5797,-1.5707,130,0,False)
        self.shoulder_2_to_arm_joint_stevo1=Joint('shoulder_2_to_arm_joint_stevo1',1,1.5707,-0.1899,115,45,False)
        self.big_arm_round_to_joint_stevo2=Joint('big_arm_round_to_joint_stevo2',2,2.5891,1,100,20,False)
        self.arm_joint_stevo2_to_arm_joint_stevo3=Joint('arm_joint_stevo2_to_arm_joint_stevo3',3,1.5707,-1.5707,130,0,False)
        self.wrist_to_arm_joint_stevo4=Joint('wrist_to_arm_joint_stevo4',4,1.5707,-1.5707,130,0,False)
        self.arm_joint_stevo4_to_arm_joint_stevo5=Joint('arm_joint_stevo4_to_arm_joint_stevo5',5,1.5707,-1.5707,130,0,True)

        self.joints=[self.arm_base_to_arm_round_joint_stevo0,
        self.shoulder_2_to_arm_joint_stevo1,
        self.big_arm_round_to_joint_stevo2,
        self.arm_joint_stevo2_to_arm_joint_stevo3,
        self.wrist_to_arm_joint_stevo4,
        self.arm_joint_stevo4_to_arm_joint_stevo5]

        # action server
        self.server = actionlib.SimpleActionServer('arm_controller/follow_joint_trajectory', FollowJointTrajectoryAction, execute_cb=self.actionCb, auto_start=False)
        self.server.start()
        rospy.loginfo("Started FollowController")
        rospy.spin()

    def actionCb(self, goal):
        print("****************actionCb*********************")
        rospy.loginfo(self.name + ": Action goal received.")
        traj = goal.trajectory

        if set(joint.name for joint in self.joints) != set(traj.joint_names):
            for j in self.joints:
                if j.name not in traj.joint_names:
                    msg = "Trajectory joint names do not match action controlled joints." + str(traj.joint_names)
                    rospy.logerr(msg)
                    self.server.set_aborted(text=msg)
                    return
            rospy.logwarn("Extra joints in trajectory")

        if not traj.points:
            msg = "Trajectory empty."
            rospy.logerr(msg)
            self.server.set_aborted(text=msg)
            return

        try:
            indexes = [traj.joint_names.index(joint.name) for joint in self.joints]
        except ValueError as val:
            msg = "Trajectory invalid."
            rospy.logerr(msg)
            self.server.set_aborted(text=msg)
            return

        if self.executeTrajectory(traj):
            self.server.set_succeeded()
        else:
            self.server.set_aborted(text="Execution failed.")

        rospy.loginfo(self.name + ": Done.")

    def executeTrajectory(self, traj):
        rospy.loginfo("Executing trajectory")
        rospy.logdebug(traj)
        # carry out trajectory
        try:
            indexes = [traj.joint_names.index(joint.name) for joint in self.joints]
        except ValueError as val:
            rospy.logerr("Invalid joint in trajectory.")
            return False

        # get starting timestamp, MoveIt uses 0, need to fill in
        start = traj.header.stamp
        if start.secs == 0 and start.nsecs == 0:
            start = rospy.Time.now()

        r = rospy.Rate(self.rate)
        for point in traj.points:
            desired = [ point.positions[k] for k in indexes ]
            for i in indexes:
                #self.joints[i].position=desired[i]
                self.joints[i].setCurrentPosition(desired[i])

            while rospy.Time.now() + rospy.Duration(0.01) < start:
                rospy.sleep(0.01)
        return True


if __name__=='__main__':
    try:
        rospy.loginfo("start followController...")
        FollowController('follow_Controller')
    except rospy.ROSInterruptException:
        rospy.loginfo('Failed to start followController...')
        


This class is the core of the driver; let's go through its logic in detail.

    def __init__(self, name):
        self.name = name
        
        rospy.init_node(self.name)
        

        # rates
        self.rate = 10.0

This initializes the ROS node, sets the node name, and sets the update rate to 10 Hz.

        # Arm joints
        self.arm_base_to_arm_round_joint_stevo0=Joint('arm_base_to_arm_round_joint_stevo0',0,1.5797,-1.5707,130,0,False)
        self.shoulder_2_to_arm_joint_stevo1=Joint('shoulder_2_to_arm_joint_stevo1',1,1.5707,-0.1899,115,45,False)
        self.big_arm_round_to_joint_stevo2=Joint('big_arm_round_to_joint_stevo2',2,2.5891,1,100,20,False)
        self.arm_joint_stevo2_to_arm_joint_stevo3=Joint('arm_joint_stevo2_to_arm_joint_stevo3',3,1.5707,-1.5707,130,0,False)
        self.wrist_to_arm_joint_stevo4=Joint('wrist_to_arm_joint_stevo4',4,1.5707,-1.5707,130,0,False)
        self.arm_joint_stevo4_to_arm_joint_stevo5=Joint('arm_joint_stevo4_to_arm_joint_stevo5',5,1.5707,-1.5707,130,0,True)
        
        
        
        self.joints=[self.arm_base_to_arm_round_joint_stevo0,
        self.shoulder_2_to_arm_joint_stevo1,
        self.big_arm_round_to_joint_stevo2,
        self.arm_joint_stevo2_to_arm_joint_stevo3,
        self.wrist_to_arm_joint_stevo4,
        self.arm_joint_stevo4_to_arm_joint_stevo5]

This initializes the arm joints and puts them into the joints list for easy access later.

        # action server
        self.server = actionlib.SimpleActionServer('arm_controller/follow_joint_trajectory', FollowJointTrajectoryAction, execute_cb=self.actionCb, auto_start=False)
        self.server.start()
        rospy.loginfo("Started FollowController")
        rospy.spin()

This defines and starts the action server.
The communication topic is arm_controller/follow_joint_trajectory, the action type is FollowJointTrajectoryAction, and self.actionCb is the callback invoked when a goal is received.
self.actionCb is essentially the follow controller's handling of the JointTrajectory message, so let's first look at the JointTrajectory msg; understanding it will make the rest of the code easier to follow. Run the following command to display its structure:

$ rosmsg show JointTrajectory
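The printed structure (trajectory_msgs/JointTrajectory) is:

std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
string[] joint_names
trajectory_msgs/JointTrajectoryPoint[] points
  float64[] positions
  float64[] velocities
  float64[] accelerations
  float64[] effort
  duration time_from_start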

You can see that the message consists of three parts:

  • header: the standard ROS message header, which needs no further introduction here
  • joint_names: an array of all joint names
  • points (JointTrajectoryPoint[]): this part is the key for the driver. The array records the waypoints the arm passes through when moving from one pose to another; the pose path produced by MoveIt is described by these points, and the driver must move each joint servo through them. Each point is itself a structure:
    • positions: a float64 array giving the angle each servo should reach at this point, in radians. For a 6-DOF arm each point's positions field contains six values, e.g. [1.57, 0, 2, 0.2, 3, 0.12]; since our servos have a 180° control range, the values fall in 0~π
    • velocities: the velocity of each joint at this point
    • accelerations: the acceleration of each joint at this point
    • effort: this field is not used here
    • time_from_start: how long after the header timestamp this point should be reached

With the structure of JointTrajectory understood, it is easy to see that self.actionCb only does sanity checks on the message: whether the joints in the message match the joints list and whether the message is empty. The real processing happens in executeTrajectory(self, traj); let's look at its logic.

    def executeTrajectory(self, traj):
        rospy.loginfo("Executing trajectory")
        rospy.logdebug(traj)
        # carry out trajectory
        try:
            indexes = [traj.joint_names.index(joint.name) for joint in self.joints]
        except ValueError as val:
            rospy.logerr("Invalid joint in trajectory.")
            return False

This looks up each joint of self.joints in the message's joint_names, in the order of self.joints, and stores the resulting indices in the indexes array.

        # get starting timestamp, MoveIt uses 0, need to fill in
        start = traj.header.stamp
        if start.secs == 0 and start.nsecs == 0:
            start = rospy.Time.now()

This reads the start timestamp from the message header; MoveIt leaves it at zero, in which case the current time is used instead.

r = rospy.Rate(self.rate)

This sets the update rate.

        for point in traj.points:
            desired = [ point.positions[k] for k in indexes ]
            for i in indexes:
                #self.joints[i].position=desired[i]
                self.joints[i].setCurrentPosition(desired[i])

            while rospy.Time.now() + rospy.Duration(0.01) < start:
                rospy.sleep(0.01)

This loops over the trajectory points in the message, reads the servo position values from each point, and calls the Joint class's setCurrentPosition method to send the servo control commands.

3. Launch files

The MoveIt Setup Assistant generates a launch folder containing the main launch files; we only need to make a few simple modifications to start testing.

3.1 Modify diego1_moveit_controller_manager.launch.xml

This file is generated automatically by the MoveIt Setup Assistant, but it is empty. Add the following content to tell MoveIt where to find the controller configuration file:

<launch>
     <!-- Set the param that trajectory_execution_manager needs to find the controller plugin -->
     <arg name="moveit_controller_manager" default="moveit_simple_controller_manager/MoveItSimpleControllerManager" />
     <param name="moveit_controller_manager" value="$(arg moveit_controller_manager)"/>
     <!-- load controller_list -->
     <rosparam file="$(find diego_moveit_config)/config/diego1_controllers.yaml"/>
</launch>

3.2 Create the diego_demo.launch file

Create diego_demo.launch in the MoveIt launch folder and add the following content:

<launch>

     <master auto="start"/>

     <node name="arduino" pkg="ros_arduino_python" type="arduino_node.py" output="screen">
         <rosparam file="$(find ros_arduino_python)/config/my_arduino_params.yaml" command="load" />
     </node>
     <!-- By default, we are not in debug mode -->
     <arg name="debug" default="false" />

     <!-- Load the URDF, SRDF and other .yaml configuration files on the param server -->
     <include file="$(find diego_moveit_config)/launch/planning_context.launch">
         <arg name="load_robot_description" value="true"/>
     </include>

     <node name="arduino_follow_controller" pkg="ros_arduino_python" type="diego1_follow_controller.py" output="screen">
     </node>

     <!-- If needed, broadcast static tf for robot root -->
     <node pkg="tf" type="static_transform_publisher" name="virtual_joint_broadcaster_0" args="0 0 0 0 0 0 world_frame base_link 100" />

     <!-- We do not have a robot connected, so publish fake joint states -->
     <node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher">
        <param name="/use_gui" value="false"/>
        <rosparam param="/source_list">[/move_group/fake_controller_joint_states]</rosparam>
     </node>

     <!-- Run the main MoveIt executable without trajectory execution (we do not have controllers configured by default) -->
     <include file="$(find diego_moveit_config)/launch/move_group.launch">
     </include>
</launch>

This is mainly for testing, so joint_states uses fake data: when starting joint_state_publisher we set the source_list to /move_group/fake_controller_joint_states.

Now we can run the following command to start MoveIt:

roslaunch diego_moveit_config diego_demo.launch

At this point we can control the arm with MoveIt commands from the console, or call the MoveIt API from code; let's write a short piece of test code.

4. MoveIt control test code

Create diego_moveit_test.py in the ros_arduino_python directory with the following code:

#!/usr/bin/env python

import rospy, sys
import moveit_commander
from control_msgs.msg import GripperCommand

class MoveItDemo:
    def __init__(self):
        # Initialize the move_group API
        moveit_commander.roscpp_initialize(sys.argv)

        # Initialize the ROS node
        rospy.init_node('moveit_demo', anonymous=True)
        
 
        # Connect to the arm move group
        arm = moveit_commander.MoveGroupCommander('arm')
                       
        # Get the name of the end-effector link
        end_effector_link = arm.get_end_effector_link()
        
        # Display the name of the end_effector link
        rospy.loginfo("The end effector link is: " + str(end_effector_link))
        
        # Set a small tolerance on joint angles
        arm.set_goal_joint_tolerance(0.001)
        
        # Start the arm target in "resting" pose stored in the SRDF file
        arm.set_named_target('arm_default_pose')
        
        # Plan a trajectory to the goal configuration
        traj = arm.plan()
         
        # Execute the planned trajectory
        arm.execute(traj)
        
        # Pause for a moment
        rospy.sleep(1)         
        
        # Cleanly shut down MoveIt
        moveit_commander.roscpp_shutdown()
        
        # Exit the script
        moveit_commander.os._exit(0)

if __name__ == "__main__":
    try:
        MoveItDemo()
    except rospy.ROSInterruptException:
        pass

In the test code we connect to the arm group and command the arm to move to the arm_default_pose pose. Run the following command to start the test code:

    rosrun ros_arduino_python diego_moveit_test.py

After it starts, the arm immediately moves to the arm_default_pose position.
 

Lesson 20: MoveIt - create the configuration package with the MoveIt Setup Assistant

MoveIt provides a very convenient configuration tool, the MoveIt Setup Assistant. We can load the robot URDF model completed in the previous lesson into it and generate the configuration. The official tutorial is very detailed: http://docs.ros.org/kinetic/api/moveit_tutorials/html/

If MoveIt is not installed, install it with the following command:

sudo apt-get install ros-kinetic-moveit

1. Start the MoveIt Setup Assistant

roslaunch moveit_setup_assistant setup_assistant.launch

2. Load the robot URDF

Click the Create New MoveIt Configuration Package button to open the URDF loading dialog shown below. Click Browse to select the URDF file, then click Load Files to load it.

After loading, the interface looks like this:

The main functions are in the menu on the left:

  • Self-Collisions: compute collision checking based on the URDF description.
  • Virtual Joints: define a virtual joint for the robot, i.e. how it is attached to the world coordinate frame.
  • Planning Groups: define motion planning groups; essentially this puts a set of joints into a group, for which poses can be defined later.
  • Robot Poses: predefine poses for the planning groups, which can be referenced later.
  • End Effectors: the end effector, which for Diego is the gripper.
  • Passive Joints: passive joints, i.e. joints that will not be planned.
  • Author Information: author information for the configuration package.
  • Configuration Files: generate the configuration files.

 

When configuring, work through the menu items from top to bottom.

3. Self-Collisions

In this step you only need to set the Sampling Density and click Regenerate Collision Matrix. The higher the sampling density, the longer the computation takes, but in theory the more accurate the result.

The computation runs as shown below:

When it finishes, the results are listed in the table.

4.Virtual Joints

Click Add Virtual Joint to add a virtual joint.

Here we set the Child Link to base_link and the Parent Frame Name to odom, which attaches base_link to the odom frame, and choose planar as the Joint Type.

Click Save to save the settings.

5.Planning Groups

5.1 Add the arm planning group

Click Add Group, as shown below.

Enter arm as the Group Name and choose kdl as the Kinematic Solver.

Click Add Kin. Chain to add the joints, as shown below.

Select arm_base in the list and click Choose Selected to set the group's Base Link to arm_base.

Select arm_joint_stevo5 in the list and click Choose Selected to set the Tip Link, i.e. the end link of the chain, to arm_joint_stevo5.

Click Save to save the settings.

5.2 Add the gripper planning group

As before, click Add Group.

Set the Group Name to hand, choose kdl as the kinematic solver, and then click Add Links.

Select the hand links and click the > button in the middle to add them to the hand group. The result is shown below; click Save to save.

Finally, we can see that two groups have been added: arm and hand.

6.Robot Poses

Here we define preset poses. A saved pose can be invoked with MoveIt commands or from code, which is especially useful for repeating fixed motions.

Click Add Pose to add a pose.

Choose the arm group defined earlier as the Planning Group; the Pose Name can be anything you like. The sliders on the right adjust each joint, and the model view on the far right updates as you move them. When done, click Save.

We can also define a pose for hand, as shown above.

You can define as many poses as you need.

Once defined, all poses appear in the pose list.

7.End Effectors

Click Add End Effector to add the end effector.

Choose hand as the End Effector Group.

Choose arm_joint_stevo5 as the Parent Link, i.e. the link the hand is attached to.

Choose arm as the Parent Group.

When done, click Save.

8. Passive Joints

Diego 1# plus has no passive joints, so nothing needs to be set here.

9.Author Information

Fill in the author information for this configuration package. This is required; otherwise an error will be reported during generation.

10.Configuration Files

Choose the location for the configuration package, then click Generate Package to generate it.

When the progress bar completes, a diego_moveit_config folder, i.e. the configuration package, is created in the chosen directory.
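As a quick sanity check of the generated package (assuming the package name diego_moveit_config used above), you can launch the demo configuration with fake controllers that the assistant generates:

roslaunch diego_moveit_config demo.launch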

 

Lesson 16: Vision - face detection

Machine vision is a complex subject that requires fairly specialized knowledge of computer graphics. The ROS By Example Volume 1 book provides a good introductory example, so we will start Diego's machine vision tour from there and then gradually add more complex content.

We first implement face detection, feature point acquisition, face tracking, and robot following. The code in the book depends on OpenCV 2.4, but Diego runs ROS Kinetic, which installs OpenCV 3 by default, so I have ported the source code to compile in the OpenCV 3 environment. For details see the diego_vision directory of the Diego1 plus repository on GitHub.

1. About image formats

ROS has its own image format, but OpenCV is more commonly used for image and video processing because it encapsulates a large number of advanced image algorithms and is open source, so the first step is to convert images from the ROS format to the OpenCV format. Unsurprisingly, ROS provides cv_bridge for exactly this conversion, and it supports OpenCV 3. If you installed ros-kinetic-desktop-full, the package is already included; otherwise it must be installed separately, and I recommend installing it from source with git clone, because installing it with apt-get will also pull in OpenCV 2.4 and cause conflicts. For how to use cv_bridge, refer to the official example code.
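As a minimal sketch of the conversion (not the diego_vision code; the topic name /camera/rgb/image_color matches the one used later in this lesson):

#!/usr/bin/env python
# Convert incoming ROS sensor_msgs/Image messages to OpenCV images with cv_bridge.
import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge, CvBridgeError

bridge = CvBridge()

def image_callback(msg):
    try:
        # "bgr8" yields the standard OpenCV BGR byte ordering
        frame = bridge.imgmsg_to_cv2(msg, "bgr8")
    except CvBridgeError as e:
        rospy.logerr(e)
        return
    cv2.imshow("rgb", frame)
    cv2.waitKey(1)

rospy.init_node('cv_bridge_demo')
rospy.Subscriber('/camera/rgb/image_color', Image, image_callback)
rospy.spin()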

2.ros2opencv3.py

The original ros2opencv2.py does not support OpenCV 3, so I ported it: the file is renamed to ros2opencv3.py and the class to ROS2OpenCV3 (source code on GitHub). The main functions are:

  • __init__:ROS2OpenCV3 initialization function, set the relevant parameters, initialize the ROS node, subscribe to RGB and RGBD message topics
  • on_mouse_click:Mouse events, the user can draw the ROI through the mouse in the video window
  • image_callback:The callback function of RGB image
  • depth_callback:The callback function of the depth image
  • convert_image:Converts RGB images from ROS format to opencv format
  • convert_depth_image:Convert the depth image from the ROS format to the opencv format
  • publish_roi:Publish the ROI (Region Of Interest) message, which describes the area we are interested in
  • process_image:RGB image processing function; no processing is done here
  • process_depth_image:Depth image processing function; no processing is done here
  • display_selection:Use the rectangle to display the area selected by the user on the video. Use a small dot to display the area where the user clicks on the image
  • is_rect_nonzero:Determine whether the rectangular area is valid
  • cvBox2D_to_cvRect:Convert the cvBox2D data structure to cvRect
  • cvRect_to_cvBox2D:Convert the cvRect data structure to cvBox2D

3.Face detection

3.1 How face detection works

The general face detection flow is: load the Haar feature classifiers, convert the camera image to grayscale, and run the detector over it. Each step is described below.

The classifier here is a Haar feature classifier: a face Haar classifier is an XML file that describes the Haar feature values of a face. OpenCV matches the image data against the classifier to recognize faces. We use three classifiers:

  • haarcascade_frontalface_alt2.xml
  • haarcascade_frontalface_alt.xml
  • haarcascade_profileface.xml

After reading a frame from the camera video, we first convert the image to grayscale and equalize its histogram, and then call OpenCV's detectMultiScale function to detect faces.
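Stripped of ROS, the same pipeline looks roughly like the following standalone sketch (the classifier and image paths are placeholders):

# Haar face detection on a single image with OpenCV 3 - a standalone sketch of the
# steps above; the XML and image paths are placeholders.
import cv2

cascade = cv2.CascadeClassifier('haarcascade_frontalface_alt2.xml')
image = cv2.imread('test.jpg')

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # 1. convert to grayscale
gray = cv2.equalizeHist(gray)                    # 2. equalize to reduce lighting effects
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=3,
                                 minSize=(30, 30))   # 3. run the detector

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('result.jpg', image)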

3.2 Code

The code has been ported to OpenCV 3; see GitHub for the full source.

3.2.1 face_detector.py

#!/usr/bin/env python

import rospy
import cv2
from ros2opencv3 import ROS2OpenCV3

class FaceDetector(ROS2OpenCV3):
    def __init__(self, node_name):
        super(FaceDetector, self).__init__(node_name)
                  
        # Get the paths to the cascade XML files for the Haar detectors.
        # These are set in the launch file.
        cascade_1 = rospy.get_param("~cascade_1", "")
        cascade_2 = rospy.get_param("~cascade_2", "")
        cascade_3 = rospy.get_param("~cascade_3", "")
        
        # Initialize the Haar detectors using the cascade files
        self.cascade_1 = cv2.CascadeClassifier(cascade_1)
        self.cascade_2 = cv2.CascadeClassifier(cascade_2)
        self.cascade_3 = cv2.CascadeClassifier(cascade_3)
        
        # Set cascade parameters that tend to work well for faces.
        # Can be overridden in launch file
        self.haar_scaleFactor = rospy.get_param("~haar_scaleFactor", 1.3)
        self.haar_minNeighbors = rospy.get_param("~haar_minNeighbors", 3)
        self.haar_minSize = rospy.get_param("~haar_minSize", 30)
        self.haar_maxSize = rospy.get_param("~haar_maxSize", 150)
        
        # Store all parameters together for passing to the detector
        self.haar_params = dict(scaleFactor = self.haar_scaleFactor,
                                minNeighbors = self.haar_minNeighbors,
                                flags = cv2.CASCADE_DO_CANNY_PRUNING,
                                minSize = (self.haar_minSize, self.haar_minSize),
                                maxSize = (self.haar_maxSize, self.haar_maxSize)
                                )
                        
        # Should we show text on the display?
        self.show_text = rospy.get_param("~show_text", True)
        
        # Initialize the detection box
        self.detect_box = None
        
        # Track the number of hits and misses
        self.hits = 0
        self.misses = 0
        self.hit_rate = 0

    def process_image(self, cv_image):
        # Create a greyscale version of the image
        grey = cv2.cvtColor(cv_image, cv2.COLOR_BGR2GRAY)
        
        # Equalize the histogram to reduce lighting effects
        grey = cv2.equalizeHist(grey)
            
        # Attempt to detect a face
        self.detect_box = self.detect_face(grey)
        
        # Did we find one?
        if self.detect_box is not None:
            self.hits += 1
        else:
            self.misses += 1
        
        # Keep tabs on the hit rate so far
        self.hit_rate = float(self.hits) / (self.hits + self.misses)
                    
        return cv_image


    def detect_face(self, input_image):
        # First check one of the frontal templates
        if self.cascade_1:
            faces = self.cascade_1.detectMultiScale(input_image, **self.haar_params)
                                         
        # If that fails, check the profile template
        if len(faces) == 0 and self.cascade_3:
            faces = self.cascade_3.detectMultiScale(input_image, **self.haar_params)

        # If that also fails, check the other frontal template
        if len(faces) == 0 and self.cascade_2:
            faces = self.cascade_2.detectMultiScale(input_image, **self.haar_params)

        # The faces variable holds a list of face boxes.
        # If one or more faces are detected, return the first one.  
        if len(faces) > 0:
            face_box = faces[0]
        else:
            # If no faces were detected, print the "LOST FACE" message on the screen
            if self.show_text:
                font_face = cv2.FONT_HERSHEY_SIMPLEX
                font_scale = 0.5
                cv2.putText(self.marker_image, "LOST FACE!", 
                            (int(self.frame_size[0] * 0.65), int(self.frame_size[1] * 0.9)), 
                            font_face, font_scale, (255, 50, 50))
            face_box = None

        # Display the hit rate so far
        if self.show_text:
            font_face = cv2.FONT_HERSHEY_SIMPLEX
            font_scale = 0.5
            cv2.putText(self.marker_image, "Hit Rate: " + 
                        str(trunc(self.hit_rate, 2)), 
                        (20, int(self.frame_size[1] * 0.9)), 
                        font_face, font_scale, (255, 255, 0))
        
        return face_box
        
def trunc(f, n):
    '''Truncates/pads a float f to n decimal places without rounding'''
    slen = len('%.*f' % (n, f))
    return float(str(f)[:slen])
    
if __name__ == '__main__':
    try:
        node_name = "face_detector"
        FaceDetector(node_name)
        rospy.spin()
    except KeyboardInterrupt:
        print "Shutting down face detector node."
        cv2.destroyAllWindows()

3.2.2 Code explanation

The FaceDetector class inherits from ROS2OpenCV3 and overrides process_image to convert the image to grayscale and perform face detection with the detect_face function.

Load the classifiers: cascade_1, cascade_2, and cascade_3 define the locations of the three classifier files.

        
        # Get the paths to the cascade XML files for the Haar detectors.
        # These are set in the launch file.
        cascade_1 = rospy.get_param("~cascade_1", "")
        cascade_2 = rospy.get_param("~cascade_2", "")
        cascade_3 = rospy.get_param("~cascade_3", "")
        
        # Initialize the Haar detectors using the cascade files
        self.cascade_1 = cv2.CascadeClassifier(cascade_1)
        self.cascade_2 = cv2.CascadeClassifier(cascade_2)
        self.cascade_3 = cv2.CascadeClassifier(cascade_3)

Grayscale conversion in the process_image function:

        # Create a greyscale version of the image
        grey = cv2.cvtColor(cv_image, cv2.COLOR_BGR2GRAY)
        
        # Equalize the histogram to reduce lighting effects
        grey = cv2.equalizeHist(grey)

Face detection in the detect_face function

        if self.cascade_1:
            faces = self.cascade_1.detectMultiScale(input_image, **self.haar_params)
                                         
        # If that fails, check the profile template
        if len(faces) == 0 and self.cascade_3:
            faces = self.cascade_3.detectMultiScale(input_image, **self.haar_params)

        # If that also fails, check the other frontal template
        if len(faces) == 0 and self.cascade_2:
            faces = self.cascade_2.detectMultiScale(input_image, **self.haar_params)

Display information in the detect_face function

        # The faces variable holds a list of face boxes.
        # If one or more faces are detected, return the first one.  
        if len(faces) > 0:
            face_box = faces[0]
        else:
            # If no faces were detected, print the "LOST FACE" message on the screen
            if self.show_text:
                font_face = cv2.FONT_HERSHEY_SIMPLEX
                font_scale = 0.5
                cv2.putText(self.marker_image, "LOST FACE!", 
                            (int(self.frame_size[0] * 0.65), int(self.frame_size[1] * 0.9)), 
                            font_face, font_scale, (255, 50, 50))
            face_box = None

        # Display the hit rate so far
        if self.show_text:
            font_face = cv2.FONT_HERSHEY_SIMPLEX
            font_scale = 0.5
            cv2.putText(self.marker_image, "Hit Rate: " + 
                        str(trunc(self.hit_rate, 2)), 
                        (20, int(self.frame_size[1] * 0.9)), 
                        font_face, font_scale, (255, 255, 0))
        
        return face_box

The image_callback function of the parent class ROS2OpenCV3 draws the detected face position as a rectangular wireframe.
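Conceptually the drawing step amounts to something like this (a sketch only; the actual ROS2OpenCV3 implementation may differ in detail):

# detect_box holds the (x, y, w, h) box returned by detect_face
x, y, w, h = self.detect_box
cv2.rectangle(self.marker_image, (x, y), (x + w, y + h), (0, 255, 0), 2)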

4. Start the face detection node

First make sure the Xtion has been started; you can start it with the following command:

roslaunch diego_vision openni_node.launch

See face_detector.launch on GitHub:

<launch>
   <node pkg="diego_vision" name="face_detector" type="face_detector.py" output="screen">

   <remap from="input_rgb_image" to="/camera/rgb/image_color" />

   <rosparam>
       haar_scaleFactor: 1.3
       haar_minNeighbors: 3
       haar_minSize: 30
       haar_maxSize: 150
   </rosparam>

   <param name="cascade_1" value="$(find diego_vision)/data/haar_detectors/haarcascade_frontalface_alt2.xml" />
   <param name="cascade_2" value="$(find diego_vision)/data/haar_detectors/haarcascade_frontalface_alt.xml" />
   <param name="cascade_3" value="$(find diego_vision)/data/haar_detectors/haarcascade_profileface.xml" />

</node>
</launch>

Start the face detector node:

roslaunch diego_vision face_detector.launch

After the node starts you will see a video window; faces captured by the camera are detected and marked with a rectangle.

Lesson 15: Diego 1# Plus Introduction

Starting with this lesson the content involves image processing, and the Raspberry Pi's processing power is no longer sufficient: after repeated testing, even relatively simple video processing keeps the Raspberry Pi 3B's CPU usage above 80%, which cannot meet the requirements of the upcoming features. The Diego1 hardware was therefore upgraded; I call it Diego1 plus here, and the corresponding code repository on GitHub is diego1-plus.

1.Hardware adjustment

The following table lists the Diego1 plus hardware changes:

Name: Mini PC
Count: 1
Replaces in Diego 1#: Raspberry Pi 3
Status: Replace
Features:
  • Intel i5-5200U
  • 4 GB DDR3 1600
  • 802.11 b/g/n WLAN
  • Bluetooth 4.1
  • 1 x 10/100 Ethernet port
  • 1 x HDMI video/audio connector
  • 4 x USB 3.0 ports
  • 2 x USB 2.0 ports
  • Dimensions: 181 x 128 x 55 mm
Role in Diego 1# plus:
  • Main control board
  • Ubuntu 16.04
  • ROS Kinetic
  • Other ROS packages

Name: Buck converter
Count: 0
Replaces in Diego 1#: Buck converter
Status: Removed
Features:
  • High-power 5 A output, low ripple
  • On-board USB output
  • Adjustable constant-voltage and constant-current modes
  • Over-current protection
  • On-board voltage, current, and power display
Role in Diego 1# plus:
  • Provided 5 V USB power for the Raspberry Pi (no longer needed)

Name: 12 V 18650 battery pack
Count: 2
Replaces in Diego 1#: 12 V 18650 battery pack
Status: Add 1 unit
Features:
  • Nominal voltage: 12 V
  • Discharge current: 6.0 A
  • Charge current: 0.5 A
  • Internal resistance: ≤60 mΩ
  • Maximum charge voltage: 12.6 V
  • Minimum discharge voltage: 9.0 V
Role in Diego 1# plus:
  • One pack powers the 6-DOF arm servos
  • One pack powers the mini PC

Name: Xtion Pro Live
Count: 1
Replaces in Diego 1#: (none)
Status: New
Features:
  • Maximum power consumption: under 2.5 W
  • Working distance: 0.8 m to 3.5 m
  • Field of view: 58° H, 45° V, 70° D (horizontal, vertical, diagonal)
  • Sensor: depth
  • Depth image size: VGA (640x480) at 30 fps; QVGA (320x240) at 60 fps
  • Interface: USB 2.0
Role in Diego 1# plus:
  • Serves as the robot's vision sensor, providing depth image and point cloud data

Notes on the adjustments:

  • Mini PC: uses the low-power Intel i5-5200U processor with 4 GB of memory, installed directly on the Diego1 plus base.
  • Power: since the mini PC needs a 12 V supply, a 12 V, 6 A output 18650 battery pack was added to power it; the original 9 V battery is now dedicated to the chassis motors, and together with the battery powering the arm there are three 18650 battery packs in total.
  • Buck module: the 5 V 3 A buck module that originally powered the Raspberry Pi is no longer needed in Diego1 plus.
  • Xtion Pro: Diego1 plus adds an Xtion Pro depth camera. Its visual range is 0.8 m to 3.5 m, so install it so that Diego1 plus operates within that visible range.

2.software adjustment

Software adjustment has two main aspects:

  • Operating system: the Ubuntu MATE 16.04 used on the original Raspberry Pi 3 is replaced with x64 Ubuntu 16.04
  • ROS: the ros-kinetic-desktop used on the Raspberry Pi 3 in Diego1 is replaced with ros-kinetic-desktop-full, the full version of ROS

Even with the full version of ROS installed, the following packages still need to be added for Diego1 plus to work properly:

  • gmapping
  • amcl
  • move_base
  • hector slam
  • openni
  • tf2
  • sphinx-related packages

Run the following commands to install them:

sudo apt-get install ros-kinetic-gmapping
sudo apt-get install ros-kinetic-navigation
sudo apt-get install ros-kinetic-hector-slam
sudo apt-get install ros-kinetic-tf2-sensor-msgs
sudo apt-get install ros-kinetic-openni-camera
sudo apt-get install ros-kinetic-openni-launch
sudo apt-get install libopenni-sensor-primesense0
sudo apt-get install ros-kinetic-audio-common
sudo apt-get install libasound2
sudo apt-get install gstreamer0.10-*
sudo apt-get install python-gst0.10

The sphinx-related deb packages must also be replaced with their amd64 versions; for the installation procedure refer to Lesson 12, but download the amd64 version of each package.

Lesson 14: Keyboard control

In this lesson we show how to use the computer keyboard to control the robot chassis and the motion of the robot arm. ROS provides the teleop_twist_keyboard package to control the chassis, but it lacks arm control, so we modify it to add the part that controls the arm.

See http://wiki.ros.org/teleop_twist_keyboard for more details about teleop_twist_keyboard.

1.Keyboard control diagram:

2.Control principle

2.1 Chassis motion control

As described in previous lessons, the robot chassis is controlled by Twist messages, so we only need to convert the corresponding keyboard commands into Twist messages to control the chassis motion.

2.2 Motion control of the arm

arduino_node.py provides a servo control service:

# A service to position a PWM servo
rospy.Service('~servo_write', ServoWrite, self.ServoWriteHandler)

Controlling the arm really means controlling the servo of each joint, so we only need to send servo control commands by calling this service of arduino_node.py.
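For a quick manual test you can also call the service directly from the command line (servo id 0 and an angle of 1.57 radians are just example values here, and assume the arduino node is running):

rosservice call /arduino/servo_write 0 1.57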

3. Code

The code below is saved as diego_teleop_twist_keyboard.py:

  • Chassis motion control: retains the original teleop_twist_keyboard control logic
  • Arm control: each key press increases or decreases the corresponding servo angle by 5°
#!/usr/bin/env python
import roslib
import rospy

from geometry_msgs.msg import Twist
from ros_arduino_msgs.srv import *
from math import pi as PI, degrees, radians
import sys, select, termios, tty

msg = """
Reading from the keyboard  and Publishing to Twist, Servo or sensor!
---------------------------
Moving around:
   u    i    o
   j    k    l

, : up (+z)
. : down (-z)

----------------------------
Left arm servo control:
+   1   2   3   4   5   6
-   q   w   e   r   t   y  
----------------------------
Right arm servo control:
+   a   s   d   f   g   h
-   z   x   c   v   b   n 

p : init the servo

CTRL-C to quit
"""

moveBindings = {
 'i':(1,0,0,0),
 'o':(1,0,0,1),
 'j':(-1,0,0,-1),
 'l':(-1,0,0,1),
 'u':(1,0,0,-1),
 'k':(-1,0,0,0),
 ',':(0,0,1,0),
 '.':(0,0,-1,0),
 }

armServos={
 '1':(0,1),
 '2':(1,1),
 '3':(2,1),
 '4':(3,1),
 '5':(4,1),
 '6':(5,1),
 'a':(6,1),
 'q':(0,0),
 'w':(1,0),
 'e':(2,0),
 'r':(3,0),
 't':(4,0),
 'y':(5,0),
 'z':(6,0),
 }

armServoValues=[70,100,60,80,70,70,35]

def getradians(angle):
    return PI*angle/180


def getKey():
    tty.setraw(sys.stdin.fileno())
    select.select([sys.stdin], [], [], 0)
    key = sys.stdin.read(1)
    termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)
    return key


def servoWrite(servoNum, value):
    rospy.wait_for_service('/arduino/servo_write')
    try:
        servo_write = rospy.ServiceProxy('/arduino/servo_write', ServoWrite)
        servo_write(servoNum, value)
        print servoNum
        print value
    except rospy.ServiceException, e:
        print "Service call failed: %s" % e


def vels(speed,turn):
    return "currently:\tspeed %s\tturn %s " % (speed,turn)

if __name__=="__main__":
    settings = termios.tcgetattr(sys.stdin)

    pub = rospy.Publisher('cmd_vel', Twist, queue_size = 1)
    rospy.init_node('teleop_twist_keyboard')

    speed = rospy.get_param("~speed", 0.25)
    turn = rospy.get_param("~turn", 1.0)
    x = 0
    y = 0
    z = 0
    th = 0
    status = 0  

    try:
        print msg
        print vels(speed,turn)
        servoWrite(0, radians(armServoValues[0]))
        servoWrite(1, radians(armServoValues[1]))
        servoWrite(2, radians(armServoValues[2]))
        servoWrite(3, radians(armServoValues[3]))
        servoWrite(4, radians(armServoValues[4]))
        servoWrite(5, radians(armServoValues[5]))
        servoWrite(6, radians(armServoValues[6]))
        while(1):
            key = getKey()
            print key
            if key in moveBindings.keys():
                x = moveBindings[key][0]
                y = moveBindings[key][1]
                z = moveBindings[key][2]
                th = moveBindings[key][3]
                twist = Twist()
                twist.linear.x = x*speed; twist.linear.y = y*speed; twist.linear.z = z*speed;
                twist.angular.x = 0; twist.angular.y = 0; twist.angular.z = th*turn
                pub.publish(twist)
            elif key in armServos.keys():  
                if(armServos[key][1]==0):
                    armServoValues[armServos[key][0]]=armServoValues[armServos[key][0]]-5
                    if armServoValues[armServos[key][0]]<=0:
                       armServoValues[armServos[key][0]]=0
                else:
                    armServoValues[armServos[key][0]]=armServoValues[armServos[key][0]]+5
                    if armServoValues[armServos[key][0]]>=180:
                       armServoValues[armServos[key][0]]=180
                print armServoValues[armServos[key][0]]
                servoWrite(armServos[key][0], radians(armServoValues[armServos[key][0]]))
            else:
                x = 0
                y = 0
                z = 0
                th = 0
                if (key == '\x03'):
                    break



    except BaseException,e:
        print e

    finally:
        twist = Twist()
        twist.linear.x = 0; twist.linear.y = 0; twist.linear.z = 0
        twist.angular.x = 0; twist.angular.y = 0; twist.angular.z = 0
        pub.publish(twist)

        termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)

4.Start keyboard control

rosrun teleop_twist_keyboard diego_teleop_twist_keyboard.py

Lesson 13: Vision system-Xtion+Openni

Machine vision generally uses either a depth camera or a binocular (stereo) camera; the more mature solutions use a depth camera. Currently popular depth cameras include:

  • Microsoft Kinect
  • Intel RealSense
  • ASUS Xtion pro live

For Diego 1# we chose the ASUS Xtion as the vision hardware, mainly because the Xtion uses a USB 2.0 interface while the other two products require USB 3.0, and the Raspberry Pi only supports USB 2.0; in addition, the Xtion is relatively small and needs no extra power supply.

For software we use OpenNI, an open-source 3D machine vision library that provides a convenient and powerful API and, importantly, supports the Xtion Pro, so on Linux the Xtion depth camera can be driven through OpenNI.

1. Installation

1.1 Install OpenNI

sudo apt-get install ros-kinetic-openni-camera
sudo apt-get install ros-kinetic-openni-launch

1.2. Install the Xtion driver

sudo apt-get install libopenni-sensor-primesense0

1.3. Start the OpenNI node

roslaunch openni_launch openni.launch

After the successful start, the terminal should display the following information

2.Converts Openni’s point cloud data to laser data

Openni available 3D point cloud data, and we can also transform 3D point cloud data into 2D laser data, can replace the laser radar。we will use the package pointcloud_to_laserscan(https://github.com/ros-perception/pointcloud_to_laserscan)to transform the data.

2.1.Install pointcloud_to_laserscan

clone the package to ~/catkin_ws/src

cd ~/catkin_ws/src
git clone https://github.com/ros-perception/pointcloud_to_laserscan

Run the following command to install ros-kinetic-tf2-sensor-msgs:

sudo apt-get install ros-kinetic-tf2-sensor-msgs

compile

cd ~/catkin_ws/

catkin_make

Modify sample_node.launch as follows:

<?xml version="1.0"?>

<launch>

<arg name="camera" default="camera" />

<!-- start sensor-->
<include file="$(find openni_launch)/launch/openni.launch">
<arg name="camera" default="$(arg camera)"/>
</include>

<!-- run pointcloud_to_laserscan node -->
<node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">

<remap from="cloud_in" to="$(arg camera)/depth_registered/points_processed"/>
<remap from="scan" to="$(arg camera)/scan"/>
<rosparam>
target_frame: camera_link # Leave disabled to output scan in pointcloud frame
transform_tolerance: 0.01
min_height: 0.0
max_height: 1.0

angle_min: -1.5708 # -M_PI/2
angle_max: 1.5708 # M_PI/2
angle_increment: 0.087 # M_PI/360.0
scan_time: 0.3333
range_min: 0.45
range_max: 4.0
use_inf: true

# Concurrency level, affects number of pointclouds queued for processing and number of threads used
# 0 : Detect number of cores
# 1 : Single threaded
# 2->inf : Parallelism level
concurrency_level: 1
</rosparam>

</node>

</launch>

2.2. Start the sample node

roslaunch pointcloud_to_laserscan sample_node.launch

After a successful start, the following information appears:

Execute the following command to view the laser data

rostopic echo /camera/scan
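Each message on /camera/scan is a sensor_msgs/LaserScan with the following fields, for reference when reading the echoed data:

std_msgs/Header header
float32 angle_min
float32 angle_max
float32 angle_increment
float32 time_increment
float32 scan_time
float32 range_min
float32 range_max
float32[] ranges
float32[] intensities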


You can also use rviz to view laser data

rosrun rviz rviz


The depth image:

That covers the depth camera, its installation, and a simple conversion of its data to laser scans; later lessons will gradually build other robot applications on top of this vision system.

Lesson 12: Voice system-sphinx

The robot voice system has two parts, speech recognition and speech synthesis. For speech recognition ROS uses the sphinx package, and for speech synthesis sound_play.

1.Speech Recognition

ROS uses sphinx for voice recognition, but it is not included in the ROS Kinetic release, so it has to be installed manually.

The installation process is as follows:

1.1 Installation

1.1.1 First install the following dependency packages:

sudo apt-get install ros-kinetic-audio-common
sudo apt-get install libasound2
sudo apt-get install gstreamer0.10-*
sudo apt-get install python-gst0.10

1.1.2. Install libsphinxbase1

https://packages.debian.org/jessie/libsphinxbase1
Since Diego uses the Raspberry Pi platform, download the armhf version.
After the download completes, run:

sudo dpkg -i libsphinxbase1_0.8-6_armhf.deb

1.1.3. Install libpocketsphinx1

https://packages.debian.org/jessie/libpocketsphinx1
Likewise download the armhf version; after the download completes, run:

sudo dpkg -i libpocketsphinx1_0.8-5_armhf.deb

1.1.4. Install gstreamer0.10-pocketsphinx

https://packages.debian.org/jessie/gstreamer0.10-pocketsphinx
Likewise download the armhf version; after the download completes, run:

sudo dpkg -i gstreamer0.10-pocketsphinx_0.8-5_armhf.deb

1.1.5. Install pocketsphinx

Go to the catkin workspace and clone the repository:

cd ~/catkin_ws/src
git clone https://github.com/mikeferguson/pocketsphinx

1.1.6. Download the English voice pack pocketsphinx-hmm-en-tidigits (0.8-5)

https://packages.debian.org/jessie/pocketsphinx-hmm-en-tidigits

Create a model directory under the pocketsphinx package to store the voice model files:

cd ~/catkin_ws/src/pocketsphinx
mkdir model

Unpack the downloaded voice package and copy all the files under its model directory to ~/catkin_ws/src/pocketsphinx/model.

1.2. Create the launch script

Create a new launch folder in the ~/catkin_ws/src/pocketsphinx directory and create the diego_voice_test.launch file:

cd ~/catkin_ws/src/pocketsphinx
mkdir launch
vi diego_voice_test.launch

diego_voice_test.launch

<launch>

  <node name="recognizer" pkg="pocketsphinx" type="recognizer.py" output="screen">
    <param name="lm" value="$(find pocketsphinx)/model/lm/en/tidigits.DMP"/>
    <param name="dict" value="$(find pocketsphinx)/model/lm/en/tidigits.dic"/>
    <param name="hmm" value="$(find pocketsphinx)/model/hmm/en/tidigits"/>
  </node>

</launch>

1.3.Modify recognizer.py

In the __init__(self) function, add reading of the hmm parameter:

def __init__(self):
        # Start node
        rospy.init_node("recognizer")

        self._device_name_param = "~mic_name"  # Find the name of your microphone by typing pacmd list-sources in the terminal
        self._lm_param = "~lm"
        self._dic_param = "~dict"
        self._hmm_param = "~hmm" 

        # Configure mics with gstreamer launch config
        if rospy.has_param(self._device_name_param):
            self.device_name = rospy.get_param(self._device_name_param)
            self.device_index = self.pulse_index_from_name(self.device_name)
            self.launch_config = "pulsesrc device=" + str(self.device_index)
            rospy.loginfo("Using: pulsesrc device=%s name=%s", self.device_index, self.device_name)
        elif rospy.has_param('~source'):
            # common sources: 'alsasrc'
            self.launch_config = rospy.get_param('~source')
        else:
            self.launch_config = 'gconfaudiosrc'

        rospy.loginfo("Launch config: %s", self.launch_config)

        self.launch_config += " ! audioconvert ! audioresample " \
                            + '! vader name=vad auto-threshold=true ' \
                            + '! pocketsphinx name=asr ! fakesink'

        # Configure ROS settings
        self.started = False
        rospy.on_shutdown(self.shutdown)
        self.pub = rospy.Publisher('~output', String)
        rospy.Service("~start", Empty, self.start)
        rospy.Service("~stop", Empty, self.stop)

        if rospy.has_param(self._lm_param) and rospy.has_param(self._dic_param):
            self.start_recognizer()
        else:
            rospy.logwarn("lm and dic parameters need to be set to start recognizer.")

In the start_recognizer(self) function, add the code for the hmm parameter, as follows:

    def start_recognizer(self):
        rospy.loginfo("Starting recognizer... ")

        self.pipeline = gst.parse_launch(self.launch_config)
        self.asr = self.pipeline.get_by_name('asr')
        self.asr.connect('partial_result', self.asr_partial_result)
        self.asr.connect('result', self.asr_result)
        self.asr.set_property('configured', True)
        self.asr.set_property('dsratio', 1)

        # Configure language model
        if rospy.has_param(self._lm_param):
            lm = rospy.get_param(self._lm_param)
        else:
            rospy.logerr('Recognizer not started. Please specify a language model file.')
            return

        if rospy.has_param(self._dic_param):
            dic = rospy.get_param(self._dic_param)
        else:
            rospy.logerr('Recognizer not started. Please specify a dictionary.')
            return
   
        if rospy.has_param(self._hmm_param):
            hmm = rospy.get_param(self._hmm_param)
        else:
            rospy.logerr('Recognizer not started. Please specify a hmm.')
            return

        self.asr.set_property('lm', lm)
        self.asr.set_property('dict', dic)
        self.asr.set_property('hmm', hmm)       

        self.bus = self.pipeline.get_bus()
        self.bus.add_signal_watch()
        self.bus_id = self.bus.connect('message::application', self.application_message)
        self.pipeline.set_state(gst.STATE_PLAYING)
        self.started = True

1.4. Start sphinx

roslaunch pocketsphinx diego_voice_test.launch

Now you can talk to your robot; note that only words in the voice model dictionary will be recognized.
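The recognizer node publishes the recognized text on its ~output topic, so with the launch file above you can watch the results in another terminal:

rostopic echo /recognizer/output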

Sphinx recognizes well in a specific acoustic environment, but once the environment changes and the noise differs, the recognition rate drops significantly; this is a common problem that today's speech recognition technology faces.

2. Speech synthesis

ROS already integrates a complete speech synthesis package, sound_play, which only supports English speech synthesis. You can test it with the following commands:

rosrun sound_play soundplay_node.py
rosrun sound_play say.py "hi, i am diego."

With this, the voice system is in place; later chapters will use it to build other applications.

Lesson 11: Navigation - localization & planning

With a map already available, the robot needs to localize itself on that map, which is done with the ROS amcl package; path planning toward a published goal location is handled by move_base, which plans a path around obstacles to reach the destination.

1. Localization

1.1 Write the amcl launch script

<launch>
<master auto="start"/>

<include file="$(find flashgo)/launch/lidar.launch" />

<node name="arduino" pkg="ros_arduino_python" type="arduino_node.py" output="screen">
<rosparam file="$(find ros_arduino_python)/config/my_arduino_params.yaml" command="load" />
</node>

<node pkg="tf" type="static_transform_publisher" name="base_frame_2_laser_link" args="0.0 0.0 0.2 3.14 3.14 0 /base_link /laser 40"/>

<!-- Map server -->
<node name="map_server" pkg="map_server" type="map_server" args="$(find diego_nav)/maps/f4_gmapping.yaml" />

<!-- amcl node -->
<node pkg="amcl" type="amcl" name="amcl" output="screen">
<remap from="scan" to="scan"/>
<!-- Publish scans from best pose at a max of 10 Hz -->
<param name="initial_pose_x" value="0.0"/>
<param name="initial_pose_y" value="0.0"/>
<param name="initial_pose_a" value="0.0"/>
<param name="use_map_topic" value="true"/>
<param name="odom_model_type" value="diff"/>
<param name="odom_alpha5" value="0.1"/>
<param name="transform_tolerance" value="0.5" />
<param name="gui_publish_rate" value="10.0"/>
<param name="laser_max_beams" value="300"/>
<param name="min_particles" value="500"/>
<param name="max_particles" value="5000"/>
<param name="kld_err" value="0.1"/>
<param name="kld_z" value="0.99"/>
<param name="odom_alpha1" value="0.1"/>
<param name="odom_alpha2" value="0.1"/>
<!-- translation std dev, m -->
<param name="odom_alpha3" value="0.1"/>
<param name="odom_alpha4" value="0.1"/>
<param name="laser_z_hit" value="0.9"/>
<param name="laser_z_short" value="0.05"/>
<param name="laser_z_max" value="0.05"/>
<param name="laser_z_rand" value="0.5"/>
<param name="laser_sigma_hit" value="0.2"/>
<param name="laser_lambda_short" value="0.1"/>
<param name="laser_lambda_short" value="0.1"/>
<param name="laser_model_type" value="likelihood_field"/>
<!-- <param name="laser_model_type" value="beam"/> -->
<param name="laser_min_range" value="1"/>
<param name="laser_max_range" value="8"/>
<param name="laser_likelihood_max_dist" value="2.0"/>
<param name="update_min_d" value="0.2"/>
<param name="update_min_a" value="0.5"/>
<param name="resample_interval" value="1"/>
<param name="transform_tolerance" value="0.1"/>
<param name="recovery_alpha_slow" value="0.0"/>
<param name="recovery_alpha_fast" value="0.0"/>
</node>
</launch>

One point to emphasize: the robot must actually be placed at the configured starting position. If the robot is put down at an arbitrary spot, amcl will not localize it; the initial position must be specified, and the map origin is the starting point used when the map was built with gmapping/hector. The launch file therefore sets the initial pose; we place the robot at that starting position, so the values are set to 0.0.

<param name="initial_pose_x" value="0.0"/>
<param name="initial_pose_y" value="0.0"/>

1.2 Start the amcl node

roslaunch diego_nav diego_run_gmapping_amcl_flashgo.launch

Open a new terminal and start the keyboard control node

rosrun teleop_twist_keyboard teleop_twist_keyboard.py

Now we can drive the robot around with the keyboard and open rviz to watch the localization:

rosrun rviz rviz

In this picture we can see a large number of red arrows, the particle distribution generated by the Monte Carlo localization algorithm; the arrows point in the robot's direction of motion.

In this picture the particles are more tightly clustered than in the previous one; in practice, the tighter the clustering, the better the localization.

2. Path planning

Path planning is about moving the robot from one location to another while avoiding obstacles along the way; it is navigation in the most intuitive sense. In ROS, path planning is implemented by the move_base package.
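As a minimal illustration of how a goal is sent to move_base through its actionlib interface (the coordinates below are placeholders in the map frame):

#!/usr/bin/env python
# Send a single navigation goal to move_base - a minimal sketch with placeholder coordinates.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('simple_nav_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0      # placeholder target, in meters
goal.target_pose.pose.position.y = 0.5
goal.target_pose.pose.orientation.w = 1.0   # face along the map x-axis

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("Goal finished with state: %d", client.get_state())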

2.1 Write the move_base launch script

<launch>

<!-- move_base node -->
<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
<rosparam file="$(find diego_nav)/config/costmap_common_params.yaml" command="load" ns="global_costmap" />
<rosparam file="$(find diego_nav)/config/costmap_common_params.yaml" command="load" ns="local_costmap" />
<rosparam file="$(find diego_nav)/config/local_costmap_params.yaml" command="load" />
<rosparam file="$(find diego_nav)/config/global_costmap_params.yaml" command="load" />
<rosparam file="$(find diego_nav)/config/base_local_planner_params.yaml" command="load" />
</node>

</launch>

The contents of this launch file can also be combined with the amcl launch script in a single file.
The configuration parameters of the move_base node are distributed across four configuration files:
  • costmap_common_params.yaml
  • global_costmap_params.yaml
  • local_costmap_params.yaml
  • base_local_planner_params.yaml

2.2 costmap_common_params.yaml

Common costmap configuration file:

obstacle_range: 2.5  #Maximum obstacle detection range
raytrace_range: 3.0  #Maximum range for clearing free space
footprint: [[0.14, 0.14], [0.14, -0.14], [-0.14, 0.14], [-0.14, -0.14]] #The robot is rectangular; this sets the area the robot occupies in its coordinate frame
inflation_radius: 0.55 #Safety margin kept around obstacles

observation_sources: laser_scan_sensor #Only use the lidar data

laser_scan_sensor: {sensor_frame: laser, data_type: LaserScan, topic: scan, marking: true, clearing: true} #Set the lidar-related parameters

 

2.3 global_costmap_params.yaml

Global costmap configuration file:

global_costmap: 
 global_frame: /map #The global costmap is referenced to /map
 robot_base_frame: base_link #base_frame is base_link
 update_frequency: 5.0 #Map update frequency
 static_map: true #Use a static map and initialize from it
 transform_tolerance: 0.8 #Set the tf tolerance to 0.8; adjust this to your hardware, since on the Raspberry Pi values below 0.8 keep producing tf timeout warnings

 

2.4 local_costmap_params.yaml

Local costmap configuration file:

local_costmap:
 global_frame: odom #The local costmap is referenced to odom
 robot_base_frame: base_link #base_frame is base_link
 update_frequency: 5.0 #Map update frequency
 publish_frequency: 2.0 #Frequency at which the costmap publishes visualization information
 static_map: false #The local costmap is continuously updated, so this is set to false
 rolling_window: true #Use a rolling window so the robot stays at the center of the costmap
 width: 4.0 #Width of the costmap
 height: 6.0 #Height of the costmap
 resolution: 0.05 #Costmap resolution

 

2.5 base_local_planner_params.yaml

Local Planner configuration file

TrajectoryPlannerROS:
 max_vel_x: 0.45 #Maximum speed in the x-axis direction
 min_vel_x: 0.1 #Minimum speed in the x-axis direction
 max_vel_theta: 1.0 #Maximum angular velocity
 min_in_place_vel_theta: 0.4 #Minimum in-place rotational velocity

 acc_lim_theta: 3.2 #Maximum angular acceleration
 acc_lim_x: 2.5 #Maximum acceleration in the x-axis direction
 acc_lim_y: 2.5 #Maximum acceleration in the y-axis direction

2.6 Start the move_base node

roslaunch diego_nav diego_run_gmapping_amcl_flashgo.launch
Diego 1# now has both localization and navigation functions, so next we run a navigation test.
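Before running the full test script, a quick way to check that move_base is responding is to publish a single goal from the command line. This assumes the default move_base_simple/goal topic and a map frame; the coordinates below are placeholders and should be adjusted to positions on your own map.

# send one goal to move_base; the robot should start planning toward it
rostopic pub -1 /move_base_simple/goal geometry_msgs/PoseStamped '{header: {frame_id: "map"}, pose: {position: {x: 0.5, y: 0.0, z: 0.0}, orientation: {w: 1.0}}}'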

3. Navigation test

For the navigation test, place obstacles at fixed indoor positions, draw a map of them with gmapping or hector, and then start amcl and move_base as described above for localization and navigation.

3.1 Navigation test code

We can modify the navigation test code below, changing the goal locations to positions that exist on the actual map.

#!/usr/bin/env python

import roslib
import rospy
import actionlib
from actionlib_msgs.msg import *
from geometry_msgs.msg import Pose, PoseWithCovarianceStamped, Point, Quaternion, Twist
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from random import sample
from math import pow, sqrt

class NavTest():
    def __init__(self):
        rospy.init_node('nav_test', anonymous=True)

        rospy.on_shutdown(self.shutdown)

        # How long in seconds should the robot pause at each location?
        self.rest_time = rospy.get_param("~rest_time", 10)

        # Are we running in the fake simulator?
        self.fake_test = rospy.get_param("~fake_test", False)

        # Goal state return values
        goal_states = ['PENDING', 'ACTIVE', 'PREEMPTED',
                       'SUCCEEDED', 'ABORTED', 'REJECTED',
                       'PREEMPTING', 'RECALLING', 'RECALLED',
                       'LOST']

        # Set up the goal locations. Poses are defined in the map frame.
        # An easy way to find the pose coordinates is to point-and-click
        # Nav Goals in RViz when running in the simulator.
        # Pose coordinates are then displayed in the terminal
        # that was used to launch RViz.
        locations = dict()

        locations['point_0'] = Pose(Point(0.0, 2.0, 0.000), Quaternion(0.000, 0.000, 0.223, 0.975))
        locations['point_1'] = Pose(Point(0.5, 2.0, 0.000), Quaternion(0.000, 0.000, -0.670, 0.743))

        # Publisher to manually control the robot (e.g. to stop it)
        self.cmd_vel_pub = rospy.Publisher('cmd_vel', Twist)

        # Subscribe to the move_base action server
        self.move_base = actionlib.SimpleActionClient("move_base", MoveBaseAction)

        rospy.loginfo("Waiting for move_base action server...")

        # Wait 60 seconds for the action server to become available
        self.move_base.wait_for_server(rospy.Duration(60))

        rospy.loginfo("Connected to move base server")

        # A variable to hold the initial pose of the robot to be set by
        # the user in RViz
        initial_pose = PoseWithCovarianceStamped()

        # Variables to keep track of success rate, running time,
        # and distance traveled
        n_locations = len(locations)
        n_goals = 0
        n_successes = 0
        i = n_locations
        distance_traveled = 0
        start_time = rospy.Time.now()
        running_time = 0
        location = ""
        last_location = ""

        # Get the initial pose from the user
        #rospy.loginfo("*** Click the 2D Pose Estimate button in RViz to set the robot's initial pose...")
        #rospy.wait_for_message('initialpose', PoseWithCovarianceStamped)
        #self.last_location = Pose()
        #rospy.Subscriber('initialpose', PoseWithCovarianceStamped, self.update_initial_pose)

        # Make sure we have the initial pose
        #while initial_pose.header.stamp == "":
        #    rospy.sleep(1)

        rospy.loginfo("Starting navigation test")

        # Begin the main loop and run through a sequence of locations
        while not rospy.is_shutdown():
            # If we've gone through the current sequence,
            # start with a new random sequence
            if i == n_locations:
                i = 0
                sequence = sample(locations, n_locations)
                # Skip over first location if it is the same as
                # the last location
                if sequence[0] == last_location:
                    i = 1

            # Get the next location in the current sequence
            location = sequence[i]

            # Keep track of the distance traveled.
            # Use updated initial pose if available.
            if initial_pose.header.stamp == "":
                distance = sqrt(pow(locations[location].position.x -
                                    locations[last_location].position.x, 2) +
                                pow(locations[location].position.y -
                                    locations[last_location].position.y, 2))
            else:
                rospy.loginfo("Updating current pose.")
                distance = sqrt(pow(locations[location].position.x -
                                    initial_pose.pose.pose.position.x, 2) +
                                pow(locations[location].position.y -
                                    initial_pose.pose.pose.position.y, 2))
                initial_pose.header.stamp = ""

            # Store the last location for distance calculations
            last_location = location

            # Increment the counters
            i += 1
            n_goals += 1

            # Set up the next goal location
            self.goal = MoveBaseGoal()
            self.goal.target_pose.pose = locations[location]
            self.goal.target_pose.header.frame_id = 'map'
            self.goal.target_pose.header.stamp = rospy.Time.now()

            # Let the user know where the robot is going next
            rospy.loginfo("Going to: " + str(location))

            # Start the robot toward the next location
            self.move_base.send_goal(self.goal)

            # Allow 5 minutes to get there
            finished_within_time = self.move_base.wait_for_result(rospy.Duration(300))

            # Check for success or failure
            if not finished_within_time:
                self.move_base.cancel_goal()
                rospy.loginfo("Timed out achieving goal")
            else:
                state = self.move_base.get_state()
                if state == GoalStatus.SUCCEEDED:
                    rospy.loginfo("Goal succeeded!")
                    n_successes += 1
                    distance_traveled += distance
                    rospy.loginfo("State:" + str(state))
                else:
                    rospy.loginfo("Goal failed with error code: " + str(goal_states[state]))

            # How long have we been running?
            running_time = rospy.Time.now() - start_time
            running_time = running_time.secs / 60.0

            # Print a summary success/failure, distance traveled and time elapsed
            rospy.loginfo("Success so far: " + str(n_successes) + "/" +
                          str(n_goals) + " = " +
                          str(100 * n_successes/n_goals) + "%")
            rospy.loginfo("Running time: " + str(trunc(running_time, 1)) +
                          " min Distance: " + str(trunc(distance_traveled, 1)) + " m")
            rospy.sleep(self.rest_time)

    def update_initial_pose(self, initial_pose):
        self.initial_pose = initial_pose

    def shutdown(self):
        rospy.loginfo("Stopping the robot...")
        self.move_base.cancel_goal()
        rospy.sleep(2)
        self.cmd_vel_pub.publish(Twist())
        rospy.sleep(1)

def trunc(f, n):
    # Truncates/pads a float f to n decimal places without rounding
    slen = len('%.*f' % (n, f))
    return float(str(f)[:slen])

if __name__ == '__main__':
    try:
        NavTest()
        rospy.spin()
    except rospy.ROSInterruptException:
        rospy.loginfo("AMCL navigation test finished.")

3.2 Start navigation

rosrun diego_nav nav_test.py
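If rosrun reports that it cannot find or execute the script, make sure nav_test.py has the executable bit set. The path below assumes the script is kept in the diego_nav package's scripts directory; adjust it to wherever you saved the file.

# make the test script executable so rosrun can launch it
chmod +x ~/catkin_ws/src/diego_nav/scripts/nav_test.py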

After it starts, the robot moves to the locations defined in the test file one by one and avoids the obstacles on the map. In a small, cluttered space, however, the robot may keep spinning in place without finding a way out; the relevant parameters have to be tuned repeatedly to get good results.

Lesson 10: Navigation-Create map based on the laser

Building a map is the foundation of SLAM; both localization and path planning are carried out against a map. In this lesson we build a map from the lidar, using the gmapping and hector slam packages.

1. Lidar

Lidar offers very high measurement accuracy and is an excellent choice for distance measurement, but it also has the drawbacks of relatively high power consumption and cost. 2D lidar, which is currently in widespread use, can only measure distances within a single plane, while 3D lidar is still too expensive to be widely adopted. Because a 2D lidar only measures one plane, its usable scenarios are limited; for navigation it is only suitable for relatively simple, structured environments.

In the Diego robot we use the Flash Go F4 lidar from EAI, which has the following features:

• 360 degree omnidirectional scanning, 10 Hz adaptive scanning frequency, 4500 ranging samples per second
• Ranging range of no less than 8 m, with a resolution of 1% of the range
• Low noise and long life
• Class 1 laser safety standard

The F4 connects to the host over a USB interface that carries both power and data, so installation is very convenient: just plug it into a USB port on the host.

1.1. Install the lidar ROS driver package

Execute the following commands:

cd ~/catkin_ws/src
git clone https://github.com/EAIBOT/flashgo.git
cd ..
catkin_make

After these commands complete, a flashgo directory is added under src; this is the F4 ROS package.

1.2. Modify the serial port in the configuration files

When the F4 is connected to the motherboard (the Raspberry Pi) over USB, a serial port appears in the system. Since the Arduino UNO also communicates over a USB serial port, there are two serial devices in the system, and the configuration must clearly distinguish the device names of the two.


ttyACM0 and ttyACM1 are the serial device names that appear. On diego 1#, ttyACM1 corresponds to the Arduino and ttyACM0 to the F4 lidar, so we need to modify the corresponding parameters. First modify the Arduino configuration: open my_arduino_params.yaml in the config directory.
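A simple way to list the two devices and confirm which name each one received is to inspect the serial device nodes directly, for example:

# list the USB serial devices currently present
ls -l /dev/ttyACM*
# optionally, show which physical device sits behind a given port
udevadm info -q all -n /dev/ttyACM0 | grep ID_MODEL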

Modify the serial port that ros_arduino_bridge connects to so that it uses ttyACM1, for example as below.
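In my_arduino_params.yaml the relevant entry is the serial port setting. A sketch of what it might look like after the change; the baud value shown is the ros_arduino_bridge default and may differ in your file.

# serial port used by ros_arduino_bridge to reach the Arduino UNO
port: /dev/ttyACM1
baud: 57600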

Then modify the F4 configuration: open the lidar.launch file in the flashgo/launch directory.

Modify the serial port that the F4 connects to so that it uses ttyACM0, for example as below.
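In lidar.launch the change is the serial port parameter of the lidar driver node. A sketch of the relevant line is shown below; the exact parameter and node names may differ slightly in the flashgo package, so check the file itself.

<!-- serial device used by the F4 lidar driver (name/format may vary in your lidar.launch) -->
<param name="port" type="string" value="/dev/ttyACM0"/>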

2. Use hector slam to build a map

2.1 Install hector slam

sudo apt-get install ros-kinetic-hector-slam

2.2 Write the hector slam launch file

The launch file below starts hector. One point needs attention: the EAI Flash lidar uses a left-handed coordinate system while ROS uses a right-handed one, so the file includes a static tf node, base_frame_2_laser, for the base-to-laser coordinate transform.

<?xml version="1.0"?>

<launch>
  <arg name="tf_map_scanmatch_transform_frame_name" default="/scanmatcher_frame"/>
  <arg name="pub_map_odom_transform" value="true"/> 
  <arg name="map_frame" value="map"/> 
  <arg name="base_frame" value="base_link"/> 
  <arg name="odom_frame" value="base_link"/>
  <arg name="scan_subscriber_queue_size" default="5"/>
  <arg name="scan_topic" default="scan"/>
  <arg name="map_size" default="2048"/>

  <master auto="start"/>

  <include file="$(find flashgo)/launch/lidar.launch" />

  <node name="arduino" pkg="ros_arduino_python" type="arduino_node.py" output="screen">
      <rosparam file="$(find ros_arduino_python)/config/my_arduino_params.yaml" command="load" />
  </node> 

  <node pkg="tf" type="static_transform_publisher" name="base_frame_2_laser" args="0 0 0 0 0 0 /$(arg base_frame) /laser 100"/>  
  <node pkg="tf" type="static_transform_publisher" name="map_2_odom" args="0.0 0.0 0.0 0 0 0.0 /odom /$(arg base_frame) 10"/>


  <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen">   
  <!-- Frame names -->
  <param name="map_frame" value="$(arg map_frame)" />
  <param name="base_frame" value="$(arg base_frame)" />
  <param name="odom_frame" value="$(arg base_frame)" />   
   <!-- Tf use -->
  <param name="use_tf_scan_transformation" value="true"/>
  <param name="use_tf_pose_start_estimate" value="false"/>
  <param name="pub_map_odom_transform" value="$(arg pub_map_odom_transform)"/> 
  <!-- Map size / start point -->
  <param name="map_resolution" value="0.050"/>
  <param name="map_size" value="$(arg map_size)"/>
  <param name="map_start_x" value="0.5"/>
  <param name="map_start_y" value="0.5" />
  <param name="map_multi_res_levels" value="2" />
  <!-- Map update parameters -->
  <param name="update_factor_free" value="0.4"/>
  <param name="update_factor_occupied" value="0.7" />   
  <param name="map_update_distance_thresh" value="0.2"/>
  <param name="map_update_angle_thresh" value="0.9" />
  <param name="laser_z_min_value" value = "-1.0" />
  <param name="laser_z_max_value" value = "1.0" />  
  <!-- Advertising config -->
  <param name="advertise_map_service" value="true"/>
  <param name="scan_subscriber_queue_size" value="$(arg scan_subscriber_queue_size)"/>
  <param name="scan_topic" value="$(arg scan_topic)"/>
  <param name="tf_map_scanmatch_transform_frame_name" value="$(arg tf_map_scanmatch_transform_frame_name)" />
  </node>

  <include file="$(find hector_geotiff)/launch/geotiff_mapper.launch"/> 

</launch>

2.3 start hector slam

Now execute the following command to start the lidar and hector slam:

roslaunch diego_nav diego_run_hector_flashgo.launch

We can now drive diego 1# around the room with the keyboard and start rviz to watch the map being drawn. Execute the following commands in separate terminals:
rosrun teleop_twist_keyboard teleop_twist_keyboard.py
rosrun rviz rviz

3. Use gmapping to build a map

3.1 Install the ROS navigation package

sudo apt-get install ros-kinetic-navigation

After this command completes, gmapping, amcl and move_base are all installed.
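To confirm that the packages are available after installation, you can ask rospack to locate them, for example:

# each command prints the install path of the package if it is present
rospack find gmapping
rospack find amcl
rospack find move_base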

3.2 Write the gmapping launch file

<launch>
<master auto="start"/>

<include file="$(find flashgo)/launch/lidar.launch" />

<node name="arduino" pkg="ros_arduino_python" type="arduino_node.py" output="screen">
<rosparam file="$(find ros_arduino_python)/config/my_arduino_params.yaml" command="load" />
</node>

<node pkg="tf" type="static_transform_publisher" name="base_frame_2_laser_link" args="0.0 0.0 0.2 3.14 3.14 0 /base_link /laser 40"/>

<!-- gmapping node -->
<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
<param name="base_frame" value="base_link"/>
<param name="odom_frame" value="odom"/>
<param name="maxUrange" value="4.0"/>
<param name="maxRange" value="5.0"/>
<param name="sigma" value="0.05"/>
<param name="kernelSize" value="3"/>
<param name="lstep" value="0.05"/>
<param name="astep" value="0.05"/>
<param name="iterations" value="5"/>
<param name="lsigma" value="0.075"/>
<param name="ogain" value="3.0"/>
<param name="lskip" value="0"/>
<param name="minimumScore" value="30"/>
<param name="srr" value="0.01"/>
<param name="srt" value="0.02"/>
<param name="str" value="0.01"/>
<param name="stt" value="0.02"/>
<param name="linearUpdate" value="0.05"/>
<param name="angularUpdate" value="0.0436"/>
<param name="temporalUpdate" value="-1.0"/>
<param name="resampleThreshold" value="0.5"/>
<param name="particles" value="8"/>
<!--
<param name="xmin" value="-50.0"/>
<param name="ymin" value="-50.0"/>
<param name="xmax" value="50.0"/>
<param name="ymax" value="50.0"/>
make the starting size small for the benefit of the Android client's memory...
-->
<param name="xmin" value="-1.0"/>
<param name="ymin" value="-1.0"/>
<param name="xmax" value="1.0"/>
<param name="ymax" value="1.0"/>

<param name="delta" value="0.05"/>
<param name="llsamplerange" value="0.01"/>
<param name="llsamplestep" value="0.01"/>
<param name="lasamplerange" value="0.005"/>
<param name="lasamplestep" value="0.005"/>
</node>
</launch>

3.3 start gmapping

Now execute the following command to start the lidar and gmapping:

roslaunch diego_nav diego_run_gmapping_flashgo.launch

We can now drive diego 1# around the room with the keyboard and start rviz to watch the map being drawn. Execute the following commands in separate terminals:
rosrun teleop_twist_keyboard teleop_twist_keyboard.py
rosrun rviz rviz

4. Generate map files

Whether the map is built dynamically with hector or with gmapping, we can save it to map files with a command.

4.1. Create the maps folder

Create the maps folder and give it 777 permissions so that map_server can write the map files into it, for example as below.
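For example, the folder can be created inside the diego_nav package and opened up with 777 permissions like this:

# create the maps folder and make it writable for map_server
mkdir -p ~/catkin_ws/src/diego_nav/maps
chmod 777 ~/catkin_ws/src/diego_nav/maps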

4.2. Generate the map files

Open a terminal, cd into the maps directory, and execute the map saving command:

cd ~/catkin_ws/src/diego_nav/maps/

rosrun map_server map_saver -f f4_gmapping
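map_saver writes two files: f4_gmapping.pgm (the occupancy grid image) and f4_gmapping.yaml (its metadata). To serve this map later, for example for the amcl localization described earlier, a map_server node can load the yaml file; a minimal launch-file sketch:

<!-- Sketch only: serve the saved map so amcl and move_base can use it -->
<node name="map_server" pkg="map_server" type="map_server" args="$(find diego_nav)/maps/f4_gmapping.yaml"/>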

 
