Lesson 12: Speech System - sphinx

A robot's speech system has two parts: speech recognition and speech synthesis. In ROS, speech recognition uses the sphinx package, while speech synthesis uses sound_play.

1. Speech Recognition

In ROS, speech recognition is handled by the sphinx (pocketsphinx) package. ROS Kinetic does not ship with it, so it has to be installed manually. The installation steps are as follows:

1.1 Installation

1.1.1 First install the following dependency packages

sudo apt-get install ros-kinetic-audio-common
sudo apt-get install libasound2
sudo apt-get install gstreamer0.10-*
sudo apt-get install python-gst0.10

1.1.2 Install libsphinxbase1
https://packages.debian.org/jessie/libsphinxbase1
Since Diego runs on a Raspberry Pi, download the armhf version.
After the download completes, run

sudo dpkg -i libsphinxbase1_0.8-6_armhf.deb

1.1.3 Install libpocketsphinx1
https://packages.debian.org/jessie/libpocketsphinx1
Download the armhf version here as well; after it finishes, run

sudo dpkg -i libpocketsphinx1_0.8-5_armhf.deb

1.1.4 Install gstreamer0.10-pocketsphinx
https://packages.debian.org/jessie/gstreamer0.10-pocketsphinx
Again download the armhf version; after the download, run

sudo dpkg -i gstreamer0.10-pocketsphinx_0.8-5_armhf.deb

1.1.5 Install pocketsphinx
Go into the workspace source directory and clone the git repository

cd ~/catkin_ws/src
git clone https://github.com/mikeferguson/pocketsphinx

1.1.6 Download the English speech model package pocketsphinx-hmm-en-tidigits (0.8-5)
https://packages.debian.org/jessie/pocketsphinx-hmm-en-tidigits

Create a model directory under the pocketsphinx package to hold the speech model files

cd ~/catkin_ws/src/pocketsphinx
mkdir model

Unpack the downloaded package and copy everything under its model directory into ~/catkin_ws/src/pocketsphinx/model
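
The copy can also be done from the command line. A minimal sketch, assuming the model files sit under usr/share/pocketsphinx/model inside the .deb (the exact file name and layout may differ; check first with dpkg -c):

dpkg -x pocketsphinx-hmm-en-tidigits_0.8-5_all.deb /tmp/tidigits
cp -r /tmp/tidigits/usr/share/pocketsphinx/model/* ~/catkin_ws/src/pocketsphinx/model/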
1.2 Create the launch script

Create a launch folder under ~/catkin_ws/src/pocketsphinx and create the file diego_voice_test.launch

cd ~/catkin_ws/src/pocketsphinx
mkdir launch
vi diego_voice_test.launch

The contents of diego_voice_test.launch are as follows

<launch>

  <node name="recognizer" pkg="pocketsphinx" type="recognizer.py" output="screen">
    <param name="lm" value="$(find pocketsphinx)/model/lm/en/tidigits.DMP"/>
    <param name="dict" value="$(find pocketsphinx)/model/lm/en/tidigits.dic"/>
    <param name="hmm" value="$(find pocketsphinx)/model/hmm/en/tidigits"/>
  </node>

</launch>

1.3 Modify recognizer.py

In the __init__(self) function, add reading of the hmm parameter

def __init__(self):
        # Start node
        rospy.init_node("recognizer")

        self._device_name_param = "~mic_name"  # Find the name of your microphone by typing pacmd list-sources in the terminal
        self._lm_param = "~lm"
        self._dic_param = "~dict"
        self._hmm_param = "~hmm" # add the hmm parameter

        # Configure mics with gstreamer launch config
        if rospy.has_param(self._device_name_param):
            self.device_name = rospy.get_param(self._device_name_param)
            self.device_index = self.pulse_index_from_name(self.device_name)
            self.launch_config = "pulsesrc device=" + str(self.device_index)
            rospy.loginfo("Using: pulsesrc device=%s name=%s", self.device_index, self.device_name)
        elif rospy.has_param('~source'):
            # common sources: 'alsasrc'
            self.launch_config = rospy.get_param('~source')
        else:
            self.launch_config = 'gconfaudiosrc'

        rospy.loginfo("Launch config: %s", self.launch_config)

        self.launch_config += " ! audioconvert ! audioresample " \
                            + '! vader name=vad auto-threshold=true ' \
                            + '! pocketsphinx name=asr ! fakesink'

        # Configure ROS settings
        self.started = False
        rospy.on_shutdown(self.shutdown)
        self.pub = rospy.Publisher('~output', String)
        rospy.Service("~start", Empty, self.start)
        rospy.Service("~stop", Empty, self.stop)

        if rospy.has_param(self._lm_param) and rospy.has_param(self._dic_param):
            self.start_recognizer()
        else:
            rospy.logwarn("lm and dic parameters need to be set to start recognizer.")

In the start_recognizer(self) function, add the code that reads the hmm parameter, as shown below

    def start_recognizer(self):
        rospy.loginfo("Starting recognizer... ")

        self.pipeline = gst.parse_launch(self.launch_config)
        self.asr = self.pipeline.get_by_name('asr')
        self.asr.connect('partial_result', self.asr_partial_result)
        self.asr.connect('result', self.asr_result)
        self.asr.set_property('configured', True)
        self.asr.set_property('dsratio', 1)

        # Configure language model
        if rospy.has_param(self._lm_param):
            lm = rospy.get_param(self._lm_param)
        else:
            rospy.logerr('Recognizer not started. Please specify a language model file.')
            return

        if rospy.has_param(self._dic_param):
            dic = rospy.get_param(self._dic_param)
        else:
            rospy.logerr('Recognizer not started. Please specify a dictionary.')
            return
        # read the hmm property from the launch configuration
        if rospy.has_param(self._hmm_param):
            hmm = rospy.get_param(self._hmm_param)
        else:
            rospy.logerr('Recognizer not started. Please specify a hmm.')
            return

        self.asr.set_property('lm', lm)
        self.asr.set_property('dict', dic)
        self.asr.set_property('hmm', hmm)       # set the hmm property

        self.bus = self.pipeline.get_bus()
        self.bus.add_signal_watch()
        self.bus_id = self.bus.connect('message::application', self.application_message)
        self.pipeline.set_state(gst.STATE_PLAYING)
        self.started = True

1.4 Start sphinx

roslaunch pocketsphinx diego_voice_test.launch

Now you can talk to your robot; note that it will only recognize words that appear in the model's dictionary
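
The recognizer node publishes each recognized phrase on its ~output topic (with the node name used in the launch file above, that is /recognizer/output). A minimal listener sketch that reacts to the results might look like this (the node and file names here are illustrative):

#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def callback(msg):
    # msg.data holds the recognized phrase, e.g. "one two three"
    rospy.loginfo("Heard: %s", msg.data)

if __name__ == '__main__':
    rospy.init_node('voice_listener')
    rospy.Subscriber('/recognizer/output', String, callback)
    rospy.spin()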

sphinx recognizes speech reasonably well in a fixed acoustic environment, but once the environment changes and different noise appears, the recognition rate drops noticeably; this is a difficulty shared by today's speech recognition technology in general.

2. Speech Synthesis

ROS already ships with a complete speech synthesis package, sound_play, which supports English synthesis only; run the following commands to test it

rosrun sound_play soundplay_node.py
rosrun sound_play say.py "hi, i am diego."
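
sound_play can also be called from your own node through its Python client library. A minimal sketch, assuming soundplay_node.py is already running:

#!/usr/bin/env python
import rospy
from sound_play.libsoundplay import SoundClient

if __name__ == '__main__':
    rospy.init_node('say_hello')
    voice = SoundClient()
    rospy.sleep(1)                 # give the client time to connect to the sound_play node
    voice.say("hi, i am diego")    # synthesize a short English phrase
    rospy.sleep(2)                 # keep the node alive while the audio plays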

The speech system is now in place; later lessons will build other applications on top of it.

Lesson 11: Navigation - Localization & Path Planning

With a map already available, the robot needs to localize itself within it, which is done with the ROS amcl package; we then publish a goal position and let move_base plan a path around obstacles to the destination.

1. Localization

1.1 Write the amcl launch script

<launch>
<master auto="start"/>

<include file="$(find rplidar_ros)/launch/rplidar.launch" />

<node name="arduino" pkg="ros_arduino_python" type="arduino_node.py" output="screen">
<rosparam file="$(find ros_arduino_python)/config/my_arduino_params.yaml" command="load" />
</node>

<node pkg="tf" type="static_transform_publisher" name="base_frame_2_laser_link" args="-0.025 0.02 0.095 0.0 0.0 0 /base_link /laser 50"/>

<!-- Map server -->
<node name="map_server" pkg="map_server" type="map_server" args="$(find diego_nav)/maps/rplidar_gmapping.yaml" />

<!-- amcl node -->
<node pkg="amcl" type="amcl" name="amcl" output="screen">
<remap from="scan" to="scan"/>
<!-- Publish scans from best pose at a max of 10 Hz -->
<param name="initial_pose_x" value="0.0"/>
<param name="initial_pose_y" value="0.0"/>
<param name="initial_pose_a" value="0.0"/>
<param name="use_map_topic" value="true"/>
<param name="odom_model_type" value="diff"/>
<param name="odom_alpha5" value="0.1"/>
<param name="transform_tolerance" value="0.5" />
<param name="gui_publish_rate" value="10.0"/>
<param name="laser_max_beams" value="300"/>
<param name="min_particles" value="500"/>
<param name="max_particles" value="5000"/>
<param name="kld_err" value="0.1"/>
<param name="kld_z" value="0.99"/>
<param name="odom_alpha1" value="0.1"/>
<param name="odom_alpha2" value="0.1"/>
<!-- translation std dev, m -->
<param name="odom_alpha3" value="0.1"/>
<param name="odom_alpha4" value="0.1"/>
<param name="laser_z_hit" value="0.9"/>
<param name="laser_z_short" value="0.05"/>
<param name="laser_z_max" value="0.05"/>
<param name="laser_z_rand" value="0.5"/>
<param name="laser_sigma_hit" value="0.2"/>
<param name="laser_lambda_short" value="0.1"/>
<param name="laser_lambda_short" value="0.1"/>
<param name="laser_model_type" value="likelihood_field"/>
<!-- <param name="laser_model_type" value="beam"/> -->
<param name="laser_min_range" value="1"/>
<param name="laser_max_range" value="8"/>
<param name="laser_likelihood_max_dist" value="2.0"/>
<param name="update_min_d" value="0.2"/>
<param name="update_min_a" value="0.5"/>
<param name="resample_interval" value="1"/>
<param name="transform_tolerance" value="0.1"/>
<param name="recovery_alpha_slow" value="0.0"/>
<param name="recovery_alpha_fast" value="0.0"/>
</node>
</launch>

One point worth emphasizing: the robot's starting position must be configured. If the robot is simply placed at an arbitrary spot, amcl will not localize it on its own; an initial pose has to be specified. The map origin is the starting point used when the map was built with gmapping/hector, so the launch file sets the initial pose accordingly. Here we place the robot at that starting point and set all the values to 0.0.

<param name="initial_pose_x" value="0.0"/>
<param name="initial_pose_y" value="0.0"/>

1.2 Start the amcl node

roslaunch diego_nav diego_run_gmapping_amcl_flashgo.launch

Open a new terminal and start the keyboard teleop node

rosrun teleop_twist_keyboard teleop_twist_keyboard.py

Now we can drive the robot around with the keyboard and open RViz to watch the localization

rosrun rviz rviz

This picture shows a large cloud of red arrows, i.e. the particles produced by the Monte Carlo localization algorithm; the arrow direction is the robot's heading.

In this picture the particles are clearly more tightly clustered than in the previous one; from practical observation, the more the particles converge, the better the localization.

2. Path Planning

Path planning means letting the robot move autonomously from one position to another while avoiding obstacles along the way; it is the most intuitive part of what is usually meant by navigation. In ROS, path planning is implemented by the move_base package.

2.1 Write the move_base launch script

<launch>

<!-- move_base node -->
<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
<rosparam file="$(find diego_nav)/config/costmap_common_params.yaml" command="load" ns="global_costmap" />
<rosparam file="$(find diego_nav)/config/costmap_common_params.yaml" command="load" ns="local_costmap" />
<rosparam file="$(find diego_nav)/config/local_costmap_params.yaml" command="load" />
<rosparam file="$(find diego_nav)/config/global_costmap_params.yaml" command="load" />
<rosparam file="$(find diego_nav)/config/base_local_planner_params.yaml" command="load" />
</node>

</launch>
The launch file above can also be merged with the amcl launch script into a single file.
The move_base node's configuration parameters are spread across four files:
  • costmap_common_params.yaml
  • global_costmap_params.yaml
  • local_costmap_params.yaml
  • base_local_planner_params.yaml

2.2 costmap_common_params.yaml

The common costmap configuration file

obstacle_range: 2.5  # maximum range at which obstacles are added to the costmap
raytrace_range: 3.0  # maximum range used to raytrace free space
footprint: [[0.14, 0.14], [0.14, -0.14], [-0.14, 0.14], [-0.14, -0.14]] # the robot is rectangular; this is the footprint it occupies in its own coordinate frame
inflation_radius: 0.55 # safety margin kept around obstacles

observation_sources: laser_scan_sensor # only the lidar data is used

laser_scan_sensor: {sensor_frame: laser, data_type: LaserScan, topic: scan, marking: true, clearing: true} # lidar-related settings

 

2.3 global_costmap_params.yaml

The global costmap configuration file

global_costmap: 
 global_frame: /map # the global costmap is referenced to the /map frame
 robot_base_frame: base_link # the robot's base frame is base_link
 update_frequency: 5.0 # map update frequency
 static_map: true # use and initialize from the static map
 transform_tolerance: 0.8 # tf tolerance of 0.8 s; adjust for your hardware. On a Raspberry Pi, values below 0.8 trigger constant warnings about tf publishing timeouts

 

2.4 local_costmap_params.yaml

The local costmap configuration file

local_costmap:
 global_frame: odom # the local costmap is referenced to the odom frame
 robot_base_frame: base_link # the robot's base frame is base_link
 update_frequency: 5.0 # map update frequency
 publish_frequency: 2.0 # frequency at which the costmap publishes visualization information
 static_map: false # the local costmap is continuously updated, so this is false
 rolling_window: true # use a rolling window so the robot stays at the center of the costmap
 width: 4.0 # costmap width
 height: 6.0 # costmap height
 resolution: 0.05 # costmap resolution

 

2.5 base_local_planner_params.yaml

The local planner configuration file

TrajectoryPlannerROS:
 max_vel_x: 0.45 # maximum velocity along x
 min_vel_x: 0.1 # minimum velocity along x
 max_vel_theta: 1.0 # maximum angular velocity
 min_in_place_vel_theta: 0.4 # minimum in-place rotational velocity

 acc_lim_theta: 3.2 # maximum angular acceleration
 acc_lim_x: 2.5 # maximum acceleration along x
 acc_lim_y: 2.5 # maximum acceleration along y

2.6 Start the move_base node

roslaunch diego_nav diego_run_gmapping_amcl_flashgo.launch

At this point Diego 1# has both localization and navigation; next we run a navigation test.

3. Navigation Test

For the navigation test, place obstacles at fixed spots indoors, build a map containing them with gmapping or hector, and then start amcl and move_base as described above for localization and navigation.

3.1 Navigation test code

We can modify the navigation test code below, changing the waypoints to positions that actually exist on the map

#!/usr/bin/env python
import roslib;
import rospy
import actionlib
from actionlib_msgs.msg import *
from geometry_msgs.msg import Pose, PoseWithCovarianceStamped, Point, Quaternion, Twist
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from random import sample
from math import pow, sqrt

class NavTest():
    def __init__(self):
        rospy.init_node('nav_test', anonymous=True)

        rospy.on_shutdown(self.shutdown)

        # How long in seconds should the robot pause at each location?
        self.rest_time = rospy.get_param("~rest_time", 10)

        # Are we running in the fake simulator?
        self.fake_test = rospy.get_param("~fake_test", False)

        # Goal state return values
        goal_states = ['PENDING', 'ACTIVE', 'PREEMPTED',
                       'SUCCEEDED', 'ABORTED', 'REJECTED',
                       'PREEMPTING', 'RECALLING', 'RECALLED',
                       'LOST']

        # Set up the goal locations. Poses are defined in the map frame.
        # An easy way to find the pose coordinates is to point-and-click
        # Nav Goals in RViz when running in the simulator.
        # Pose coordinates are then displayed in the terminal
        # that was used to launch RViz.
        locations = dict()

        locations['point_0'] = Pose(Point(0.0, 2.0, 0.000), Quaternion(0.000, 0.000, 0.223, 0.975))
        locations['point_1'] = Pose(Point(0.5, 2.0, 0.000), Quaternion(0.000, 0.000, -0.670, 0.743))

        # Publisher to manually control the robot (e.g. to stop it)
        self.cmd_vel_pub = rospy.Publisher('cmd_vel', Twist)

        # Subscribe to the move_base action server
        self.move_base = actionlib.SimpleActionClient("move_base", MoveBaseAction)

        rospy.loginfo("Waiting for move_base action server...")

        # Wait 60 seconds for the action server to become available
        self.move_base.wait_for_server(rospy.Duration(60))

        rospy.loginfo("Connected to move base server")

        # A variable to hold the initial pose of the robot to be set by
        # the user in RViz
        initial_pose = PoseWithCovarianceStamped()

        # Variables to keep track of success rate, running time,
        # and distance traveled
        n_locations = len(locations)
        n_goals = 0
        n_successes = 0
        i = n_locations
        distance_traveled = 0
        start_time = rospy.Time.now()
        running_time = 0
        location = ""
        last_location = ""

        # Get the initial pose from the user
        #rospy.loginfo("*** Click the 2D Pose Estimate button in RViz to set the robot's initial pose...")
        #rospy.wait_for_message('initialpose', PoseWithCovarianceStamped)
        #self.last_location = Pose()
        #rospy.Subscriber('initialpose', PoseWithCovarianceStamped, self.update_initial_pose)

        # Make sure we have the initial pose
        #while initial_pose.header.stamp == "":
        #    rospy.sleep(1)

        rospy.loginfo("Starting navigation test")

        # Begin the main loop and run through a sequence of locations
        while not rospy.is_shutdown():
            # If we've gone through the current sequence,
            # start with a new random sequence
            if i == n_locations:
                i = 0
                sequence = sample(locations, n_locations)
                # Skip over first location if it is the same as
                # the last location
                if sequence[0] == last_location:
                    i = 1

            # Get the next location in the current sequence
            location = sequence[i]

            # Keep track of the distance traveled.
            # Use updated initial pose if available.
            if initial_pose.header.stamp == "":
                distance = sqrt(pow(locations[location].position.x -
                                    locations[last_location].position.x, 2) +
                                pow(locations[location].position.y -
                                    locations[last_location].position.y, 2))
            else:
                rospy.loginfo("Updating current pose.")
                distance = sqrt(pow(locations[location].position.x -
                                    initial_pose.pose.pose.position.x, 2) +
                                pow(locations[location].position.y -
                                    initial_pose.pose.pose.position.y, 2))
                initial_pose.header.stamp = ""

            # Store the last location for distance calculations
            last_location = location

            # Increment the counters
            i += 1
            n_goals += 1

            # Set up the next goal location
            self.goal = MoveBaseGoal()
            self.goal.target_pose.pose = locations[location]
            self.goal.target_pose.header.frame_id = 'map'
            self.goal.target_pose.header.stamp = rospy.Time.now()

            # Let the user know where the robot is going next
            rospy.loginfo("Going to: " + str(location))

            # Start the robot toward the next location
            self.move_base.send_goal(self.goal)

            # Allow 5 minutes to get there
            finished_within_time = self.move_base.wait_for_result(rospy.Duration(300))

            # Check for success or failure
            if not finished_within_time:
                self.move_base.cancel_goal()
                rospy.loginfo("Timed out achieving goal")
            else:
                state = self.move_base.get_state()
                if state == GoalStatus.SUCCEEDED:
                    rospy.loginfo("Goal succeeded!")
                    n_successes += 1
                    distance_traveled += distance
                    rospy.loginfo("State:" + str(state))
                else:
                    rospy.loginfo("Goal failed with error code: " + str(goal_states[state]))

            # How long have we been running?
            running_time = rospy.Time.now() - start_time
            running_time = running_time.secs / 60.0

            # Print a summary success/failure, distance traveled and time elapsed
            rospy.loginfo("Success so far: " + str(n_successes) + "/" +
                          str(n_goals) + " = " +
                          str(100 * n_successes/n_goals) + "%")
            rospy.loginfo("Running time: " + str(trunc(running_time, 1)) +
                          " min Distance: " + str(trunc(distance_traveled, 1)) + " m")
            rospy.sleep(self.rest_time)

    def update_initial_pose(self, initial_pose):
        self.initial_pose = initial_pose

    def shutdown(self):
        rospy.loginfo("Stopping the robot...")
        self.move_base.cancel_goal()
        rospy.sleep(2)
        self.cmd_vel_pub.publish(Twist())
        rospy.sleep(1)

def trunc(f, n):
    # Truncates/pads a float f to n decimal places without rounding
    slen = len('%.*f' % (n, f))
    return float(str(f)[:slen])

if __name__ == '__main__':
    try:
        NavTest()
        rospy.spin()
    except rospy.ROSInterruptException:
        rospy.loginfo("AMCL navigation test finished.")

3.2 Start the navigation test

rosrun diego_nav nav_test.py

Once started, the robot moves through the waypoints defined in the test script one after another and avoids the obstacles in the map. In a confined space, however, it may keep spinning in place without finding a way out; this is probably related to the parameter settings, and getting good results takes continued tuning.

Lesson 10: Navigation - Building a Map from Lidar Data

Map building is the foundation of SLAM; both localization and path planning need a map to work with. In this lesson we build maps from the lidar using two packages, gmapping and hector_slam.

1. The Lidar

Lidar offers high measurement accuracy and is an excellent choice for range sensing, but it also has drawbacks such as high power consumption and cost. The 2D lidars in common use today can only measure distances within a single plane around the sensor, while 3D lidar remains too expensive for wide adoption. The single-plane limitation of 2D lidar also restricts where it can be used: for navigation it only suits relatively simple, regular environments.

The Diego robot uses a Slamtec RPLIDAR A2 with the following characteristics:

• 360° omnidirectional scanning, 10 Hz adaptive scan rate, 4000 range samples per second
• 0.15-6 m measurement range
• 0.9° angular resolution
• <1% range resolution

The RPLIDAR connects to the host over USB, which provides both power and data, so installation is easy: just plug it into a USB port on the host.

1.1 Installing the lidar ROS driver package

Run the following commands

cd ~/catkin_ws/src
git clone https://github.com/robopeak/rplidar_ros.git
cd ..
catkin_make

After this completes, a new rplidar_ros directory appears under src; this is the rplidar ROS package.

With the driver installed, plug the RPLIDAR into a USB port and a new serial device appears (typically /dev/ttyUSB0); you can check with

ls /dev

2. Building a Map with hector_slam

2.1 Install hector_slam

sudo apt-get install ros-kinetic-hector-slam

2.2 Write the hector_slam launch file

The code below is a ready-made hector launch file. One thing to note: the launch file must publish a static tf transform (base_frame_2_laser) that maps the base frame to the lidar frame; the arguments of static_transform_publisher are x y z yaw pitch roll parent_frame child_frame period_in_ms.

<?xml version="1.0"?>

<launch>
  <arg name="tf_map_scanmatch_transform_frame_name" default="/odom_laser"/>
  <arg name="pub_map_odom_transform" value="true"/> 
  <arg name="map_frame" value="map"/> 
  <arg name="base_frame" value="base_link"/> 
  <arg name="odom_frame" value="odom"/>
  <arg name="scan_subscriber_queue_size" default="5"/>
  <arg name="scan_topic" default="scan"/>
  <arg name="map_size" default="1024"/>

  <master auto="start"/>

  <include file="$(find rplidar_ros)/launch/rplidar.launch" />

  <node name="arduino" pkg="ros_arduino_python" type="arduino_node.py" output="screen">
      <rosparam file="$(find ros_arduino_python)/config/my_arduino_params.yaml" command="load" />
  </node> 

  <node pkg="tf" type="static_transform_publisher" name="base_frame_2_laser" args="-0.025 0.02 0.095 0 0 0 /$(arg base_frame) /laser 100"/>  

  <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen">   
  <!-- Frame names -->
  <param name="map_frame" value="$(arg map_frame)" />
  <param name="base_frame" value="$(arg base_frame)" />
  <param name="odom_frame" value="$(arg odom_frame)" />   
   <!-- Tf use -->
  <param name="use_tf_scan_transformation" value="true"/>
  <param name="use_tf_pose_start_estimate" value="false"/>
  <param name="pub_map_odom_transform" value="$(arg pub_map_odom_transform)"/> 
  <!-- Map size / start point -->
  <param name="map_resolution" value="0.050"/>
  <param name="map_size" value="$(arg map_size)"/>
  <param name="map_start_x" value="0.5"/>
  <param name="map_start_y" value="0.5" />
  <param name="map_multi_res_levels" value="2" />
  <!-- Map update parameters -->
  <param name="update_factor_free" value="0.4"/>
  <param name="update_factor_occupied" value="0.7" />   
  <param name="map_update_distance_thresh" value="0.2"/>
  <param name="map_update_angle_thresh" value="0.9" />
  <param name="laser_z_min_value" value = "-1.0" />
  <param name="laser_z_max_value" value = "1.0" />  
  <param name="laser_max_dist" value = "5.8" />
  <param name="laser_min_dist" value = "0.15" /> 
  <!-- Advertising config -->
  <param name="advertise_map_service" value="true"/>
  <param name="scan_subscriber_queue_size" value="$(arg scan_subscriber_queue_size)"/>
  <param name="scan_topic" value="$(arg scan_topic)"/>
  <param name="tf_map_scanmatch_transform_frame_name" value="$(arg tf_map_scanmatch_transform_frame_name)" />
  </node>

  <include file="$(find hector_geotiff)/launch/geotiff_mapper.launch"/> 

</launch>

2.3 Start hector_slam

Now run the following command to bring up the lidar and hector mapping

roslaunch diego_nav diego_run_hector_rplidar.launch

We can then drive diego# around the room with the keyboard and start RViz to watch the map being drawn; run the following commands in separate terminals

rosrun teleop_twist_keyboard teleop_twist_keyboard.py
rosrun rviz rviz

3. Building a Map with gmapping

3.1 Install the ROS navigation stack

sudo apt-get install ros-kinetic-navigation

This installs gmapping, amcl, and move_base.

3.2 Write the gmapping launch file

<launch>
<master auto="start"/>

<include file="$(find rplidar_ros)/launch/rplidar.launch" />

<node name="arduino" pkg="ros_arduino_python" type="arduino_node.py" output="screen">
<rosparam file="$(find ros_arduino_python)/config/my_arduino_params.yaml" command="load" />
</node>

<node pkg="tf" type="static_transform_publisher" name="base_frame_2_laser_link" args="-0.025 0.02 0.095 0 0 0 /base_link /laser 50"/>

<!-- gmapping node -->
<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
<param name="base_frame" value="base_link"/>
<param name="odom_frame" value="odom"/>
<param name="maxUrange" value="6.0"/>
<param name="maxRange" value="5.0"/>
<param name="sigma" value="0.05"/>
<param name="kernelSize" value="3"/>
<param name="lstep" value="0.05"/>
<param name="astep" value="0.05"/>
<param name="iterations" value="5"/>
<param name="lsigma" value="0.075"/>
<param name="ogain" value="3.0"/>
<param name="lskip" value="0"/>
<param name="minimumScore" value="50"/>
<param name="srr" value="0.01"/>
<param name="srt" value="0.02"/>
<param name="str" value="0.01"/>
<param name="stt" value="0.02"/>
<param name="linearUpdate" value="0.05"/>
<param name="angularUpdate" value="0.0436"/>
<param name="temporalUpdate" value="-1.0"/>
<param name="resampleThreshold" value="0.5"/>
<param name="particles" value="30"/>
<!--
<param name="xmin" value="-50.0"/>
<param name="ymin" value="-50.0"/>
<param name="xmax" value="50.0"/>
<param name="ymax" value="50.0"/>
make the starting size small for the benefit of the Android client's memory...
-->
<param name="xmin" value="-1.0"/>
<param name="ymin" value="-1.0"/>
<param name="xmax" value="1.0"/>
<param name="ymax" value="1.0"/>

<param name="delta" value="0.05"/>
<param name="llsamplerange" value="0.01"/>
<param name="llsamplestep" value="0.01"/>
<param name="lasamplerange" value="0.005"/>
<param name="lasamplestep" value="0.005"/>
</node>
</launch>

3.3 Start gmapping

Now run the following command to bring up the lidar and gmapping

roslaunch diego_nav diego_run_gmapping_rplidar.launch

We can then drive diego# around the room with the keyboard and start RViz to watch the map being drawn; run the following commands in separate terminals

rosrun teleop_twist_keyboard teleop_twist_keyboard.py
rosrun rviz rviz

4. Generating the Map File

Whether the map is built live with hector or with gmapping, it can be saved to files with a command.

4.1 Create the maps folder

Create a maps folder in the diego_nav package and give it 777 permissions so that map_server (map_saver) can write the map files into it:

cd ~/catkin_ws/src/diego_nav
mkdir maps
chmod 777 maps

4.2 Generate the map file

Open a terminal, cd into the maps directory, and run the map saving command

cd ~/catkin_ws/src/diego_nav/maps/

rosrun map_server map_saver -f rplidar_gmapping

map_saver writes two files, rplidar_gmapping.pgm and rplidar_gmapping.yaml; these are what the map_server node loads in the amcl launch file above.

Lesson 9: Navigation - Diego 1#'s Navigation Framework

SLAM navigation is a fairly complex problem that involves many mathematical models and algorithms. Fortunately, on the ROS platform these models and algorithms have already been implemented and wrapped as ROS packages, which makes SLAM much easier to put together, so the diego 1# robot uses the ROS navigation framework.

1. The ROS Navigation Framework

The figure below is the official navigation framework; the official setup guide is at http://wiki.ros.org/navigation/Tutorials/RobotSetup

In this framework, the white and gray boxes are packages that ROS already provides:

Of these, gmapping and hector serve the same purpose, building the map. The difference is that gmapping relies on the robot's odometry (odom) data while mapping, whereas hector does not need odom; the algorithms differ, and in practice you choose one of the two.

The blue boxes are the packages that have to be customized for the chosen hardware platform.

2. Package Mapping for the Diego 1# Robot

Here the blocks in the official framework diagram are annotated to show, for each navigation function, the package used, the hardware it runs on, and any notes:

  • move_base - package: move_base; hardware: Raspberry Pi
  • map_server - package: gmapping or hector; hardware: Raspberry Pi, lidar or Xtion (the lidar and the Xtion depth camera are alternatives; use one or the other)
  • amcl - package: amcl; hardware: Raspberry Pi, lidar or Xtion (the lidar and the Xtion depth camera are alternatives; use one or the other)
  • base_controller - package: ros_arduino_bridge; hardware: Arduino UNO (the ros_arduino_bridge stack contains the base_controller)
  • odometry source - package: ros_arduino_bridge; hardware: Arduino UNO plus Hall encoders (ros_arduino_firmware reads the Hall encoder counts, and ros_arduino_bridge computes and publishes the odom messages from them)
  • sensor transforms - static tf; hardware: Raspberry Pi (the transforms are essentially static and can be published from the launch file)
  • sensor sources - package: rplidar A2 driver or OpenNI; hardware: Raspberry Pi, lidar or Xtion (the rplidar lidar ships with a driver package that publishes the laser data; if the depth camera is used instead, the OpenNI package can publish laser data)

Lesson 8: Motion Control - Angular Velocity Calibration

1. Angular Calibration: Principle and Code

As with linear velocity calibration, angular calibration sends commands that rotate the robot through a given angle and then checks whether the actual rotation matches the command. Because of the limits of the hardware's precision, it is enough for the error to fall within an acceptable range.

Here we command the robot to rotate 360 degrees clockwise; the angular calibration code is as follows:

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist, Quaternion
from nav_msgs.msg import Odometry
import tf
from math import radians, copysign
from transform_utils import quat_to_angle, normalize_angle
import PyKDL
from math import pi

class CalibrateAngular():
    def __init__(self):
        # Give the node a name
        rospy.init_node('calibrate_angular', anonymous=False)

        # Set rospy to execute a shutdown function when terminating the script
        rospy.on_shutdown(self.shutdown)

        # How fast will we check the odometry values?
        self.rate = 10
        r = rospy.Rate(self.rate)

        # The test angle is 360 degrees
        self.test_angle = 2*pi # note: ROS works in radians, so do not write 360 here

        self.speed = 0.1 # radians per second
        self.tolerance = radians(1) # about 1 degree, expressed in radians
        self.odom_angular_scale_correction = 1
        self.start_test = True

        # Publisher to control the robot's speed
        self.cmd_vel = rospy.Publisher('/cmd_vel', Twist, queue_size=5)

        # The base frame is usually base_link or base_footprint
        self.base_frame = rospy.get_param('~base_frame', '/base_link')

        # The odom frame is usually just /odom
        self.odom_frame = rospy.get_param('~odom_frame', '/odom')

        # Initialize the tf listener
        self.tf_listener = tf.TransformListener()

        # Give tf some time to fill its buffer
        rospy.sleep(2)

        # Make sure we see the odom and base frames
        self.tf_listener.waitForTransform(self.odom_frame, self.base_frame, rospy.Time(), rospy.Duration(60.0))

        rospy.loginfo("Bring up rqt_reconfigure to control the test.")

        reverse = 1

        while not rospy.is_shutdown():
            if self.start_test:
                # Get the current rotation angle from tf
                self.odom_angle = self.get_odom_angle()
                rospy.loginfo("self.odom_angle: "+str(self.odom_angle))

                last_angle = self.odom_angle
                turn_angle = 0
                self.test_angle *= reverse
                error = self.test_angle - turn_angle
                rospy.loginfo("errir: "+str(error))

                # Alternate directions between tests
                reverse = -reverse

                while abs(error) > self.tolerance and self.start_test:
                    if rospy.is_shutdown():
                        return
                    rospy.loginfo("*************************** ")
                    # Rotate the robot to reduce the error
                    move_cmd = Twist()
                    move_cmd.angular.z = copysign(self.speed, error)
                    rospy.loginfo("move_cmd.angular.z: "+str(move_cmd.angular.z))
                    self.cmd_vel.publish(move_cmd)
                    r.sleep()

                    # Get the current rotation angle from tf                   
                    self.odom_angle = self.get_odom_angle()
                    rospy.loginfo("current rotation angle: "+str(self.odom_angle))

                    # Compute how far we have gone since the last measurement
                    delta_angle = self.odom_angular_scale_correction * normalize_angle(self.odom_angle - last_angle)
                    rospy.loginfo("delta_angle: "+str(delta_angle))

                    # Add to our total angle so far
                    turn_angle += abs(delta_angle)
                    rospy.loginfo("turn_angle: "+str(turn_angle))

                    # Compute the new error
                    error = self.test_angle - turn_angle
                    rospy.loginfo("error: "+str(error))

                    # Store the current angle for the next comparison
                    last_angle = self.odom_angle

                # Stop the robot
                self.cmd_vel.publish(Twist())

                # Update the status flag
                self.start_test = False
                params = {'start_test': False}
                rospy.loginfo(str(params))  # this script has no dynamic_reconfigure client, so just log the state change

            rospy.sleep(0.5)

        # Stop the robot
        self.cmd_vel.publish(Twist())

    def get_odom_angle(self):
        # Get the current transform between the odom and base frames
        try:
            (trans, rot)  = self.tf_listener.lookupTransform(self.odom_frame, self.base_frame, rospy.Time(0))
        except (tf.Exception, tf.ConnectivityException, tf.LookupException):
            rospy.loginfo("TF Exception")
            return

        # Convert the rotation from a quaternion to an Euler angle

        return quat_to_angle(Quaternion(*rot))

    def shutdown(self):
        # Always stop the robot when shutting down the node
        rospy.loginfo("Stopping the robot...")
        self.cmd_vel.publish(Twist())
        rospy.sleep(1)
if __name__ == '__main__':
    try:
        CalibrateAngular()
    except:
        rospy.loginfo("Calibration terminated.")

2. Code Walkthrough

First import the required packages (these match the imports of the angular calibration script above)

import rospy
from geometry_msgs.msg import Twist, Quaternion
from nav_msgs.msg import Odometry
import tf
from math import radians, copysign
from transform_utils import quat_to_angle, normalize_angle
import PyKDL
from math import pi

Standard ROS initialization: initialize the node and set the loop rate


class CalibrateAngular():
    def __init__(self):
        # Give the node a name
        rospy.init_node('calibrate_angular', anonymous=False)

        # Set rospy to execute a shutdown function when terminating the script
        rospy.on_shutdown(self.shutdown)

        # How fast will we check the odometry values?
        self.rate = 10
        r = rospy.Rate(self.rate)

Set the calibration parameters

test_angle: the angle to rotate, in radians; speed: the rotation speed in rad/s; tolerance: the accuracy tolerance; odom_angular_scale_correction: scale factor applied to the odometry readings; start_test: switch that starts the test

        # The test angle is 360 degrees
        self.test_angle = 2*pi # 360 degrees, expressed in radians

        self.speed = 0.1 # radians per second
        self.tolerance = radians(1) # about 1 degree, expressed in radians
        self.odom_angular_scale_correction = 1
        self.start_test = True

Publish Twist messages to drive the robot

        # Publisher to control the robot's speed
        self.cmd_vel = rospy.Publisher('/cmd_vel', Twist, queue_size=5)

Initialize base_frame, odom_frame, and the tf listener

        # The base frame is base_footprint for the TurtleBot but base_link for Pi Robot
        self.base_frame = rospy.get_param('~base_frame', '/base_link')

        # The odom frame is usually just /odom
        self.odom_frame = rospy.get_param('~odom_frame', '/odom')

        # Initialize the tf listener
        self.tf_listener = tf.TransformListener()

        # Give tf some time to fill its buffer
        rospy.sleep(2)

        # Make sure we see the odom and base frames
        self.tf_listener.waitForTransform(self.odom_frame, self.base_frame, rospy.Time(), rospy.Duration(60.0))

        rospy.loginfo("Bring up rqt_reconfigure to control the test.")

Initialize the direction flag that alternates the rotation direction between tests

        reverse = 1

The while loop that follows rotates the robot through the test angle; see the inline comments for details

        while not rospy.is_shutdown():
            if self.start_test:
                # Get the current rotation angle from tf
                self.odom_angle = self.get_odom_angle()
                rospy.loginfo("self.odom_angle: "+str(self.odom_angle))

                last_angle = self.odom_angle
                turn_angle = 0
                self.test_angle *= reverse
                error = self.test_angle - turn_angle
                rospy.loginfo("errir: "+str(error))

                # Alternate directions between tests
                reverse = -reverse
                
                # error is the remaining angle to the target; the loop below drives it toward 0
                while abs(error) > self.tolerance and self.start_test:
                    if rospy.is_shutdown():
                        return
                    rospy.loginfo("*************************** ")
                    # Rotate the robot to reduce the error
                    move_cmd = Twist()
                    move_cmd.angular.z = copysign(self.speed, error)
                    rospy.loginfo("move_cmd.angular.z: "+str(move_cmd.angular.z))
                    self.cmd_vel.publish(move_cmd)
                    r.sleep()

                    # Get the current rotation angle from tf
                    self.odom_angle = self.get_odom_angle()
                    rospy.loginfo("current rotation angle: "+str(self.odom_angle))

                    # Compute how far we have turned since the last measurement
                    delta_angle = self.odom_angular_scale_correction * normalize_angle(self.odom_angle - last_angle)
                    rospy.loginfo("delta_angle: "+str(delta_angle))

                    # Add to our total angle so far
                    turn_angle += abs(delta_angle)
                    rospy.loginfo("turn_angle: "+str(turn_angle))

                    # Compute the new error
                    error = self.test_angle - turn_angle
                    rospy.loginfo("error: "+str(error))

                    # Store the current angle for the next comparison
                    last_angle = self.odom_angle

                # Stop the robot
                self.cmd_vel.publish(Twist())

                # Update the status flag
                self.start_test = False
                params = {'start_test': False}
                rospy.loginfo(str(params))  # this script has no dynamic_reconfigure client, so just log the state change

            rospy.sleep(0.5)

Get the current rotation angle from the odom-to-base transform

    def get_odom_angle(self):
        # Get the current transform between the odom and base frames
        try:
            (trans, rot)  = self.tf_listener.lookupTransform(self.odom_frame, self.base_frame, rospy.Time(0))
        except (tf.Exception, tf.ConnectivityException, tf.LookupException):
            rospy.loginfo("TF Exception")
            return

        # Convert the rotation from a quaternion to an Euler angle

        return quat_to_angle(Quaternion(*rot))

Shut down the node

    def shutdown(self):
        # Always stop the robot when shutting down the node
        rospy.loginfo("Stopping the robot...")
        self.cmd_vel.publish(Twist())
        rospy.sleep(1)

3. Calibration

With the calibration node above we can calibrate the robot's angular velocity. Find a flat area of about 2 m x 2 m, place the robot facing a fixed direction, run the following command, and check whether the robot rotates exactly 360 degrees:

rosrun diego_nav calibrate_angular.py

The main parameter affecting angular velocity is wheel_track. If the configured value matches the measured one but the required accuracy still cannot be reached and the robot does not rotate through the commanded angle, you can adjust this parameter slightly as compensation until the result is satisfactory.
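
As a rough rule of thumb (assuming the standard differential-drive odometry model, in which the reported yaw is proportional to 1/wheel_track), scale wheel_track by the ratio between the angle the odometry reported and the angle you actually measured:

def corrected_wheel_track(old_track, odom_angle_deg, measured_angle_deg):
    # odom_angle_deg: rotation the odometry believed it made (360 when the script finishes)
    # measured_angle_deg: rotation you actually measured on the floor
    return old_track * odom_angle_deg / measured_angle_deg

# Example: the script stops once odometry reports 360 degrees, but the robot
# physically turned about 380 degrees -> the configured wheel_track is too large.
print(corrected_wheel_track(0.18, 360.0, 380.0))   # ~0.171 m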

To achieve high-precision control, high-precision hardware is required.

Lesson 7: Motion Control - Linear Velocity Calibration

After the previous steps the robot can move, but only just: it cannot yet be controlled precisely. To reach reasonable control accuracy the robot must be calibrated, so that the distance it drives and the angle it turns match the commands. Calibration splits into linear velocity calibration and angular velocity calibration; this lesson covers the linear velocity part.

1. Linear Calibration: Principle and Code

The idea of linear calibration is to send a command that drives the robot a fixed distance and check whether the distance actually traveled matches the command. Because of hardware precision limits, it is enough for the error to be within an acceptable range.

Here we command the robot to drive forward 1 m; the linear calibration code is as follows:

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist, Point
from math import copysign, sqrt, pow
import tf
class CalibrateLinear():
    def __init__(self):
        # Give the node a name
        rospy.init_node('calibrate_linear', anonymous=False)

        # Set rospy to execute a shutdown function when terminating the script
        rospy.on_shutdown(self.shutdown)

        # How fast will we check the odometry values?
        self.rate = 10
        r = rospy.Rate(self.rate)

        # Set the distance to travel
        self.test_distance = 1.0 # meters
        self.speed = 1.0 # meters per second
        self.tolerance = 0.01 # meters
        self.odom_linear_scale_correction = 1.0
        self.start_test = True

        # Publisher to control the robot's speed
        self.cmd_vel = rospy.Publisher('/cmd_vel', Twist, queue_size=5)

        # The base frame is base_footprint for the TurtleBot but base_link for Pi Robot
        self.base_frame = rospy.get_param('~base_frame', '/base_link')

        # The odom frame is usually just /odom
        self.odom_frame = rospy.get_param('~odom_frame', '/odom')

        # Initialize the tf listener
        self.tf_listener = tf.TransformListener()

        # Give tf some time to fill its buffer
        rospy.sleep(2)

        # Make sure we see the odom and base frames
        self.tf_listener.waitForTransform(self.odom_frame, self.base_frame, rospy.Time(), rospy.Duration(60.0))

        rospy.loginfo("Bring up rqt_reconfigure to control the test.")

        self.position = Point()

        # Get the starting position from the tf transform between the odom and base frames
        self.position = self.get_position()

        x_start = self.position.x
        y_start = self.position.y

        move_cmd = Twist()

        while not rospy.is_shutdown():
            # Stop the robot by default
            move_cmd = Twist()

            if self.start_test:
                # Get the current position from the tf transform between the odom and base frames
                self.position = self.get_position()

                # Compute the Euclidean distance from the target point
                distance = sqrt(pow((self.position.x - x_start), 2) +
                                pow((self.position.y - y_start), 2))

                # Correct the estimated distance by the correction factor
                distance *= self.odom_linear_scale_correction
                # How close are we?
                error =  distance - self.test_distance

                # Are we close enough?
                if not self.start_test or abs(error) <  self.tolerance:
                    self.start_test = False
                    params = False
                    rospy.loginfo(params)
                else:
                    # If not, move in the appropriate direction
                    move_cmd.linear.x = copysign(self.speed, -1 * error)
            else:
                self.position = self.get_position()
                x_start = self.position.x
                y_start = self.position.y

            self.cmd_vel.publish(move_cmd)
            r.sleep()

        # Stop the robot
        self.cmd_vel.publish(Twist())

    def get_position(self):
        # Get the current transform between the odom and base frames
        try:
            (trans, rot)  = self.tf_listener.lookupTransform(self.odom_frame, self.base_frame, rospy.Time(0))
        except (tf.Exception, tf.ConnectivityException, tf.LookupException):
            rospy.loginfo("TF Exception")
            return

        return Point(*trans)

    def shutdown(self):
        # Always stop the robot when shutting down the node
        rospy.loginfo("Stopping the robot...")
        self.cmd_vel.publish(Twist())
        rospy.sleep(1)

if __name__ == '__main__':
    try:
        CalibrateLinear()
        rospy.spin()
    except:
        rospy.loginfo("Calibration terminated.")

2. Code Walkthrough

First import the required packages

import rospy
from geometry_msgs.msg import Twist, Point
from math import copysign, sqrt, pow
import tf

Standard ROS initialization: initialize the node and set the loop rate


class CalibrateLinear():
    def __init__(self):
        # Give the node a name
        rospy.init_node('calibrate_linear', anonymous=False)

        # Set rospy to execute a shutdown function when terminating the script
        rospy.on_shutdown(self.shutdown)

        # How fast will we check the odometry values?
        self.rate = 10
        r = rospy.Rate(self.rate)

Set the calibration parameters

test_distance: the distance to travel; speed: the forward speed; tolerance: the accuracy tolerance; odom_linear_scale_correction: scale factor applied to the odometry readings; start_test: switch that starts the test

       # Set the distance to travel
        self.test_distance = 1.0 # meters
        self.speed = 1.0 # meters per second
        self.tolerance = 0.01 # meters
        self.odom_linear_scale_correction = 1.0
        self.start_test = True

Publish Twist messages to drive the robot forward

        # Publisher to control the robot's speed
        self.cmd_vel = rospy.Publisher('/cmd_vel', Twist, queue_size=5)

Initialize base_frame, odom_frame, and the tf listener

        # The base frame is base_footprint for the TurtleBot but base_link for Pi Robot
        self.base_frame = rospy.get_param('~base_frame', '/base_link')

        # The odom frame is usually just /odom
        self.odom_frame = rospy.get_param('~odom_frame', '/odom')

        # Initialize the tf listener
        self.tf_listener = tf.TransformListener()

        # Give tf some time to fill its buffer
        rospy.sleep(2)

        # Make sure we see the odom and base frames
        self.tf_listener.waitForTransform(self.odom_frame, self.base_frame, rospy.Time(), rospy.Duration(60.0))

        rospy.loginfo("Bring up rqt_reconfigure to control the test.")

Initialize the starting position

        self.position = Point()

        # Get the starting position from the tf transform between the odom and base frames
        self.position = self.get_position()

        x_start = self.position.x
        y_start = self.position.y

        move_cmd = Twist()

The while loop that follows drives the robot forward 1 m; see the inline comments for details


        while not rospy.is_shutdown():
            # Stop the robot by default
            move_cmd = Twist()

            if self.start_test:
                # Get the current position from the tf transform between the odom and base frames
                self.position = self.get_position() # get the current position

                # compute the distance traveled from the starting position
                distance = sqrt(pow((self.position.x - x_start), 2) +
                                pow((self.position.y - y_start), 2))

                # Correct the estimated distance by the correction factor
                distance *= self.odom_linear_scale_correction

                # compute the remaining error relative to the target distance
                error =  distance - self.test_distance

                # Are we close enough?
                if not self.start_test or abs(error) <  self.tolerance: # if the target distance has been reached, stop
                    self.start_test = False
                    params = False
                    rospy.loginfo(params)
                else:
                    # otherwise keep driving; if we have overshot, reverse the motors to back up
                    move_cmd.linear.x = copysign(self.speed, -1 * error)
            else:
                self.position = self.get_position()
                x_start = self.position.x
                y_start = self.position.y

            self.cmd_vel.publish(move_cmd)  # publish the Twist control message
            r.sleep()

Get the current position from the tf data

    def get_position(self):
        # Get the current transform between the odom and base frames
        try:
            (trans, rot)  = self.tf_listener.lookupTransform(self.odom_frame, self.base_frame, rospy.Time(0))
        except (tf.Exception, tf.ConnectivityException, tf.LookupException):
            rospy.loginfo("TF Exception")
            return

        return Point(*trans)

Shut down the node

    def shutdown(self):
        # Always stop the robot when shutting down the node
        rospy.loginfo("Stopping the robot...")
        self.cmd_vel.publish(Twist())
        rospy.sleep(1)

3. Calibration

With the calibration node above we can calibrate the robot's linear velocity. Find a flat area with at least 1 m of clearance, measure out a 1 m distance with a ruler, place the robot at the starting point, run the following command, and check whether the robot drives exactly 1 m

rosrun diego_nav calibrate_linear.py 

If a single run is exactly 1 m, that is of course the ideal result. If not, check whether the robot parameters in my_arduino_params.yaml are correct; also remember that ROS uses meters as its unit of length, so watch the unit conversions:

# === Robot drivetrain parameters
wheel_diameter: 0.02900
wheel_track: 0.18
encoder_resolution: 2 # from Pololu for 131:1 motors
gear_reduction: 75.0
motors_reversed: True

If the parameters in the configuration file match the measured values but the required accuracy still cannot be reached, this may be related to the motors' performance; in that case you can tweak the parameters in my_arduino_params.yaml slightly as compensation until the result is satisfactory.
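
For the linear case the dominant parameter is wheel_diameter (the distance covered per encoder tick is proportional to it). A rough rule of thumb, again assuming the standard differential-drive odometry model: scale wheel_diameter by the ratio between the distance actually driven and the distance the odometry reported:

def corrected_wheel_diameter(old_diameter, measured_distance, odom_distance=1.0):
    # measured_distance: distance the robot really covered, measured with the ruler
    # odom_distance: distance the odometry believed it covered (1.0 m when the script stops)
    return old_diameter * measured_distance / odom_distance

# Example: the script stops after odometry reports 1.0 m, but the robot only
# moved 0.95 m on the floor -> the configured wheel_diameter is slightly too large.
print(corrected_wheel_diameter(0.029, 0.95))   # ~0.0276 m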

To achieve high-precision control, high-precision hardware is required.

Lesson 6: Motion Control - Independent PID Speed Control for the Two Drive Motors

The base controller in ros_arduino_bridge uses a single set of PID parameters for both motors. In practice, differences in motor characteristics, terrain, or load balance can keep the robot from following the intended trajectory; the simplest symptom is that it cannot drive in a straight line. This calls for tuning the PID of each motor separately, so in this lesson we modify ros_arduino_bridge to support different PID parameters for the two motors.

1. Modify the Arduino code first

1.1 Modify diff_controller.h
Add PID control variables for the left and right motors:

/* PID Parameters */
int Kp = 20;
int Kd = 12;
int Ki = 0;
int Ko = 50;

int left_Kp=Kp;
int left_Kd=Kd;
int left_Ki=Ki;
int left_Ko=Ko;

int right_Kp=Kp;
int right_Kd=Kd;
int right_Ki=Ki;
int right_Ko=Ko;

Modify the resetPID function

void resetPID(){
 leftPID.TargetTicksPerFrame = 0.0;
 leftPID.Encoder = readEncoder(LEFT);
 leftPID.PrevEnc = leftPID.Encoder;
 leftPID.output = 0;
 leftPID.PrevInput = 0;
 leftPID.ITerm = 0;

 rightPID.TargetTicksPerFrame = 0.0;
 rightPID.Encoder = readEncoder(RIGHT);
 rightPID.PrevEnc = rightPID.Encoder;
 rightPID.output = 0;
 rightPID.PrevInput = 0;
 rightPID.ITerm = 0;
}

Define separate dorightPID() and doleftPID() functions for the right and left motors

/* PID routine to compute the next motor commands */
void dorightPID(SetPointInfo * p) {
  long Perror;
  long output;
  int input;

  //Perror = p->TargetTicksPerFrame - (p->Encoder - p->PrevEnc);
  input = p->Encoder - p->PrevEnc;
  Perror = p->TargetTicksPerFrame - input;

  /*
  * Avoid derivative kick and allow tuning changes,
  * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-derivative-kick/
  * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-tuning-changes/
  */
  //output = (Kp * Perror + Kd * (Perror - p->PrevErr) + Ki * p->Ierror) / Ko;
  // p->PrevErr = Perror;
  output = (right_Kp * Perror - right_Kd * (input - p->PrevInput) + p->ITerm) / right_Ko;
  p->PrevEnc = p->Encoder;

  output += p->output;
  // Accumulate Integral error *or* Limit output.
  // Stop accumulating when output saturates
  if (output >= MAX_PWM)
    output = MAX_PWM;
  else if (output <= -MAX_PWM)
    output = -MAX_PWM;
  else
  /*
  * allow turning changes, see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-tuning-changes/
  */
    p->ITerm += Ki * Perror;

  p->output = output;
  p->PrevInput = input;
}

/* PID routine to compute the next motor commands */
void doleftPID(SetPointInfo * p) {
  long Perror;
  long output;
  int input;

  //Perror = p->TargetTicksPerFrame - (p->Encoder - p->PrevEnc);
  input = p->Encoder - p->PrevEnc;
  Perror =p->TargetTicksPerFrame + input;

  /*
  * Avoid derivative kick and allow tuning changes,
  * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-derivative-kick/
  * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-tuning-changes/
  */
  //output = (Kp * Perror + Kd * (Perror - p->PrevErr) + Ki * p->Ierror) / Ko;
  // p->PrevErr = Perror;
  output = (left_Kp * Perror - left_Kd * (input - p->PrevInput) + p->ITerm) / left_Ko;
  p->PrevEnc = p->Encoder;

  output += p->output;
  // Accumulate Integral error *or* Limit output.
  // Stop accumulating when output saturates
  if (output >= MAX_PWM)
    output = MAX_PWM;
  else if (output <= -MAX_PWM)
    output = -MAX_PWM;
  else
  /*
  * allow turning changes, see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-tuning-changes/
  */
    p->ITerm += Ki * Perror;

  p->output = output;
  p->PrevInput = input;
}

Modify the updatePID() function

void updatePID() {
  /* Read the encoders */
  leftPID.Encoder =readEncoder(LEFT);
  rightPID.Encoder = readEncoder(RIGHT);

  /* If we're not moving there is nothing more to do */
  if (!moving){
    /*
    * Reset PIDs once, to prevent startup spikes,
    * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-initialization/
    * PrevInput is considered a good proxy to detect
    * whether reset has already happened
    */
    if (leftPID.PrevInput != 0 || rightPID.PrevInput != 0) resetPID();
    return;
  }

  /* Compute PID update for each motor */
  dorightPID(&rightPID);  // run the PID update for the right motor
  doleftPID(&leftPID);    // run the PID update for the left motor

  /* Set the motor speeds accordingly */
  setMotorSpeeds(leftPID.output, rightPID.output);

}

1.2 Modify encoder_driver.ino
Because the two motors spin in opposite directions, one of the encoder counts increases in the negative direction. When the robot actually drives, however, both motors move in the same direction, and the sign mismatch would upset the PID computation, so the left encoder reading is negated. Modify the readEncoder function.

 /* Wrap the encoder reading function */
 long readEncoder(int i) {
 if (i == LEFT) return 0-left_enc_pos;
 else return right_enc_pos;
 }

1.3 Modify ROSArduinoBridge.ino

Change the declaration of pid_args to the following

int pid_args[8];

Modify the runCommand() function: in case UPDATE_PID, comment out the original assignments to Kp, Kd, Ki and Ko and replace them with the following code

    case UPDATE_PID:
      while ((str = strtok_r(p, ":", &p)) != '\0') {
        pid_args[i] = atoi(str);
        i++;
      }
//      Kp = pid_args[0];
//      Kd = pid_args[1];
//      Ki = pid_args[2];
//      Ko = pid_args[3];

      left_Kp = pid_args[0];
      left_Kd = pid_args[1];
      left_Ki = pid_args[2];
      left_Ko = pid_args[3];

      right_Kp = pid_args[4];
      right_Kd = pid_args[5];
      right_Ki = pid_args[6];
      right_Ko = pid_args[7];
      Serial.println("OK");
      break;

The Arduino side now supports separate PID parameters for the two motors.

1.4 Modify ROSArduinoBridge.ino

Enlarge the argv1 and argv2 character arrays defined in this file. These two buffers hold the arguments of commands received from the host; since we added PID parameters, the command has become longer than the original buffers allowed.

// Character arrays to hold the first and second arguments
char argv1[32];
char argv2[32];

2. Modify the host-side ROS package

2.1 Modify my_arduino_params.yaml
In the PID parameter section, add PID parameters for the left and right motors

# === PID parameters
Kp: 10
Kd: 12
Ki: 0
Ko: 50
accel_limit: 1.0

left_Kp: 10
left_Kd: 12
left_Ki: 0
left_Ko: 50

right_Kp: 8
right_Kd: 12
right_Ki: 0
right_Ko: 50

2.2 Modify arduino_driver.py
Modify the update_pid() function

def update_pid(self, left_Kp, left_Kd, left_Ki, left_Ko, right_Kp, right_Kd, right_Ki, right_Ko):
        ''' Set the PID parameters on the Arduino
        '''
        print "Updating PID parameters"
        cmd = 'u ' + str(left_Kp) + ':' + str(left_Kd) + ':' + str(left_Ki) + ':' + str(left_Ko) + ':' + str(right_Kp) + ':' + str(right_Kd) + ':' + str(right_Ki) + ':' + str(right_Ko)
        self.execute_ack(cmd)  
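
For example, with the values from my_arduino_params.yaml above (left 10/12/0/50, right 8/12/0/50), the string assembled by this function and sent over the serial port would be

u 10:12:0:50:8:12:0:50

which the UPDATE_PID case in runCommand() then splits on ':' into the eight pid_args values.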

2.3 Modify base_controller.py
In __init__(self, arduino, base_frame), add initialization code for the left and right motor PID parameters

def __init__(self, arduino, base_frame):
        self.arduino = arduino
        self.base_frame = base_frame
        self.rate = float(rospy.get_param("~base_controller_rate", 10))
        self.timeout = rospy.get_param("~base_controller_timeout", 1.0)
        self.stopped = False

        pid_params = dict()
        pid_params['wheel_diameter'] = rospy.get_param("~wheel_diameter", "") 
        pid_params['wheel_track'] = rospy.get_param("~wheel_track", "")
        pid_params['encoder_resolution'] = rospy.get_param("~encoder_resolution", "") 
        pid_params['gear_reduction'] = rospy.get_param("~gear_reduction", 1.0)

        #modify by william: initialize the PID parameters for the left and right motors
        pid_params['left_Kp'] = rospy.get_param("~left_Kp", 20)
        pid_params['left_Kd'] = rospy.get_param("~left_Kd", 12)
        pid_params['left_Ki'] = rospy.get_param("~left_Ki", 0)
        pid_params['left_Ko'] = rospy.get_param("~left_Ko", 50)
        pid_params['right_Kp'] = rospy.get_param("~right_Kp", 20)
        pid_params['right_Kd'] = rospy.get_param("~right_Kd", 12)
        pid_params['right_Ki'] = rospy.get_param("~right_Ki", 0)
        pid_params['right_Ko'] = rospy.get_param("~right_Ko", 50)

        self.accel_limit = rospy.get_param('~accel_limit', 0.1)
        self.motors_reversed = rospy.get_param("~motors_reversed", False)

        # Set up PID parameters and check for missing values
        self.setup_pid(pid_params)

        # How many encoder ticks are there per meter?
        self.ticks_per_meter = self.encoder_resolution * self.gear_reduction  / (self.wheel_diameter * pi)

        # What is the maximum acceleration we will tolerate when changing wheel speeds?
        self.max_accel = self.accel_limit * self.ticks_per_meter / self.rate

        # Track how often we get a bad encoder count (if any)
        self.bad_encoder_count = 0

        now = rospy.Time.now()    
        self.then = now # time for determining dx/dy
        self.t_delta = rospy.Duration(1.0 / self.rate)
        self.t_next = now + self.t_delta

        # internal data        
        self.enc_left = None            # encoder readings
        self.enc_right = None
        self.x = 0                      # position in xy plane
        self.y = 0
        self.th = 0                     # rotation in radians
        self.v_left = 0
        self.v_right = 0
        self.v_des_left = 0             # cmd_vel setpoint
        self.v_des_right = 0
        self.last_cmd_vel = now

        # subscriptions
        rospy.Subscriber("cmd_vel", Twist, self.cmdVelCallback)

        # Clear any old odometry info
        self.arduino.reset_encoders()

        # Set up the odometry broadcaster
        self.odomPub = rospy.Publisher('odom', Odometry)
        self.odomBroadcaster = TransformBroadcaster()

        rospy.loginfo("Started base controller for a base of " + str(self.wheel_track) + "m wide with " + str(self.encoder_resolution) + " ticks per rev")
        rospy.loginfo("Publishing odometry data at: " + str(self.rate) + " Hz using " + str(self.base_frame) + " as base frame")

Modify the setup_pid(self, pid_params) function

def setup_pid(self, pid_params):
        # Check to see if any PID parameters are missing
        missing_params = False
        for param in pid_params:
            if pid_params[param] == "":
                print("*** PID Parameter " + param + " is missing. ***")
                missing_params = True

        if missing_params:
            os._exit(1)

        self.wheel_diameter = pid_params['wheel_diameter']
        self.wheel_track = pid_params['wheel_track']
        self.encoder_resolution = pid_params['encoder_resolution']
        self.gear_reduction = pid_params['gear_reduction']

        #self.Kp = pid_params['Kp']
        #self.Kd = pid_params['Kd']
        #self.Ki = pid_params['Ki']
        #self.Ko = pid_params['Ko']

        #self.arduino.update_pid(self.Kp, self.Kd, self.Ki, self.Ko)

        #modify by william
        self.left_Kp = pid_params['left_Kp']
        self.left_Kd = pid_params['left_Kd']
        self.left_Ki = pid_params['left_Ki']
        self.left_Ko = pid_params['left_Ko']

        self.right_Kp = pid_params['right_Kp']
        self.right_Kd = pid_params['right_Kd']
        self.right_Ki = pid_params['right_Ki']
        self.right_Ko = pid_params['right_Ko']

        # pass all eight parameters to update_pid
        self.arduino.update_pid(self.left_Kp, self.left_Kd, self.left_Ki, self.left_Ko, self.right_Kp, self.right_Kd, self.right_Ki, self.right_Ko)

With the changes above, ros_arduino_bridge can set different PID parameters for each of the two motors.

3. PID Tuning

Tuning the PID for two motors is fairly tedious and takes patient, repeated parameter adjustment until the two motors run at the same speed. My experience is to tune one motor first and then use it as the reference while tuning the other motor's parameters. The achievable accuracy is of course also limited by the motors and the encoder resolution. To make tuning easier, we use rqt_plot to plot the two motors' speeds and the PID inputs and outputs as curves.

3.1 Modify base_controller.py on the host

In __init__(self, arduino, base_frame), add a self.debugPID flag near the top of the function; debugPID=True turns the debug output on and debugPID=False turns it off

class BaseController:
    def __init__(self, arduino, base_frame):
        self.arduino = arduino
        self.base_frame = base_frame
        self.rate = float(rospy.get_param("~base_controller_rate", 10))
        self.timeout = rospy.get_param("~base_controller_timeout", 1.0)
        self.stopped = False
        self.debugPID = False

Still in __init__(self, arduino, base_frame), add the following code at the end of the function (the publishers use Int32 messages, so make sure Int32 is imported from std_msgs.msg at the top of the file)

if self.debugPID:
     self.lEncoderPub = rospy.Publisher('Lencoder', Int32)
     self.rEncoderPub = rospy.Publisher('Rencoder', Int32)
     self.lPidoutPub = rospy.Publisher('Lpidout', Int32)
     self.rPidoutPub = rospy.Publisher('Rpidout', Int32)
     self.lVelPub = rospy.Publisher('Lvel', Int32)
     self.rVelPub = rospy.Publisher('Rvel', Int32)

Modify the poll(self) function, adding the following code at the beginning

if self.debugPID:
     rospy.logdebug("poll start-------------------------------: ")
     try:
         left_pidin, right_pidin = self.arduino.get_pidin()
         self.lEncoderPub.publish(left_pidin)
         self.rEncoderPub.publish(right_pidin)
         rospy.logdebug("left_pidin: "+str(left_pidin))
         rospy.logdebug("right_pidin: "+str(right_pidin))
     except:
         rospy.logerr("getpidin exception count: ")
         return

     try:
         left_pidout, right_pidout = self.arduino.get_pidout()
         self.lPidoutPub.publish(left_pidout)
         self.rPidoutPub.publish(right_pidout)
         rospy.logdebug("left_pidout: "+str(left_pidout))
         rospy.logdebug("right_pidout: "+str(right_pidout))
     except:
         rospy.logerr("getpidout exception count: ")
         return

Also in poll(self), add the following code after if not self.stopped:

if self.debugPID:
     self.lVelPub.publish(self.v_left)
     self.rVelPub.publish(self.v_right)

3.2 Modify arduino_driver.py on the host

Add the get_pidin and get_pidout functions

def get_pidin(self):
    values = self.execute_array('i')
    if len(values) != 2:
         print "pidin was not 2"
         raise SerialException
         return None
    else:
         return values

def get_pidout(self):
    values = self.execute_array('f')
    if len(values) != 2:
         print "pidout was not 2"
         raise SerialException
         return None
    else:
         return values

3.3 Modify commands.h in the Arduino code

Add commands for reading the PID input and output

#define READ_PIDOUT    'f'
#define READ_PIDIN     'i'

 

3.4 Modify diff_controller.h in the Arduino code

Add the readPidIn and readPidOut functions at the end of the file

long readPidIn(int i) {
  long pidin=0;
    if (i == LEFT){
    pidin = leftPID.PrevInput;
  }else {
    pidin = rightPID.PrevInput;
  }
  return pidin;
}

long readPidOut(int i) {
  long pidout=0;
    if (i == LEFT){
    pidout = leftPID.output;
  }else {
    pidout = rightPID.output;
  }
  return pidout;
}

 

3.5 Modify ROSArduinoBridge.ino in the Arduino code

In the runCommand() function, add case READ_PIDIN: and case READ_PIDOUT: to the switch statement

  switch (cmd) {
    case GET_BAUDRATE:
      Serial.println(BAUDRATE);
      break;
    case READ_PIDIN:
      Serial.print(readPidIn(LEFT));
      Serial.print(" ");
      Serial.println(readPidIn(RIGHT));
      break;
    case READ_PIDOUT:
      Serial.print(readPidOut(LEFT));
      Serial.print(" ");
      Serial.println(readPidOut(RIGHT));
      break;
     ...

3.6. Running the PID tuning

Run the commands below, each in its own terminal.
Start roscore:

roscore

Source the workspace environment:

. ~/catkin_ws/devel/setup.bash

Launch the node:

roslaunch ros_arduino_python arduino.launch

Now we can publish Twist messages to drive the robot, for example:

rostopic pub /cmd_vel geometry_msgs/Twist -r 1 -- '[0.5, 0.0, 0.0]' '[0.0, 0.0, 0.0]'

Start rqt_plot:

rqt_plot
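
To plot the debug topics directly, you can also pass them on the command line. The topic names below come from the publishers added in base_controller.py above (std_msgs/Int32, field data) and assume the node is not launched inside a namespace:

rqt_plot /Lencoder/data /Rencoder/data /Lpidout/data /Rpidout/data /Lvel/data /Rvel/data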

For details on how to use rqt_plot see the official documentation: http://wiki.ros.org/rqt_plot
With the curves shown in rqt_plot you can now tune the gains following the classic PID tuning rhyme (translated loosely below):

Search for the best settings from small to large, in that order;
set the proportional term first, then the integral, and add the derivative last.
If the curve oscillates frequently, widen the proportional band (lower the gain);
if the curve drifts in big slow swings, narrow the proportional band (raise the gain).
If the curve recovers from deviations slowly, shorten the integral time;
if the curve oscillates with a long period, lengthen the integral time.
If the oscillation frequency is high, reduce the derivative action first;
if the deviation is large and the response sluggish, lengthen the derivative time.
An ideal curve shows two waves, the first peak about four times the second (4:1 decay).
Look, adjust and analyse again and again, and the control quality will not be poor.

Lesson 5: Motion control - ros_arduino_bridge

The ros_arduino_bridge stack includes an Arduino library (ROSArduinoBridge) and a collection of ROS packages for controlling an Arduino-based robot, using standard ROS messages and services. Its features include:

  • support for Ping sonar and Sharp infrared sensors
  • reading generic analog and digital sensors
  • controlling digital outputs
  • PWM servo control
  • base controller

On the Diego 1# robot we mainly use the base controller and the PWM servo control. To match the hardware chosen for Diego 1# we need to make some changes to this stack; below we install and adapt it step by step.

1. Installing ros_arduino_bridge

ros_arduino_bridge on GitHub: https://github.com/hbrobotics/ros_arduino_bridge

1.1. Download
Go into the src directory of your workspace; here catkin_ws is the workspace:

cd ~/catkin_ws/src
git clone https://github.com/hbrobotics/ros_arduino_bridge.git

1.2. Build

Build from the workspace directory:

cd <catkin_ws>
catkin_make

1.3. Copy the Arduino library files

The ros_arduino_bridge directory contains a ros_arduino_firmware subdirectory whose src folder holds the Arduino library and example code; copy it into the libraries directory of your Arduino IDE:

$ cd SKETCHBOOK_PATH    # the libraries directory of your Arduino IDE
$ cp -rp  `rospack find ros_arduino_firmware`/src/libraries/ROSArduinoBridge -T ROSArduinoBridge

You can also copy ROSArduinoBridge to the Arduino IDE on another Windows or Mac machine; after restarting the IDE it is ready to use.

2. Developing the Arduino code

2.1. Create the Diego 1# Arduino project

Open the ROSArduinoBridge example code from the Examples menu and save it under a project name of your choice; we then only need to modify the example code to suit our needs.

2.2. Overview of the example code files

  • ROSArduinoBridge.ino - the main program
  • commands.h - definitions of the serial commands
  • diff_controller.h - the PID control code
  • encoder_driver.h - encoder definitions; this targets the Arduino UNO only, using the interrupt pins D2, D3 and the analog pins A2, A3, so the encoder outputs must be wired accordingly. Note that each encoder needs two output channels:
    the left motor encoder outputs go to D2, D3; the right motor encoder outputs go to A2, A3
  • encoder_driver.ino - the encoder implementation
  • motor_driver.h - the motor driver interface; every motor driver board must implement the three functions defined in this file
  • motor_driver.ino - the motor driver implementation, which selects a driver library according to the #define; here I use an L298P board, so a new driver library has to be written, as described later
  • sensors.h - the sensor implementation
  • servos.h - the servo implementation

2.3. Code changes

2.3.1 Changes to ROSArduinoBridge.ino

The main changes are the following.

Enable the base controller:

#define USE_BASE      // Enable the base controller code
//#undef USE_BASE     // Disable the base controller code

Select the motor driver board:

/* Define the motor controller and encoder library you are using */
#ifdef USE_BASE
   /* The Pololu VNH5019 dual motor driver shield */
   //#define POLOLU_VNH5019

   /* The L298P dual motor driver shield: our own L298P motor driver board, which uses the L298P library we write ourselves */
   #define L298P


   /* The Pololu MC33926 dual motor driver shield */
   //#define POLOLU_MC33926

   /* The RoboGaia encoder shield */
   //#define ROBOGAIA

   /* Encoders directly attached to Arduino board: enable the Arduino UNO's on-board encoder counting */
   #define ARDUINO_ENC_COUNTER
#endif

Define the motor PWM range:

/* Maximum PWM signal */
#define MAX_PWM        255  // the maximum PWM value is 255

2.3.2. Changes to motor_driver.ino

Add support for the L298P motor driver board:

/***************************************************************
Motor driver definitions

Add a "#elif defined" block to this file to include support
for a particular motor driver. Then add the appropriate
#define near the top of the main ROSArduinoBridge.ino file.

*************************************************************/
#ifdef USE_BASE

#if defined POLOLU_VNH5019
/* Include the Pololu library */
#include "DualVNH5019MotorShield.h"

/* Create the motor driver object */
DualVNH5019MotorShield drive;

/* Wrap the motor driver initialization */
void initMotorController() {
drive.init();
}

/* Wrap the drive motor set speed function */
void setMotorSpeed(int i, int spd) {
if (i == LEFT) drive.setM1Speed(spd);
else drive.setM2Speed(spd);
}

// A convenience function for setting both motor speeds
void setMotorSpeeds(int leftSpeed, int rightSpeed) {
setMotorSpeed(LEFT, leftSpeed);
setMotorSpeed(RIGHT, rightSpeed);
}
#elif defined POLOLU_MC33926
/* Include the Pololu library */
#include "DualMC33926MotorShield.h"

/* Create the motor driver object */
DualMC33926MotorShield drive;

/* Wrap the motor driver initialization */
void initMotorController() {
drive.init();
}

/* Wrap the drive motor set speed function */
void setMotorSpeed(int i, int spd) {
if (i == LEFT) drive.setM1Speed(spd);
else drive.setM2Speed(spd);
}

// A convenience function for setting both motor speeds
void setMotorSpeeds(int leftSpeed, int rightSpeed) {
setMotorSpeed(LEFT, leftSpeed);
setMotorSpeed(RIGHT, rightSpeed);
}
#elif defined L298P
#include "DualL298PMotorShield.h"

/* Create the motor driver object */
DualL298PMotorShield drive;
/* Wrap the motor driver initialization */
void initMotorController() {
drive.init();
}

/* Wrap the drive motor set speed function */
void setMotorSpeed(int i, int spd) {
if (i == LEFT) drive.setM1Speed(spd);
else drive.setM2Speed(spd);
}

// A convenience function for setting both motor speeds
void setMotorSpeeds(int leftSpeed, int rightSpeed) {
setMotorSpeed(LEFT, leftSpeed);
setMotorSpeed(RIGHT, rightSpeed);
}

#else
#error A motor driver must be selected!
#endif

#endif

2.3.3. Changes to encoder_driver.h

/* *************************************************************
Encoder driver function definitions - by James Nugen
************************************************************ */

#ifdef ARDUINO_ENC_COUNTER
//below can be changed, but should be PORTD pins;
//otherwise additional changes in the code are required
#define LEFT_ENC_PIN_A PD2 //pin 2
#define LEFT_ENC_PIN_B PD3 //pin 3

//below can be changed, but should be PORTC pins
#define RIGHT_ENC_PIN_A PC2 //pin A2: one output channel of the right motor's Hall encoder goes to Arduino A2
#define RIGHT_ENC_PIN_B PC3 //pin A3: the other output channel of the right motor's Hall encoder goes to Arduino A3
#endif
long readEncoder(int i);
void resetEncoder(int i);
void resetEncoders();

2.3.4. Changes to encoder_driver.ino

/* Interrupt routine for RIGHT encoder, taking care of actual counting */
ISR (PCINT1_vect){
static uint8_t enc_last=0;

enc_last <<=2; //shift previous state two places
enc_last |= (PINC & (3 << 2)) >> 2; //read the current state into the lowest 2 bits; since we use A2 and A3 the mask and shift must match those pins

right_enc_pos += ENC_STATES[(enc_last & 0x0f)];
//right_enc_pos=enc_last;
}

2.3.5. The L298P driver library

Put the .h and .cpp files below in the same directory and copy that directory into the Arduino IDE libraries folder. DualL298PMotorShield.h:

#ifndef DualL298PMotorShield_h
#define DualL298PMotorShield_h

#include <Arduino.h>

class DualL298PMotorShield
{
  public:  
    // CONSTRUCTORS
    DualL298PMotorShield(); // Default pin selection.
    DualL298PMotorShield(unsigned char M1DIR, unsigned char M1PWM,
                           unsigned char M2DIR, unsigned char M2PWM); // User-defined pin selection. 

    // PUBLIC METHODS
    void init(); // Initialize the motor control pins.
    void setM1Speed(int speed); // Set speed for M1.
    void setM2Speed(int speed); // Set speed for M2.
    void setSpeeds(int m1Speed, int m2Speed); // Set speed for both M1 and M2.

  private:
    // L298P control pins; adjust these to match your own L298P board
    static const unsigned char _M1DIR = 4; // left motor direction, Arduino D4
    static const unsigned char _M2DIR = 7; // right motor direction, Arduino D7
    static const unsigned char _M1PWM = 5; // left motor PWM, Arduino D5
    static const unsigned char _M2PWM = 6; // right motor PWM, Arduino D6
};

#endif

DualL298PMotorShield.cpp:

#include "DualL298PMotorShield.h"

// Constructors ////////////////////////////////////////////////////////////////

DualL298PMotorShield::DualL298PMotorShield()
{
  //Pin map

}


// Public Methods //////////////////////////////////////////////////////////////
void DualL298PMotorShield::init()
{
// Define pinMode for the pins and set the frequency for timer1.

  pinMode(_M1DIR,OUTPUT);
  pinMode(_M1PWM,OUTPUT);
  pinMode(_M2DIR,OUTPUT);
  pinMode(_M2PWM,OUTPUT);

}
// Set speed for motor 1; speed is a number between -255 and 255
void DualL298PMotorShield::setM1Speed(int speed)
{
  unsigned char reverse = 0;

  if (speed < 0)
  {
    speed = -speed;  // Make speed a positive quantity
    reverse = 1;  // Preserve the direction
  }
  if (speed > 255)  // Max PWM dutycycle
    speed = 255;
  if (reverse)
  {
    digitalWrite(_M1DIR,LOW);
    analogWrite(_M1PWM, speed);
  }
  else
  {
    digitalWrite(_M1DIR,HIGH);
    analogWrite(_M1PWM, speed);
  }    
}

// Set speed for motor 2; speed is a number between -255 and 255
void DualL298PMotorShield::setM2Speed(int speed)
{
  unsigned char reverse = 0;

  if (speed < 0)
  {
    speed = -speed;  // Make speed a positive quantity
    reverse = 1;  // Preserve the direction
  }
  if (speed > 255)  // Max PWM dutycycle
    speed = 255;
  if (reverse)
  {
    digitalWrite(_M2DIR,LOW);
    analogWrite(_M2PWM, speed);
  }
  else
  {
    digitalWrite(_M2DIR,HIGH);
    analogWrite(_M2PWM, speed);
  }
}

// Set speed for motor 1 and 2
void DualL298PMotorShield::setSpeeds(int m1Speed, int m2Speed)
{
  setM1Speed(m1Speed);
  setM2Speed(m2Speed);
}

After these changes the sketch can be compiled and uploaded to the Arduino UNO.

3. ROS host-side development

3.1. Configure your robot's parameters
Go into the configuration directory:

$ roscd ros_arduino_python/config

Make a copy of the default configuration file:

$ cp arduino_params.yaml my_arduino_params.yaml

Open it with nano and edit it:

sudo nano my_arduino_params.yaml

The modified my_arduino_params.yaml is listed below; the main changes are enabling the base controller, setting the PID parameters, and setting the robot's physical parameters:

# For a direct USB cable connection, the port name is typically
# /dev/ttyACM# where # is a number such as 0, 1, 2, etc
# For a wireless connection like XBee, the port is typically
# /dev/ttyUSB# where # is a number such as 0, 1, 2, etc.

port: /dev/ttyACM0
baud: 57600
timeout: 0.1

rate: 50
sensorstate_rate: 10

use_base_controller: True
base_controller_rate: 10

# For a robot that uses base_footprint, change base_frame to base_footprint
base_frame: base_link

# === Robot drivetrain parameters
wheel_diameter: 0.02900 # wheel diameter (m)
wheel_track: 0.18 # distance between the two wheels (m)
encoder_resolution: 2 # number of slots on the encoder disc
gear_reduction: 75.0 # gear reduction ratio
motors_reversed: True

# === PID parameters
Kp: 10
Kd: 12
Ki: 0
Ko: 50
accel_limit: 1.0

# === Sensor definitions.  Examples only - edit for your robot.
#     Sensor type can be one of the follow (case sensitive!):
#     * Ping
#     * GP2D12
#     * Analog
#     * Digital
#     * PololuMotorCurrent
#     * PhidgetsVoltage
#     * PhidgetsCurrent (20 Amp, DC)



sensors: {
  #motor_current_left:   {pin: 4, type: PololuMotorCurrent, rate: 5},
  #motor_current_right:  {pin: 7, type: PololuMotorCurrent, rate: 5},
  #ir_front_center:      {pin: 2, type: GP2D12, rate: 10},
  #sonar_front_center:   {pin: 5, type: Ping, rate: 10},
  arduino_led:          {pin: 13, type: Digital, rate: 5, direction: output}
}

Once the file has been edited the node is ready to run. Before testing, it is worth checking roughly how the drivetrain parameters above translate into encoder ticks per metre; a small sanity check is sketched below.
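
The following sketch is only a rough check, not the exact conversion used by ros_arduino_bridge; it assumes one tick per encoder slot, i.e. encoder_resolution * gear_reduction ticks per wheel revolution:

# Rough sanity check of the drivetrain parameters above (not the actual
# base_controller.py code). Assumption: ticks per wheel revolution =
# encoder_resolution * gear_reduction.
from math import pi

wheel_diameter = 0.02900      # m, value from my_arduino_params.yaml above
encoder_resolution = 2        # slots on the encoder disc
gear_reduction = 75.0         # motor revolutions per wheel revolution

ticks_per_rev = encoder_resolution * gear_reduction      # 150 ticks
wheel_circumference = wheel_diameter * pi                # ~0.091 m
ticks_per_meter = ticks_per_rev / wheel_circumference    # ~1646 ticks per metre

print("ticks per wheel revolution: %.0f" % ticks_per_rev)
print("ticks per metre: %.1f" % ticks_per_meter)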

3.2. Test run

Start roscore:

roscore

Source the workspace environment:

. ~/catkin_ws/devel/setup.bash

Launch the node:

roslaunch ros_arduino_python arduino.launch

(Screenshot of the launched node omitted.)

Now we can publish Twist messages to control the robot, for example:

rostopic pub /cmd_vel geometry_msgs/Twist -r 1 -- '[2.0, 0.0, 0.0]' '[0.0, 0.0, 1.8]'

With this command the robot spins in place. Use the following command to watch the odom topic; its values keep changing:


rostopic echo /odom

At this point the robot can be driven with Twist messages and publishes the odom information needed by move base. The base is a two-wheel-drive tracked chassis, a typical differential-drive platform; however, motor characteristics, payload and mechanical assembly mean the actual speeds of the two motors can deviate from the commanded values, so the robot may not drive in a straight line. In the next lesson we will modify the ros_arduino_bridge package further to implement separate PID speed control for the two motors.

Lesson 4: Move control - kinematic model of a differential-drive base

Controlling the motion of a differential-drive base depends entirely on controlling its two drive wheels. The basic control methods are:

  • forward: both drive wheels turn forwards
  • backward: both drive wheels turn backwards
  • turn left: the left wheel turns slower than the right wheel
  • turn right: the left wheel turns faster than the right wheel

In practice the two drive motors are usually mounted symmetrically, so the polarity of one motor's leads normally has to be reversed to obtain the behaviour above; see Lesson 2: Hardware diagram for the wiring. Only with the corresponding kinematic model in mind can we control the base precisely, so let us now look at the kinematics of a differential-drive base.

1. Kinematic model

1.1 The right-handed coordinate system

Robot motion always takes place in some coordinate system, and ROS uses a right-handed coordinate system, so we first need to get familiar with it.

Hold your right hand in a fist: the thumb points along the Z axis, the index finger along the X axis and the middle finger along the Y axis. For our robot moving in a plane, motion control means controlling its motion in the X-Y plane of the world frame; the Z direction can be regarded as static, i.e. the linear velocity along Z is always 0. Seen from the robot frame the linear velocity along Y is also 0, while for angular velocity it is the other way round: the angular velocity about Z is non-zero and the angular velocities about X and Y are both 0.

1.2 Motion of a single drive wheel

For a tracked base, if we look at a single drive wheel on its own, it too is stationary along the Y axis of the world frame and only moves along the X axis; that is, its velocity along Y is 0.

1.3 Motion of a two-wheel differential-drive base

When the two drive wheels run at different speeds, the whole base turns towards the slower side. Based on this principle we can steer the robot by controlling the speeds of the two drive motors; if we want the robot to drive in a straight line, the two wheels must run at the same speed, otherwise it will turn. For example, with the Y and Z velocities of both wheels equal to 0, if the right wheel runs faster than the left wheel, the robot as a whole turns left.

From this principle, two problems have to be solved when controlling the robot:

  • given the robot's linear and angular velocity, how do we decompose them into control values for the two drive wheels?
  • given the speeds of the two drive wheels, how do we recover the robot's linear and angular velocity, and from them the odometry, i.e. dead reckoning?


2. Decomposing the velocity into control values for the two drive wheels

The robot's velocity here means the velocity between two adjacent control instants. ROS relies on sending control commands at a high rate to control the robot precisely (control rates above 10 Hz are typical), so over such a short interval we can approximate the arc travelled by the robot with the straight line between the two positions. With this assumption we can work in the robot's own coordinate frame.

Assume the robot moves with linear speed v in the robot frame and the time interval is Δt; the distance travelled in the robot frame is then v*Δt.

Let l be the distance from each drive wheel to the robot's centreline (half the wheel track; the same l is used in the dead-reckoning formulas below) and θ the rotation over the interval. The distance travelled by the left wheel is then v*Δt-l*sin(θ), which gives a left-wheel linear velocity Vl=(v*Δt-l*sin(θ))/Δt, and the right-wheel linear velocity is Vr=(v*Δt+l*sin(θ))/Δt. A short sketch of this decomposition follows.
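
A minimal sketch of the decomposition, using the same symbols as above (v, θ, Δt and l as half the wheel track); it simply evaluates the two formulas:

# Minimal sketch of the wheel-speed decomposition above.
# Assumptions: l is half the wheel track (wheel to centreline distance),
# theta is the rotation over the interval dt, v is the linear speed.
from math import sin

def wheel_speeds(v, theta, dt, l):
    # Vl = (v*dt - l*sin(theta)) / dt, Vr = (v*dt + l*sin(theta)) / dt
    v_left = (v * dt - l * sin(theta)) / dt
    v_right = (v * dt + l * sin(theta)) / dt
    return v_left, v_right

# Example: 0.2 m/s forward while turning 0.05 rad within a 0.1 s control period,
# with a 0.18 m wheel track, i.e. l = 0.09 m.
print(wheel_speeds(0.2, 0.05, 0.1, 0.09))   # approximately (0.155, 0.245)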

3. Dead reckoning

3.1 Dead reckoning from the commanded velocity

Assume the robot moves in its own frame with linear speed v and angular speed w, and the time interval is Δt. The distance travelled in the robot frame is v*Δt, so the distances travelled along the X and Y axes of the world frame are

Δx=v*Δt*cos(θ)

Δy=v*Δt*sin(θ)

and the rotation in radians is:

Δθ=w*Δt

The accumulated odometry over a period of time is then:

x=x+v*Δt*cos(θ)

y=y+v*Δt*sin(θ)

θ=θ+Δθ

The above is dead reckoning in the ideal case, from the commanded linear and angular velocity. In practice the robot does not move exactly at the commanded speed and there is always some error, so we usually estimate the trajectory from sensor data instead, for example lidar, depth cameras, motor encoders, ultrasonic sensors or gyroscopes. Some robots use only one of these, while more advanced robots fuse several sensors to improve the accuracy of the dead reckoning. Here we introduce the most common approach: dead reckoning from the motor encoder data.

3.2 Dead reckoning from encoder data

The encoders directly give the distances Δxl and Δxr travelled by the two drive wheels during Δt. The rotation can be computed as

sin(Δθ)=((Δxr-Δxl)/2)/l

Since Δt is very short, the distances travelled are also very short, so we can use the small-angle approximation

Δθ=sin(Δθ)=((Δxr-Δxl)/2)/l

The distance travelled by the robot as a whole is the average of the two wheel distances, (Δxl+Δxr)/2.

Projected onto the X and Y axes of the world frame, this gives:

Δx=cos(θ)*(Δxl+Δxr)/2

Δy=sin(θ)*(Δxl+Δxr)/2

The accumulated odometry over a period of time is then:

x=x+cos(θ)*(Δxl+Δxr)/2

y=y+sin(θ)*(Δxl+Δxr)/2

θ=θ+Δθ
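
As an illustration only (not the actual base_controller.py code), a short sketch of the encoder-based update above, with the same symbols (Δxl, Δxr, and l as half the wheel track):

# Illustrative sketch of the encoder-based dead reckoning above
# (not the actual ros_arduino_bridge code). dxl/dxr are the distances
# travelled by the left/right wheel during one interval, l is half the
# wheel track, and (x, y, theta) is the accumulated pose in the world frame.
from math import cos, sin

def update_pose(x, y, theta, dxl, dxr, l):
    d = (dxl + dxr) / 2.0               # distance travelled by the robot centre
    dtheta = ((dxr - dxl) / 2.0) / l    # small-angle rotation, as derived above
    x += d * cos(theta)
    y += d * sin(theta)
    theta += dtheta
    return x, y, theta

# Example: left wheel moved 9 mm and right wheel 11 mm in one interval,
# with l = 0.09 m (half of the 0.18 m wheel track).
pose = (0.0, 0.0, 0.0)
pose = update_pose(pose[0], pose[1], pose[2], 0.009, 0.011, 0.09)
print(pose)   # approximately (0.01, 0.0, 0.0111)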

With this kinematic model understood, we can control the robot's motion according to it.

Lesson 3: Software architecture and environment setup

1. Software architecture

The Diego 1# robot is built on the ROS platform; all of its functionality is implemented through ROS mechanisms, and it integrates a number of open-source packages. The main software components are:

  • The operating system is Ubuntu MATE 16.04 and the ROS version is kinetic.
  • ros_arduino_bridge: the package for communication between ROS and the Arduino.

    ros_arduino_firmware: runs on the Arduino and executes the commands sent from the corresponding modules on the Raspberry Pi, such as the PID computation, sensor handling and servo control.

    ros_arduino_node: runs on the Raspberry Pi as a ROS node and provides the corresponding services and topics.

    base_controller: runs on the Raspberry Pi, subscribes to Twist messages, converts them into control commands for the Arduino according to the robot's configuration, and also publishes the odometry (odom) messages used by other modules.

  • SLAM

    Gmapping: builds an indoor map from lidar or point cloud data; it also requires odom data.

    Hector: builds an indoor map from lidar or point cloud data; it does not need odom.

    amcl: localises the robot on an existing map using lidar or point cloud data.

    move_base: path planning; publishes Twist messages to drive the robot to a given goal.

  • Vision

    OpenNI: the depth camera driver package; it can drive the Xtion and Kinect and produces point cloud data.

    usb_cam: the USB camera driver package.

  • Voice & Speech

    sphinx: offline speech recognition package, mainly for English.

    Xunfei (iFLYTEK) speech: iFLYTEK's online speech recognition and synthesis system.

  • moveit: the arm control package; it can move the arm to a given pose and includes obstacle avoidance.

  • teleop_twist_keyboard-master: keyboard teleoperation of the robot.


2. Building the Raspberry Pi system image

The Raspberry Pi uses the armhf architecture, and according to the official ROS installation notes only ROS kinetic + Ubuntu Xenial supports armhf.

The officially recommended Xenial image is Ubuntu MATE 16.04.2 LTS,

so we use that version to avoid unknown problems later.

For details about installing ROS see the official installation guide: http://wiki.ros.org/ROS/Installation

2.1. Downloading Ubuntu MATE 16.04.2 LTS

Download page: https://ubuntu-mate.org/download/

Choose the Raspberry Pi build and a download method; after downloading, extract the archive to obtain the image file.

2.2. Writing the Ubuntu SD card for the Raspberry Pi on a Mac

a. Open a terminal.

b. cd to the directory containing the image file; here we keep it on the desktop:

cd Desktop

c. Find the device name of the SD card:

diskutil list

d. Unmount the disk:

diskutil unmountDisk /dev/disk3

e. Run the write command:

sudo dd bs=1m if=ubuntu-mate-16.04.2-desktop-armhf-raspberry-pi.img of=/dev/disk3

The write takes a while; wait for dd to finish.

On Windows you can use the Win32DiskImager tool instead, which has a simple graphical interface, so it is not described here.

Once the SD card is ready, insert it into the Raspberry Pi. The first boot requires some basic setup; see the official documentation: https://ubuntu-mate.org/raspberry-pi/

3. Installing ROS

This part follows the official installation guide: http://wiki.ros.org/kinetic/Installation/Ubuntu

3.1 Set up sources.list

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

3.2 Set up the key

sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116

3.3 Update the package index

sudo apt-get update

3.4 Install the ROS kinetic desktop variant

Because the Raspberry Pi's computing power is limited, we install the desktop variant and add further packages later as needed:

sudo apt-get install ros-kinetic-desktop

3.5 Initialise rosdep

sudo rosdep init
rosdep update

3.6 Environment setup

echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc

At this point the ROS installation on the Raspberry Pi is complete and you can start working directly on the Pi.

4. Creating the ROS workspace

To start developing with ROS, first create the ROS workspace:

$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws/src
$ catkin_init_workspace
