Lesson 5: DiegoAPP – Adding AMap (Gaode Map) and Displaying the Robot's Track

The original RobotCA app uses a foreign map service that cannot be accessed from within China. This article describes how to replace it with AMap (Gaode Map), subscribe to the GPS message, and display the robot's track on the map.

To add AMap to the app, we first need to apply for an AMap key; see the official AMap documentation for how: https://lbs.amap.com/api/android-sdk/guide/create-project/get-key/

1. Add AMap support in AndroidManifest.xml

Add the code marked by the red box in the figure below, where value is the AMap key you applied for.
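
A minimal sketch of that addition inside the application element (the meta-data name follows the AMap SDK documentation; the value below is a placeholder for your own key):

    <application>
        <!-- AMap key: replace the value with the key you applied for -->
        <meta-data
            android:name="com.amap.api.v2.apikey"
            android:value="YOUR_AMAP_KEY" />
    </application>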

2. Main implementation of the AMap functionality

Three files are involved:

  • LocationProvider.java
  • FragmentMap.xml
  • MapFragment.java

LocationProvider.java receives the ROS GPS message (NavSatFix) and updates the object's latitude/longitude. Since the app already subscribes to the GPS message, this file only needs to implement the corresponding MessageListener. The logic: via Android's internal message mechanism, update the Location variable whenever a message arrives, and call the MapFragment's addMarker method to add the robot's track to the map.

import android.location.Location;        // Android location container
import org.ros.message.MessageListener;  // rosjava message listener interface
import sensor_msgs.NavSatFix;            // ROS GPS fix message

public class LocationProvider implements MessageListener<NavSatFix> {

    private static final String TAG = "LocationProvider";

    private final Location LOCATION;
    private MapFragment mapFragement;

    /**
     * Default Constructor.
     */
    public LocationProvider() {
        //consumers = new ArrayList<>();

        LOCATION = new Location("ROS");
    }

    @Override
    public void onNewMessage(NavSatFix navSatFix) {
        LOCATION.setLatitude(navSatFix.getLatitude());
        LOCATION.setLongitude(navSatFix.getLongitude());
        if (mapFragement != null) { // guard: injected via setMapFragement() after the fragment is created
            mapFragement.addMarker();
        }
        //39.906901,116.397972
    }

    public Location getLocation(){
        return LOCATION;
    }

    public void setMapFragement(MapFragment mapFragement){
        this.mapFragement=mapFragement;
    }
}

The resource file FragmentMap.xml only needs a com.amap.api.maps.MapView added.
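
A minimal sketch of that layout addition (the id and sizing attributes are illustrative):

    <com.amap.api.maps.MapView
        android:id="@+id/map"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />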

The map-display logic in MapFragment.java follows the official AMap tutorial; readers can refer to the official documentation for the details. Here we only cover how to display the robot's track on the map from the ROS GPS message. First, declare a LocationProvider variable:

public class MapFragment extends Fragment implements AMap.OnMyLocationChangeListener {

    // Log tag String
    private static final String TAG = "MapFragment";
    protected static CameraPosition cameraPosition;
    private AMap aMap;
    private MapView mapView;
    private View mapLayout;
    private LocationProvider locationProvider;

Add an addMarker function for locationProvider to call:

    public void addMarker(){

        if (getCameraPosition() == null) {
            // First fix: center the camera on the robot and drop a marker for the track
            LatLng latLng = new LatLng(locationProvider.getLocation().getLatitude(), locationProvider.getLocation().getLongitude());
            aMap.moveCamera(CameraUpdateFactory.newCameraPosition(new CameraPosition(latLng, 10, 0, 0)));
            aMap.clear();
            aMap.addMarker(new MarkerOptions().position(latLng).title("DiegoRobot").snippet("DiegoRobot"));
        } else {
            // Otherwise restore the saved camera position
            aMap.moveCamera(CameraUpdateFactory.newCameraPosition(getCameraPosition()));
        }
    }

MapFragment can now display AMap and draw the robot's track on the map.

3. Add a menu entry for switching to the AMap view

This is implemented mainly in ControlApp.java.

First, add the corresponding resource icons in onCreate:

        int[] imgRes = new int[]{
                R.mipmap.ic_android_black_24dp,
                R.mipmap.ic_view_quilt_black_24dp,
                R.mipmap.ic_linked_camera_black_24dp,
                R.mipmap.ic_linked_camera_black_24dp,
                R.mipmap.ic_linked_camera_black_24dp,
                R.mipmap.ic_navigation_black_24dp,
                R.mipmap.ic_navigation_black_24dp,
                R.mipmap.ic_terrain_black_24dp,
                R.mipmap.ic_settings_black_24dp,
                R.mipmap.ic_info_outline_black_24dp
        };

Then add the switch logic in the selectItem function:

   private void selectItem(int position) {
...
            case 6:
                fragment = new LaserScanMapFragment();
                fragmentsCreatedCounter = fragmentsCreatedCounter + 1;
                currentfragment=6;
                break;
            case 7:
                fragment = new MapFragment();
                fragmentsCreatedCounter = fragmentsCreatedCounter + 1;
                currentfragment=7;
                break;

Add the corresponding strings to the resource file strings.xml; these populate the drawer menu titles (shown below as a string-array; the resource name is illustrative):

    
    
    <string-array name="feature_titles">
        <item>Select Robot</item>
        <item>Overview</item>
        <item>Camera0 view</item>
        <item>Camera1 view</item>
        <item>Double Camera view</item>
        <item>Laser Scan View</item>
        <item>Laser Make Map</item>
        <item>Outdoor Map</item>
        <item>Preferences</item>
        <item>About</item>
    </string-array>

The app can now switch to the AMap view from the menu.

Lesson 3: DiegoAPP – Robot Control

Robot control mainly involves the following files:

  • ControlApp: the main screen after connecting to the robot; it runs the control tasks and hosts the Fragments for the various functions
  • RobotController: the main robot-control code; it defines the ROS topics to publish and subscribe to, and how received messages are handled
  • Fragments: the Fragments implementing the main functions, switched via ControlApp

 

1. RobotController: main functions

Once RobotController is running, the app is connected to the robot and can control it and monitor its state. The main functions are:

  • Monitoring odometry: the robot's linear and angular velocity
  • Monitoring GPS: displaying the current GPS fix
  • Monitoring the robot's battery level
  • Receiving and displaying laser scan, dual-camera, and map data
  • Publishing cmd_vel messages, driven by the joystick or the phone's tilt sensor
  • AMap: plotting the robot's position in real time on AMap from the GPS messages it publishes

2. ControlApp: main code logic

ControlApp extends RosActivity and implements the ListView.OnItemClickListener interface; it declares Fragment, RobotInfo, RobotController, and other variables. The declaration section is as follows:

public class ControlApp extends RosActivity implements ListView.OnItemClickListener,
        IWaypointProvider, AdapterView.OnItemSelectedListener {

    /** Notification ticker for the App */
    public static final String NOTIFICATION_TICKER = "Diego Robot";
    /** Notification title for the App */
    public static final String NOTIFICATION_TITLE = "Diego Robot";

    /** The RobotInfo of the connected Robot */
    public static RobotInfo ROBOT_INFO;

    // Variables for managing the DrawerLayout
    private String[] mFeatureTitles;
    private DrawerLayout mDrawerLayout;
    private ListView mDrawerList;
    private ActionBarDrawerToggle mDrawerToggle;

    // NodeMainExecutor encapsulating the Robot's connection
    private NodeMainExecutor nodeMainExecutor;
    // The NodeConfiguration for the connection
    private NodeConfiguration nodeConfiguration;

    // Fragment for the Joystick
    private JoystickFragment joystickFragment;
    // Fragment for the HUD
    private HUDFragment hudFragment;


    // The RobotController for managing the connection to the Robot
    private RobotController controller;
    // The WarningSystem used for detecting imminent collisions
    private WarningSystem warningSystem;

    // Stuff for managing the current fragment
    private Fragment fragment = null;
    FragmentManager fragmentManager;
    int fragmentsCreatedCounter = 0;

    private int currentfragment=0;

    // For enabling/disabling the action menu
//    private boolean actionMenuEnabled = true;
    // The ActionBar spinner menu
    private Spinner actionMenuSpinner;

    // The index of the currently visible drawer
    private int drawerIndex = 1;

    // Log tag String
    private static final String TAG = "ControlApp";

    // List of waypoints
    private final LinkedList waypoints;
    // Specifies how close waypoints need to be to be considered touching
    private static final double MINIMUM_WAYPOINT_DISTANCE = 1.0;

    // Laser scan map // static so that it doesn't need to be saved/loaded every time the screen rotates
    private static LaserScanMap laserScanMap;



    // Bundle keys
    private static final String WAYPOINT_BUNDLE_ID = "com.diegorobot.app.waypoints";
    private static final String SELECTED_VIEW_NUMBER_BUNDLE_ID = "com.diegorobot.app.drawerIndex";
    private static final String CONTROL_MODE_BUNDLE_ID = "com.diegorobot.app.Views.Fragments.JoystickFragment.controlMode";

    // The saved instance state
    private Bundle savedInstanceState;

The main variables declared in ControlApp are:

  • ROBOT_INFO: the info class of the connected robot, read from the saved robot data; modified robot info is saved back to it
  • mFeatureTitles: the entries of the left sliding drawer menu, defined in the resource file strings.xml
  • nodeMainExecutor: the main node connecting to the ROS Master; rosjava requires this variable to connect to the Master
  • joystickFragment: the joystick Fragment
  • hudFragment: the heads-up-display Fragment, showing odom, GPS and battery status, plus the emergency-stop button
  • controller: the robot-control object, which drives the robot by publishing and subscribing to ROS topics
  • fragment: the Fragment currently shown

onCreate() initializes the relevant variables and differs little from a typical Android onCreate(); readers in doubt should consult an Android development tutorial, so it is not covered here.
The function worth explaining is selectItem(), which is called from the click handler to switch Fragments:

    private void selectItem(int position) {

        Bundle args = new Bundle();

        if (joystickFragment != null && getControlMode().ordinal() <= ControlMode.Tilt.ordinal()) {
            joystickFragment.show();
        }

        if (hudFragment != null) {
            hudFragment.show();
        }

        if (controller != null) {
            controller.initialize();
        }
        fragmentManager = getFragmentManager();

        setActionMenuEnabled(true);

        switch (position) {
            case 0:
                Log.d(TAG, "Drawer item 0 selected, finishing");

                fragmentsCreatedCounter = 0;
                currentfragment=0;

                int count = fragmentManager.getBackStackEntryCount();
                for (int i = 0; i < count; ++i) {
                    fragmentManager.popBackStackImmediate();
                }

                if (controller != null) {
                    controller.shutdownTopics();

                    new AsyncTask<Void, Void, Void>() {
                        @Override
                        protected Void doInBackground(Void... params) {
                            nodeMainExecutor.shutdownNodeMain(controller);
                            return null;
                        }
                    }.execute();
                }

                finish();

                return;

            case 1:
                fragment = new OverviewFragment();
                fragmentsCreatedCounter = 0;
                currentfragment=1;
                break;

            case 2:
                //fragment = new CameraViewFragment();
                fragment=new CameraTopViewFragment();
                fragmentsCreatedCounter = fragmentsCreatedCounter + 1;
                currentfragment=2;
                break;

            case 3:
                //fragment = new CameraViewFragment();
                fragment=new Camera1TopViewFragment();
                fragmentsCreatedCounter = fragmentsCreatedCounter + 1;
                currentfragment=3;
                break;

            case 4:
                //fragment = new CameraViewFragment();
                fragment=new CameraDoubleViewFragment();
                fragmentsCreatedCounter = fragmentsCreatedCounter + 1;
                currentfragment=4;
                break;

            case 5:
                fragment = new LaserScanFragment();
                fragmentsCreatedCounter = fragmentsCreatedCounter + 1;
                currentfragment=5;
                break;
            case 6:
                fragment = new LaserScanMapFragment();
                fragmentsCreatedCounter = fragmentsCreatedCounter + 1;
                currentfragment=6;
                break;
            case 7:
                fragment = new MapFragment();
                fragmentsCreatedCounter = fragmentsCreatedCounter + 1;
                currentfragment=7;
                break;

            case 8://modify by william
                if (joystickFragment != null)
                    joystickFragment.hide();
                if (hudFragment != null) {
                    hudFragment.hide();

                    boolean stop = controller.getMotionPlan() == null || !controller.getMotionPlan().isResumable();
                    stop &= !controller.hasPausedPlan();
                    hudFragment.toggleEmergencyStopUI(stop);
                }

                setActionMenuEnabled(false);
                stopRobot(false);

                fragment = new PreferencesFragment();
                fragmentsCreatedCounter = fragmentsCreatedCounter + 1;
                currentfragment=8;
                break;

            case 9:
                if (joystickFragment != null)
                    joystickFragment.hide();
                if (hudFragment != null) {
                    hudFragment.hide();

                    boolean stop = controller.getMotionPlan() == null || !controller.getMotionPlan().isResumable();
                    stop &= !controller.hasPausedPlan();
                    hudFragment.toggleEmergencyStopUI(stop);
                }

                setActionMenuEnabled(false);
                stopRobot(false);

                fragment = new AboutFragment();
                fragmentsCreatedCounter = fragmentsCreatedCounter + 1;
                currentfragment=9;
                break;

            default:
                break;
        }

        drawerIndex = position;

        try {
            //noinspection ConstantConditions
            ((RosFragment) fragment).initialize(nodeMainExecutor, nodeConfiguration);
        } catch (Exception e) {
            // Ignore
        }

        if (fragment != null) {
            fragment.setArguments(args);

            // Insert the fragment by replacing any existing fragment
            fragmentManager.beginTransaction()
                    .replace(R.id.content_frame, fragment)
                    .commit();

            if (fragment instanceof Savable && savedInstanceState != null)
                ((Savable) fragment).load(savedInstanceState);
        }

        // Highlight the selected item, update the title, and close the drawer
        mDrawerList.setItemChecked(position, true);
        mDrawerLayout.closeDrawer(mDrawerList);
        setTitle(mFeatureTitles[position]);

        // Refresh waypoints
        new AsyncTask<Void, Void, Void>() {
            @Override
            protected Void doInBackground(Void... params) {
                try {
                    Thread.sleep(100L);
                } catch (InterruptedException ignore) {}
                waypointsChanged();
                return null;
            }
        }.execute();
    }

This code selects and initializes a Fragment according to the menu item the user tapped, then swaps it into the main view. Later chapters describe the main Fragments in detail.

3. RobotController: main code logic

A ROS robot is monitored and controlled by publishing and subscribing to ROS topics, so RobotController mostly implements topic handling. In its declaration section, a variable is defined for every topic to be handled, as shown in the figure below.

In the figure, the red box marks the Twist publisher variable and the green boxes mark the subscriber variables; you can subscribe to other topics to extend the app as needed.
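
The declarations marked there can be sketched as follows (reconstructed from the code later in this section; exact types and modifiers may differ):

    // Red box: the cmd_vel (Twist) publisher and its timer state
    private Publisher<Twist> movePublisher;
    private Twist currentVelocityCommand;
    private Timer publisherTimer;
    private boolean publishVelocity;

    // Green boxes: one subscriber per monitored topic, e.g. GPS
    private Subscriber<NavSatFix> navSatFixSubscriber;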

Messages are handled mainly in refreshTopics(), which sets up the subscribe and publish logic. The code is too long to reproduce in full, so only the key parts are shown.

        // Get the correct topic names
        String moveTopic = PreferenceManager.getDefaultSharedPreferences(context)
                .getString(context.getString(R.string.prefs_joystick_topic_edittext_key),
                        context.getString(R.string.joy_topic));

        String navSatTopic = PreferenceManager.getDefaultSharedPreferences(context)
                .getString(context.getString(R.string.prefs_navsat_topic_edittext_key),
                        context.getString(R.string.navsat_topic));

        String batteryTopic = PreferenceManager.getDefaultSharedPreferences(context)
                .getString(context.getString(R.string.prefs_battery_topic_edittext_key),
                        context.getString(R.string.battery_topic));

This code reads the topic names from the preferences.

        // Refresh the Move Publisher
        if (movePublisher == null
                || !moveTopic.equals(movePublisher.getTopicName().toString())) {

            if (publisherTimer != null) {
                publisherTimer.cancel();
            }

            if (movePublisher != null) {
                movePublisher.shutdown();
            }

            // Start the move publisher
            movePublisher = connectedNode.newPublisher(moveTopic, Twist._TYPE);
            currentVelocityCommand = movePublisher.newMessage();

            publisherTimer = new Timer();
            publisherTimer.schedule(new TimerTask() {
                @Override
                public void run() { if (publishVelocity) {
                    movePublisher.publish(currentVelocityCommand);
                }
                }
            }, 0, 80);
            publishVelocity = false;
        }

This code initializes the Twist publishing logic: a timer publishes the current Twist message at a fixed interval (every 80 ms) whenever publishVelocity is set.
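
As a hedged sketch of how a control surface such as the joystick might feed that timer (the method name here is an assumption; the fields come from the code above):

    public void publishVelocity(double linearX, double angularZ) {
        // Update the message that the 80 ms timer keeps publishing
        currentVelocityCommand.getLinear().setX(linearX);
        currentVelocityCommand.getAngular().setZ(angularZ);
        publishVelocity = true; // let the timer start sending it
    }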

        // Refresh the NavSat Subscriber
        if (navSatFixSubscriber == null
                || !navSatTopic.equals(navSatFixSubscriber.getTopicName().toString())) {

            if (navSatFixSubscriber != null)
                navSatFixSubscriber.shutdown();

            // Start the NavSatFix subscriber
            navSatFixSubscriber = connectedNode.newSubscriber(navSatTopic, NavSatFix._TYPE);
            navSatFixSubscriber.addMessageListener(new MessageListener<NavSatFix>() {
                @Override
                public void onNewMessage(NavSatFix navSatFix) {
                    setNavSatFix(navSatFix);
                }
            });
        }

This code initializes the GPS topic subscription. When a GPS message arrives it is passed on through Android's message mechanism; any code that needs GPS data can implement this listener interface to receive and process fixes in real time.
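
setNavSatFix itself is not shown in the listing; a minimal sketch of how it might pass the fix on (the listener registration is an assumption, mirroring how Lesson 5's LocationProvider consumes NavSatFix):

    private NavSatFix navSatFix;
    private final ArrayList<MessageListener<NavSatFix>> navSatListeners = new ArrayList<>();

    protected void setNavSatFix(NavSatFix fix) {
        this.navSatFix = fix;
        // Forward the fix to every registered consumer, e.g. the LocationProvider
        for (MessageListener<NavSatFix> listener : navSatListeners) {
            listener.onNewMessage(fix);
        }
    }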

Lesson 2: DiegoAPP – Robot Setup and Selection

1. The two Activities

The app has two Activities:

  • RobotChooser: the Activity the app starts with (configured as the MAIN action), presenting the UI for adding and configuring robots
  • ControlApp: the main screen after connecting to a robot; it runs the control tasks, receives the ROS topic data, and displays it

Both Activities must be declared in the project's AndroidManifest.xml, with RobotChooser configured as MAIN, as shown in the figure below:
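
A minimal sketch of those declarations (attribute details are illustrative):

    <application>
        <activity android:name=".RobotChooser">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
        <activity android:name=".ControlApp" />
    </application>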

2. RobotChooser: functionality

RobotChooser is the first page the user enters; it provides the following functions:

  • Add a robot: tap the plus button in the upper-right corner to add a robot; each robot represents one ROS Master
  • Robot list: multiple robot masters can be added and are shown as a list
  • Edit a robot: tap the edit button to open the robot-info editing page, where the Master IP and topic names can be changed
  • Delete a robot: tap the delete button to remove a robot
  • Online status: if the signal icon in the robot list is gray, the robot is offline; otherwise it is online and can be connected to
  • Connect to a robot: when a robot is online, tap its list item to connect directly and jump to the control page

3. RobotChooser: main code logic

RobotChooser extends AppCompatActivity and implements the corresponding interfaces:

public class RobotChooser extends AppCompatActivity implements AddEditRobotDialogFragment.DialogListener,
        ConfirmDeleteDialogFragment.DialogListener, ListView.OnItemClickListener {

    /** Key for whether this is the first time the app has been launched */
    public static final String FIRST_TIME_LAUNCH_KEY = "FIRST_TIME_LAUNCH";

    private RecyclerView mRecyclerView;
    private RecyclerView.Adapter mAdapter;

    private ShowcaseView showcaseView;
    private Toolbar mToolbar;

    private ActionBarDrawerToggle mDrawerToggle;

    // Variables for keeping track of Fragments
    private Fragment fragment = null;
    private FragmentManager fragmentManager;
    private int fragmentsCreatedCounter = 0;

    // Log tag String
    private static final String TAG = "RobotChooser";

The main variables declared in RobotChooser are:

  • mRecyclerView: the main view being operated on
  • mAdapter: implements the per-item robot actions (delete, edit, connect); the corresponding file is Core/RobotInfoAdapter
  • showcaseView: a ShowcaseView used to show an operation guide when no robot has been added to the list yet

An Android Activity's initialization usually lives in onCreate(); RobotChooser's looks like this:

    
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        try {
            this.setContentView(R.layout.robot_chooser);
        }
        catch(Exception e){}
        mRecyclerView = (RecyclerView) findViewById(R.id.robot_recycler_view);

        RecyclerView.LayoutManager mLayoutManager = new LinearLayoutManager(this);
        mRecyclerView.setItemAnimator(new DefaultItemAnimator());
        mRecyclerView.setLayoutManager(mLayoutManager);

        mToolbar = (Toolbar) findViewById(R.id.robot_chooser_toolbar);
        setSupportActionBar(mToolbar);

        RobotStorage.load(this);

        if (getActionBar() != null) {
            getActionBar().setDisplayHomeAsUpEnabled(true);
            getActionBar().setHomeButtonEnabled(true);
        }

        // Adapter for creating the list of Robot options
        mAdapter = new RobotInfoAdapter(this, RobotStorage.getRobots());

        mRecyclerView.setAdapter(mAdapter);

        ScheduledExecutorService worker = Executors.newSingleThreadScheduledExecutor();

        // Check whether this is the first time the app has been launched on this device
        final boolean isFirstLaunch = PreferenceManager
                .getDefaultSharedPreferences(this)
                .getBoolean(FIRST_TIME_LAUNCH_KEY, true);

        // Delay the initial tutorial a little bit
        // This makes sure the view gets a good reference to the UI layout positions
        Runnable task = new Runnable() {
            public void run() {
            runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    try {
                        if (RobotStorage.getRobots().size() == 0 && isFirstLaunch) {
                            //Show initial tutorial message
                            showcaseView = new ShowcaseView.Builder(RobotChooser.this)
                                .setTarget(new ToolbarActionItemTarget(mToolbar, R.id.action_add_robot))
                                .setStyle(R.style.CustomShowcaseTheme2)
                                .hideOnTouchOutside()
                                .blockAllTouches()
                                //.singleShot(0) Can use this instead of manually saving in preferences
                                .setContentTitle(R.string.splash_add_robot_title)
                                .setContentText(R.string.splash_add_robot_text)
                                .build();

                            //Get ready to show tutorial message when user adds a robot
                            setupNextTutorialMessage();
                        }
                    } catch (Exception ignore) {}
                }
            });
            }
        };

        worker.schedule(task, 1, TimeUnit.SECONDS);
    }

This code first sets the Activity's layout resource, then initializes mRecyclerView and friends. Note RobotStorage.load(this), which loads the previously configured robots and returns the robot list shown on the main screen.
isFirstLaunch records whether this is the first time the user opens the app; a Runnable posted via runOnUiThread then builds the ShowcaseView that walks the user through the UI. The tour is shown on the first launch when the robot list is still empty.
The rest of RobotChooser handles adding, editing, and deleting robots, plus some message passing, and is not covered here.

4. RobotInfoAdapter: main code logic

All per-robot operations on the RobotChooser screen are implemented in RobotInfoAdapter, which lives in the Core folder. It nests a ViewHolder class that binds the resources and implements the actions:

 public ViewHolder(View v) {
            super(v);
            v.setClickable(true);
            v.setOnClickListener(this);
            mRobotNameTextView = (TextView) v.findViewById(R.id.robot_name_text_view);
            mMasterUriTextView = (TextView) v.findViewById(R.id.master_uri_text_view);

            mEditButton = (ImageButton) v.findViewById(R.id.robot_edit_button);
            mEditButton.setOnClickListener(this);

            mDeleteButton = (ImageButton) v.findViewById(R.id.robot_delete_button);
            mDeleteButton.setOnClickListener(this);

            mImageView = (ImageView) v.findViewById(R.id.robot_wifi_image);
            mImageView.setImageResource(R.mipmap.wifi_0);

            Timer t = new Timer();

            t.scheduleAtFixedRate(new TimerTask() {

                @Override
                public void run() {
                    try {
                        int position = getAdapterPosition();
                        final RobotInfo info = mDataset.get(position);
                        //mImageView.setLayoutParams(new ActionBar.LayoutParams(mEditButton.getHeight(), mEditButton.getHeight()));

                        if (isPortOpen(info.getUri().getHost(), info.getUri().getPort(), 10000)) {
                            activity.runOnUiThread(new Runnable() {

                                @Override
                                public void run() {
                                    mImageView.setImageResource(R.mipmap.wifi_4);
                                }
                            });
                        } else {
                            activity.runOnUiThread(new Runnable() {

                                @Override
                                public void run() {
                                    mImageView.setImageResource(R.mipmap.wifi_0);
                                }
                            });
                        }

                        Thread.sleep(10000);
                    } catch (Exception ignore) {

                    }
                }
            }, 1000, 15000);
        }

        /**
         * Handles clicks on the RobotInfoAdapter.ViewHolder.
         *
         * @param v The clicked View. Can be either the edit button, delete button, or the adapter itself,
         *          in which case a connection is initiated to the RobotInfo contained in this Adapter
         */
        @Override
        public void onClick(View v) {
            int position = getAdapterPosition();
            Bundle bundle;
            final RobotInfo info = mDataset.get(position);

            switch (v.getId()) {
                case R.id.robot_edit_button:
                    AddEditRobotDialogFragment editRobotDialogFragment = new AddEditRobotDialogFragment();
                    bundle = new Bundle();
                    info.save(bundle);
                    bundle.putInt(AddEditRobotDialogFragment.POSITION_KEY, position);
                    editRobotDialogFragment.setArguments(bundle);

                    editRobotDialogFragment.show(activity.getSupportFragmentManager(), "editrobotialog");
                    break;

                case R.id.robot_delete_button:
                    ConfirmDeleteDialogFragment confirmDeleteDialogFragment = new ConfirmDeleteDialogFragment();
                    bundle = new Bundle();

                    bundle.putInt(ConfirmDeleteDialogFragment.POSITION_KEY, position);
                    bundle.putString(ConfirmDeleteDialogFragment.NAME_KEY, info.getName());
                    confirmDeleteDialogFragment.setArguments(bundle);

                    confirmDeleteDialogFragment.show(activity.getSupportFragmentManager(), "deleterobotdialog");
                    break;

                default:

                    FragmentManager fragmentManager = activity.getFragmentManager();
                    ConnectionProgressDialogFragment f = new ConnectionProgressDialogFragment(info);
                    f.show(fragmentManager, "ConnectionProgressDialog");

                    break;
            }
        }
    }

Besides binding the resources at the top, this class has two main jobs:

  • Start a timer task that periodically checks whether the robot is online (a port probe with a 10-second timeout, rescheduled at a fixed rate) and updates the online-status icon accordingly
  • Handle onClick events, running the action for whichever button was tapped; when the user taps the robot item itself, ConnectionProgressDialogFragment's run() function (shown below) connects and jumps to the ControlApp Activity, switching to the robot-control screen

 

private void run()
        {
            thread = new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        if(!isPortOpen(INFO.getUri().getHost(), INFO.getUri().getPort(), 10000)){
                            throw new Exception(getActivity().getString(R.string.cannot_connect_ros));
                        }

                        final Intent intent = new Intent(activity, ControlApp.class);

                        // !!!---- EVIL USE OF STATIC VARIABLE ----!! //
                        // Should not be doing this but there is no other way that I can see -Michael
                        ControlApp.ROBOT_INFO = INFO;

                        dismiss();

                        activity.runOnUiThread(new Runnable() {
                            @Override
                            public void run() {
                                activity.startActivity(intent);
                            }
                        });
                    }
                    catch (final NetworkOnMainThreadException e){
                        dismiss();

                        activity.runOnUiThread(new Runnable() {
                            @Override
                            public void run() {
                                Toast.makeText(activity, "Invalid Master URI", Toast.LENGTH_LONG).show();
                            }
                        });
                    }
                    catch (InterruptedException e)
                    {
                        // Ignore
                        Log.d(TAG, "interrupted");
                    }
                    catch (final Exception e) {

                        if (ConnectionProgressDialogFragment.this.getFragmentManager() != null)
                            dismiss();

                        activity.runOnUiThread(new Runnable() {
                            @Override
                            public void run() {
                                Toast.makeText(activity, e.getMessage(), Toast.LENGTH_LONG).show();
                            }
                        });
                    }
                }
            });

            thread.start();
        }

 

Lesson 1: DiegoAPP – Overall Architecture

1. Development environment

RobotCA was developed with Eclipse; for DiegoAPP the whole project was migrated to Android Studio 3.4.

The Gradle and Gradle plugin versions used in Android Studio are shown below. These versions matter: other versions produce compile errors.

Diego APP supports Android 8.0, 8.1, and 9.0, which covers today's mainstream Android phones. Other versions can also be built with the latest Android SDK, but because APIs change across Android releases this can break normal operation, e.g. crashes.

2. Project structure

The Diego APP directory layout follows the standard Android Studio project structure.

com.diegorobot.app is the main program package:

  • Core: the main control logic, including robot control, mapping, navigation, etc.
  • Dialogs: the dialog logic, corresponding to the dialog resources in res
  • Fragments: the controller code for the views in res
  • Layers: logic for layers that need rendering; currently only the laser-scan rendering code
  • Views: controller code for the view files in res
  • ControlApp: the main control Activity, run once the app connects and enters control mode
  • RobotChooser: the Activity run at app startup, used to configure connection info and show the robot list
  • ToolbarActionItemTarget: the class behind the toolbar in the robot list

3. build.gradle

The app's build configuration defines the supported versions, dependencies, and so on. The compile-time dependencies in particular must be correct, or the build fails. build.gradle is as follows:

apply plugin: 'com.android.application'

android {
    compileSdkVersion 28
    buildToolsVersion '28.0.3'
    defaultConfig {
        applicationId "com.diegorobot.app"
        minSdkVersion 26
        targetSdkVersion 28
        multiDexEnabled true
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
    }
     buildTypes {
         release {
             minifyEnabled false
             proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
         }
     }
    lintOptions {
        checkReleaseBuilds false
        // Or, if you prefer, you can continue to check for errors in release builds,
        // but continue the build even when errors are found:
        abortOnError false
    }
    dexOptions {
        javaMaxHeapSize "4g"
    }
}


configurations {
    all*.exclude group: 'org.ros.rosjava_bootstrap', module: 'gradle_plugins'
}
repositories {
    mavenLocal()
    mavenCentral()
    maven {
        url "https://github.com/rosjava/rosjava_mvn_repo/raw/master"
    }
    //maven { url "https://jitpack.io" }

}

dependencies {
    implementation fileTree(include: ['*.jar'], dir: 'libs')
    implementation 'com.android.support:appcompat-v7:28.0.0'
    implementation 'com.android.support:support-v4:28.0.0'
    implementation 'com.android.support:design:28.0.0'
    implementation 'com.android.support:cardview-v7:28.0.0'
    implementation 'com.android.support.constraint:constraint-layout:1.1.3'
    implementation 'com.android.support:recyclerview-v7:28.0.0'
    implementation 'org.ros.android_core:android_core_components:0.4.0'
    implementation 'com.github.rosjava.android_extras:gingerbread:[0.3,0.4)'
    implementation 'org.ros.rosjava_messages:tf2_msgs:[0.5,0.6)'
    implementation 'com.google.code.gson:gson:2.8.5'
    implementation 'org.apache.commons:commons-lang3:3.9'
    implementation 'com.squareup.okhttp:okhttp:2.7.5'
    implementation 'com.github.amlcurran.showcaseview:library:5.4.3'
    //testCompile 'junit:junit:4.12'
}



dependencies {
    // Required -- JUnit 4 framework
    testImplementation 'junit:junit:4.12'
    // Optional -- Mockito framework
    testImplementation 'org.mockito:mockito-core:2.27.0'

    androidTestImplementation 'com.android.support:support-annotations:28.0.0'
    androidTestImplementation 'com.android.support.test:runner:1.0.2'
    androidTestImplementation 'com.android.support.test:rules:1.0.2'
    // Optional -- Hamcrest library
    androidTestImplementation 'org.hamcrest:hamcrest-library:2.1'
    // Optional -- UI testing with Espresso
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.2'
    // Optional -- UI testing with UI Automator
//    androidTestCompile 'com.android.support.test.uiautomator:uiautomator-v15'
}


configurations.all {
    resolutionStrategy.force 'com.google.code.findbugs:jsr305:1.3.9'
    resolutionStrategy.force 'junit:junit:4.12'
}

 

Lesson 26: Diego 1# – Object Recognition and Localization

Google recently published the TensorFlow object-detection API. This article uses Diego 1#'s depth camera together with that API to recognize objects and, at the same time, compute each object's distance from the robot's camera. The principle:

  • Object Detection subscribes to the Image messages published by OpenNI and recognizes the objects in each video frame
  • Object Depth subscribes to the Depth Image messages published by OpenNI; using the object list from Object Detection, it maps each object onto the corresponding region of the depth image and computes the object's depth
  • Publish Image takes the processed frame, adds the recognition labels, and republishes it as a CompressedImage message that other applications can subscribe to

1. Create the diego_tensorflow package

Since we use TensorFlow, install it first; see the official installation guide: https://www.tensorflow.org/install/install_linux

For the object_detection dependencies, see the official installation notes: https://github.com/tensorflow/models/blob/master/object_detection/g3doc/installation.md

Create the diego_tensorflow package with:

catkin_create_pkg diego_tensorflow std_msgs rospy roscpp cv_bridge

Create two subdirectories under diego_tensorflow:

  • scripts: the code
  • launch: the launch files

The diego_tensorflow directory then looks like this:

Download the object_detection package: https://github.com/tensorflow/models

After downloading, upload the object_detection package to diego_tensorflow/scripts; if you train your own models, also upload the slim package to diego_tensorflow/scripts.

All the recognition code lives in ObjectDetectionDemo.py. Most of it follows the official TensorFlow example; here it is wrapped as a ROS node, with object-depth computation added.

2. ROS node source code

#!/usr/bin/env python

import rospy
from sensor_msgs.msg import Image as ROSImage
from sensor_msgs.msg import CompressedImage as ROSImage_C
from cv_bridge import CvBridge
import cv2
import matplotlib
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import uuid
from collections import defaultdict
from io import StringIO
from PIL import Image
from math import isnan

# This is needed since the notebook is stored in the object_detection folder.
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

class ObjectDetectionDemo():
    def __init__(self):
	rospy.init_node('object_detection_demo')
	
	# Set the shutdown function (stop the robot)
        rospy.on_shutdown(self.shutdown)
        
        self.depth_image =None
        
        self.depth_array = None
        
        model_path = rospy.get_param("~model_path", "")
        image_topic = rospy.get_param("~image_topic", "")
        depth_image_topic = rospy.get_param("~depth_image_topic", "")
        if_down=False
        self.vfc=0
        
        self._cv_bridge = CvBridge()
        
        # What model to download.
	#MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
	#MODEL_NAME='faster_rcnn_resnet101_coco_11_06_2017'
	MODEL_NAME ='ssd_inception_v2_coco_11_06_2017'
	#MODEL_NAME ='diego_object_detection_v1_07_2017'
	#MODEL_NAME ='faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017'
	MODEL_FILE = MODEL_NAME + '.tar.gz'
	DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

	# Path to frozen detection graph. This is the actual model that is used for the object detection.
	PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'

	# List of the strings that is used to add correct label for each box.
	PATH_TO_LABELS = os.path.join(model_path+'/data', 'mscoco_label_map.pbtxt')
	#PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')

	NUM_CLASSES = 90
	
	if if_down:
		opener = urllib.request.URLopener()
		opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
		tar_file = tarfile.open(MODEL_FILE)
		for file in tar_file.getmembers():
			file_name = os.path.basename(file.name)
			if 'frozen_inference_graph.pb' in file_name:
        			tar_file.extract(file, os.getcwd())


	rospy.loginfo("begin initilize the tf...")
	self.detection_graph = tf.Graph()
	with self.detection_graph.as_default():
		od_graph_def = tf.GraphDef()
		with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
			serialized_graph = fid.read()
			od_graph_def.ParseFromString(serialized_graph)
			tf.import_graph_def(od_graph_def, name='')

	label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
	categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
	self.category_index = label_map_util.create_category_index(categories)
	
	# Subscribe to the registered depth image
        rospy.Subscriber(depth_image_topic, ROSImage, self.convert_depth_image)
        
        # Wait for the depth image to become available
        #rospy.wait_for_message('depth_image', ROSImage)
	
	self._sub = rospy.Subscriber(image_topic, ROSImage, self.callback, queue_size=1)	
	self._pub = rospy.Publisher('object_detection', ROSImage_C, queue_size=1)
	
	rospy.loginfo("initialization has finished...")
	
	
    def convert_depth_image(self, ros_image):
        # Use cv_bridge() to convert the ROS image to OpenCV format
        # The depth image is a single-channel float32 image
        self.depth_image = self._cv_bridge.imgmsg_to_cv2(ros_image, "32FC1")

        # Convert the depth image to a Numpy array
        self.depth_array = np.array(self.depth_image, dtype=np.float32)
        #print(self.depth_array)
        
    def callback(self,image_msg):
	if self.vfc<12:
		self.vfc=self.vfc+1
	else:
		self.callbackfun(image_msg)
		self.vfc=0	
		    	
    def box_depth(self,boxes,im_width, im_height):
	# Now compute the depth component
        depth=[]
	for row in boxes[0]:
		n_z = sum_z = mean_z = 0
		# Get the min/max x and y values from the ROI
		if row[0]<row[1]:
			min_x = row[0]*im_width
			max_x = row[1]*im_width
		else:
			min_x = row[1]*im_width
			max_x = row[0]*im_width
			
		if row[2]<row[3]:
			min_y = row[2]*im_height
			max_y = row[3]*im_height
		else:
			min_y = row[3]*im_height
			max_y = row[2]*im_height
		# Get the average depth value over the ROI
		for x in range(int(min_x), int(max_x)):
            		for y in range(int(min_y), int(max_y)):
                		try:
					z = self.depth_array[y, x]
				except:
					continue
                
				# Depth values can be NaN which should be ignored
				if isnan(z):
					z=6
					continue
				else:
					sum_z = sum_z + z
					n_z += 1 
			mean_z = sum_z / (n_z+0.01)
		depth.append(mean_z)
	return depth
    def callbackfun(self, image_msg):
	with self.detection_graph.as_default():
		with tf.Session(graph=self.detection_graph) as sess:
			 cv_image = self._cv_bridge.imgmsg_to_cv2(image_msg, "bgr8")
			 #cv_image = (self._cv_bridge.imgmsg_to_cv2(image_msg, "bgr8"))[300:450, 150:380]
			 pil_img = Image.fromarray(cv_image)			 
			 (im_width, im_height) = pil_img.size			 
			 # the array based representation of the image will be used later in order to prepare the
			 # result image with boxes and labels on it.
			 image_np =np.array(pil_img.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)
			 # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
			 image_np_expanded = np.expand_dims(image_np, axis=0)
			 image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
			 # Each box represents a part of the image where a particular object was detected.
			 boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')
			 # Each score represent how level of confidence for each of the objects.
			 # Score is shown on the result image, together with the class label.
			 scores = self.detection_graph.get_tensor_by_name('detection_scores:0')
			 classes = self.detection_graph.get_tensor_by_name('detection_classes:0')
			 num_detections = self.detection_graph.get_tensor_by_name('num_detections:0')
			
			 # Actual detection.
			 (boxes, scores, classes, num_detections) = sess.run(
			 	[boxes, scores, classes, num_detections],
			 	feed_dict={image_tensor: image_np_expanded})
			 box_depths=self.box_depth(boxes,im_width,im_height)
			 print(box_depths)
			 # Visualization of the results of a detection.
			 vis_util.visualize_boxes_and_labels_on_image_array(
			 	image_np,
			 	np.squeeze(boxes),
			 	np.squeeze(classes).astype(np.int32),
			 	np.squeeze(scores),
			 	self.category_index,
			 	box_depths,
			 	use_normalized_coordinates=True,
			 	line_thickness=8)
			 
			 ros_compressed_image=self._cv_bridge.cv2_to_compressed_imgmsg(image_np)
			 self._pub.publish(ros_compressed_image)
			
    
    def shutdown(self):
        rospy.loginfo("Stopping the tensorflow object detection...")
        rospy.sleep(1) 
        
if __name__ == '__main__':
    try:
        ObjectDetectionDemo()
        rospy.spin()
    except rospy.ROSInterruptException:
        rospy.loginfo("RosTensorFlow_ObjectDetectionDemo has started.")

Let's walk through the main code logic.

    def __init__(self):
	rospy.init_node('object_detection_demo')
	
	# Set the shutdown function (stop the robot)
        rospy.on_shutdown(self.shutdown)
        
        self.depth_image =None
        
        self.depth_array = None
        
        model_path = rospy.get_param("~model_path", "")
        image_topic = rospy.get_param("~image_topic", "")
        depth_image_topic = rospy.get_param("~depth_image_topic", "")
        if_down=False
        self.vfc=0
        
        self._cv_bridge = CvBridge()

This is standard ROS node initialization: variables are initialized and the parameters from the launch file are read:

  • model_path: the path of the model used by object_detection
  • image_topic: the image topic to subscribe to
  • depth_image_topic: the depth-image topic to subscribe to
        # What model to download.
	#MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
	#MODEL_NAME='faster_rcnn_resnet101_coco_11_06_2017'
	MODEL_NAME ='ssd_inception_v2_coco_11_06_2017'
	#MODEL_NAME ='diego_object_detection_v1_07_2017'
	#MODEL_NAME ='faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017'
	MODEL_FILE = MODEL_NAME + '.tar.gz'
	DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

	# Path to frozen detection graph. This is the actual model that is used for the object detection.
	PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'

	# List of the strings that is used to add correct label for each box.
	PATH_TO_LABELS = os.path.join(model_path+'/data', 'mscoco_label_map.pbtxt')
	#PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')

	NUM_CLASSES = 90
	
	if if_down:
		opener = urllib.request.URLopener()
		opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
		tar_file = tarfile.open(MODEL_FILE)
		for file in tar_file.getmembers():
			file_name = os.path.basename(file.name)
			if 'frozen_inference_graph.pb' in file_name:
        			tar_file.extract(file, os.getcwd())

This code selects the object_detection model and downloads and unpacks the corresponding files. An if_down switch is provided: enable it on the first run to download the model, then disable it, since the download takes a long time and only needs to happen once.

self.detection_graph = tf.Graph()
	with self.detection_graph.as_default():
		od_graph_def = tf.GraphDef()
		with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
			serialized_graph = fid.read()
			od_graph_def.ParseFromString(serialized_graph)
			tf.import_graph_def(od_graph_def, name='')

	label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
	categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
	self.category_index = label_map_util.create_category_index(categories)

This is the TensorFlow initialization code.

        # Subscribe to the registered depth image
        rospy.Subscriber(depth_image_topic, ROSImage, self.convert_depth_image)
        
        # Wait for the depth image to become available
        #rospy.wait_for_message('depth_image', ROSImage)
	
	self._sub = rospy.Subscriber(image_topic, ROSImage, self.callback, queue_size=1)	
	self._pub = rospy.Publisher('object_detection', ROSImage_C, queue_size=1)

Here the node subscribes to the depth_image and image topics and advertises a CompressedImage topic named object_detection.
The depth_image callback is convert_depth_image;
the image callback is callback.

    def convert_depth_image(self, ros_image):
        # Use cv_bridge() to convert the ROS image to OpenCV format
        # The depth image is a single-channel float32 image
        self.depth_image = self._cv_bridge.imgmsg_to_cv2(ros_image, "32FC1")

        # Convert the depth image to a Numpy array
        self.depth_array = np.array(self.depth_image, dtype=np.float32)
        #print(self.depth_array)

This depth-image callback first converts the ROS depth image to OpenCV format, then converts it to a NumPy array stored in the depth_array member.

    def callback(self,image_msg):
	if self.vfc<12:
		self.vfc=self.vfc+1
	else:
		self.callbackfun(image_msg)
		self.vfc=0

This image callback throttles the frame-processing rate (handling roughly one frame in twelve) to cut the computational load; tune it as needed. The actual processing happens in callbackfun.

    def box_depth(self,boxes,im_width, im_height):
	# Now compute the depth component
        depth=[]
	for row in boxes[0]:
		n_z = sum_z = mean_z = 0
		# Get the min/max x and y values from the ROI
		if row[0]<row[1]:
			min_x = row[0]*im_width
			max_x = row[1]*im_width
		else:
			min_x = row[1]*im_width
			max_x = row[0]*im_width
			
		if row[2]<row[3]:
			min_y = row[2]*im_height
			max_y = row[3]*im_height
		else:
			min_y = row[3]*im_height
			max_y = row[2]*im_height
		# Get the average depth value over the ROI
		for x in range(int(min_x), int(max_x)):
            		for y in range(int(min_y), int(max_y)):
                		try:
					z = self.depth_array[y, x]
				except:
					continue
                
				# Depth values can be NaN which should be ignored
				if isnan(z):
					z=6
					continue
				else:
					sum_z = sum_z + z
					n_z += 1 
			mean_z = sum_z / (n_z+0.01)
		depth.append(mean_z)
	return depth

This code computes the depth data. The boxes parameter holds the rectangles (ROIs) of the objects recognized by object_detection; for each box we take the matching region of the depth image and use the region's mean depth as that object's depth, returning a 1-D array aligned with boxes.
Because the depth image and the RGB image are processed asynchronously, the two frames may not correspond. This simple implementation ignores the problem; it could be solved by buffering depth images and matching the closest one by timestamp.
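
A hedged sketch of that timestamp-matching approach (the class and its methods are our own illustration, not part of the original node):

from collections import deque

class DepthBuffer(object):
    def __init__(self, maxlen=10):
        # Each entry is (stamp in seconds, depth array)
        self.frames = deque(maxlen=maxlen)

    def add(self, ros_image, depth_array):
        self.frames.append((ros_image.header.stamp.to_sec(), depth_array))

    def closest(self, image_msg):
        # Return the buffered depth frame closest in time to the RGB frame
        if not self.frames:
            return None
        stamp = image_msg.header.stamp.to_sec()
        return min(self.frames, key=lambda f: abs(f[0] - stamp))[1]

convert_depth_image would then call add() instead of overwriting depth_array, and callbackfun would fetch closest(image_msg) before computing the box depths.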

def callbackfun(self, image_msg):
	with self.detection_graph.as_default():
		with tf.Session(graph=self.detection_graph) as sess:
			 cv_image = self._cv_bridge.imgmsg_to_cv2(image_msg, "bgr8")
			 #cv_image = (self._cv_bridge.imgmsg_to_cv2(image_msg, "bgr8"))[300:450, 150:380]
			 pil_img = Image.fromarray(cv_image)			 
			 (im_width, im_height) = pil_img.size			 
			 # the array based representation of the image will be used later in order to prepare the
			 # result image with boxes and labels on it.
			 image_np =np.array(pil_img.getdata()).reshape((im_height, im_width, 3)).astype(np.uint8)
			 # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
			 image_np_expanded = np.expand_dims(image_np, axis=0)
			 image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
			 # Each box represents a part of the image where a particular object was detected.
			 boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')
			 # Each score represent how level of confidence for each of the objects.
			 # Score is shown on the result image, together with the class label.
			 scores = self.detection_graph.get_tensor_by_name('detection_scores:0')
			 classes = self.detection_graph.get_tensor_by_name('detection_classes:0')
			 num_detections = self.detection_graph.get_tensor_by_name('num_detections:0')
			
			 # Actual detection.
			 (boxes, scores, classes, num_detections) = sess.run(
			 	[boxes, scores, classes, num_detections],
			 	feed_dict={image_tensor: image_np_expanded})
			 box_depths=self.box_depth(boxes,im_width,im_height)
			 # Visualization of the results of a detection.
			 vis_util.visualize_boxes_and_labels_on_image_array(
			 	image_np,
			 	np.squeeze(boxes),
			 	np.squeeze(classes).astype(np.int32),
			 	np.squeeze(scores),
			 	self.category_index,
                                box_depths,
			 	use_normalized_coordinates=True,
			 	line_thickness=8)
			 
			 ros_compressed_image=self._cv_bridge.cv2_to_compressed_imgmsg(image_np)
			 self._pub.publish(ros_compressed_image)

This image callback converts the image message to OpenCV format and then to a NumPy array, runs object_detection on the frame, calls vis_util.visualize_boxes_and_labels_on_image_array to label the detected objects on the image, and finally publishes the processed image as a compressed_image message.

Now a small modification to visualization_utils.py under object_detection/utils lets us display the depth information.

def visualize_boxes_and_labels_on_image_array(image,
                                              boxes,
                                              classes,
                                              scores,
                                              category_index,
                                              box_depths=None,
                                              instance_masks=None,
                                              keypoints=None,
                                              use_normalized_coordinates=False,
                                              max_boxes_to_draw=20,
                                              min_score_thresh=.5,
                                              agnostic_mode=False,
                                              line_thickness=4):

Add box_depths=None (default None) to the visualize_boxes_and_labels_on_image_array signature.

  for i in range(min(max_boxes_to_draw, boxes.shape[0])):
    if scores is None or scores[i] > min_score_thresh:
      box = tuple(boxes[i].tolist())
      if instance_masks is not None:
        box_to_instance_masks_map[box] = instance_masks[i]
      if keypoints is not None:
        box_to_keypoints_map[box].extend(keypoints[i])
      if scores is None:
        box_to_color_map[box] = 'black'
      else:
        if not agnostic_mode:
          if classes[i] in category_index.keys():
            class_name = category_index[classes[i]]['name']
          else:
            class_name = 'N/A'
          display_str = '{}: {}%'.format(
              class_name,
              int(100*scores[i]))
        else:
          display_str = 'score: {}%'.format(int(100 * scores[i]))
          
        #modify by diego robot
        if box_depths!=None:
        	display_str=display_str+"\ndepth: "+str(box_depths[i])
        	global depth_info
        	depth_info=True
        else:
        	global depth_info
        	depth_info=False
        #######################
        box_to_display_str_map[box].append(display_str)
        if agnostic_mode:
          box_to_color_map[box] = 'DarkOrange'
        else:
          box_to_color_map[box] = STANDARD_COLORS[
              classes[i] % len(STANDARD_COLORS)]

In the first for loop, insert the diego robot modification above right before the line box_to_display_str_map[box].append(display_str).

depth_info=False

Define this global variable at the beginning of the file; it indicates whether depth information has been passed in.

 # Reverse list and print from bottom to top.
  for display_str in display_str_list[::-1]: 
    text_width, text_height = font.getsize(display_str)    
    #modify by william
    global depth_info
    if depth_info:
      text_height = text_height*2
    ###################

    margin = np.ceil(0.05 * text_height)
    draw.rectangle(
        [(left, text_bottom - text_height - 2 * margin), (left + text_width,
                                                          text_bottom)],
        fill=color)
    draw.text(
        (left + margin, text_bottom - text_height - margin),
        display_str,
        fill='black',
        font=font)
    text_bottom -= text_height - 2 * margin

In the draw_bounding_box_on_image function, insert the modification above right before the line margin = np.ceil(0.05 * text_height).

3. The launch file

<launch>
   <node pkg="diego_tensorflow" name="ObjectDetectionDemo" type="ObjectDetectionDemo.py" output="screen">

       <param name="image_topic" value="/camera/rgb/image_raw" />

       <param name="depth_image_topic" value="/camera/depth_registered/image" />

       <param name="model_path" value="$(find diego_tensorflow)/scripts/object_detection" />

   </node>

</launch>

The launch file defines the parameters image_topic, depth_image_topic, and model_path; set them to match your own setup.

4. Start the nodes

Start openni:

roslaunch diego_vision openni_node.launch 

Start object_detection:

roslaunch diego_tensorflow object_detection_demo.launch

5. Subscribe to object_detection from the phone app

Simply set the app's image topic to object_detection to see the recognition results on the phone.

Object recognition stands on the trained model. This article uses the official pre-trained model, whose recognition rate is quite good, but if the robot's working environment differs greatly from the training images, the recognition rate drops sharply. In practice we can therefore train a model for our own objects and combine it with the robot's other sensors and devices to meet real application requirements: for example monitoring foot traffic or vehicles, tracking a specific object, or, together with a robot arm and the depth camera, locating and grasping an object.

Lesson 25: Diego 1# 4WD – 3: Host Communication

ROS Arduino Bridge works by having the host send control commands to the Arduino over the serial port, so to support four-wheel drive the communication layer must be modified as well.

1. Arduino firmware changes

1.1 commands.h

Add the commands and macros needed for 4WD to this file:

#ifndef COMMANDS_H
#define COMMANDS_H

#define ANALOG_READ    'a'
#define GET_BAUDRATE   'b'
#define PIN_MODE       'c'
#define DIGITAL_READ   'd'
#define READ_ENCODERS  'e'
#define MOTOR_SPEEDS   'm'
#define PING           'p'
#define RESET_ENCODERS 'r'
#define SERVO_WRITE    's'
#define SERVO_READ     't'
#define UPDATE_PID     'u'
#define DIGITAL_WRITE  'w'
#define ANALOG_WRITE   'x'
#define LEFT            0
#define RIGHT           1
#define LEFT_H          2 // new for 4WD: second left motor
#define RIGHT_H         3 // new for 4WD: second right motor
#define READ_PIDOUT    'f'
#define READ_PIDIN     'i'
#define READ_MPU6050   'g'

#endif

1.2 RosArduinoBridge-diego.ino

This is the main program. The file is long, so only the new code is shown here; see GitHub for the complete source.

In runCommand(), modify the code that reads pidin in 4WD mode:

    case READ_PIDIN:
      Serial.print(readPidIn(LEFT));
      Serial.print(" ");
#ifdef L298P      
      Serial.println(readPidIn(RIGHT));
#endif
#ifdef L298P_4WD 
      Serial.print(readPidIn(RIGHT));
      Serial.print(" ");
      Serial.print(readPidIn(LEFT_H));
      Serial.print(" ");
      Serial.println(readPidIn(RIGHT_H));
#endif      
      break;

In runCommand(), modify the code that reads pidout in 4WD mode:

    case READ_PIDOUT:
      Serial.print(readPidOut(LEFT));
      Serial.print(" ");
#ifdef L298P      
      Serial.println(readPidOut(RIGHT));
#endif
#ifdef L298P_4WD 
      Serial.print(readPidOut(RIGHT));
      Serial.print(" ");
      Serial.print(readPidOut(LEFT_H));
      Serial.print(" ");
      Serial.println(readPidOut(RIGHT_H));
#endif       
      break;

In runCommand(), modify the code that reads the encoders in 4WD mode:

    case READ_ENCODERS:
      Serial.print(readEncoder(LEFT));
      Serial.print(" ");
#ifdef L298P 
      Serial.println(readEncoder(RIGHT));
#endif
#ifdef L298P_4WD
      Serial.print(readEncoder(RIGHT));
      Serial.print(" ");
      Serial.print(readEncoder(LEFT_H));
      Serial.print(" ");
      Serial.println(readEncoder(RIGHT_H));
#endif      
      break;
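
Seen from the host, these are plain text exchanges over the serial line. A hedged sketch (port name, baud rate, and the carriage-return terminator are assumptions following common ROS Arduino Bridge setups):

import serial

port = serial.Serial('/dev/ttyACM0', 57600, timeout=0.5)
port.write('e\r')        # READ_ENCODERS
print port.readline()    # 4WD reply: "LEFT RIGHT LEFT_H RIGHT_H" counts
port.write('m 20 20\r')  # MOTOR_SPEEDS: same targets for the front and rear pairs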

In runCommand(), modify the code that sets the motor speeds in 4WD mode:

    case MOTOR_SPEEDS:
      /* Reset the auto stop timer */
      lastMotorCommand = millis();
      if (arg1 == 0 && arg2 == 0) {
#ifdef L298P        
        setMotorSpeeds(0, 0);
#endif

#ifdef L298P_4WD
        setMotorSpeeds(0, 0, 0, 0);
#endif        
        moving = 0;
      }
      else moving = 1;
      leftPID.TargetTicksPerFrame = arg1;
      rightPID.TargetTicksPerFrame = arg2;
#ifdef L298P_4WD      
      leftPID_h.TargetTicksPerFrame = arg1;
      rightPID_h.TargetTicksPerFrame = arg2;
#endif       
      Serial.println("OK");
      break;

In runCommand(), modify the code that updates the PID parameters in 4WD mode:

    case UPDATE_PID:
      while ((str = strtok_r(p, ":", &p)) != '\0') {
        pid_args[i] = atoi(str);
        i++;
      }

      left_Kp = pid_args[0];
      left_Kd = pid_args[1];
      left_Ki = pid_args[2];
      left_Ko = pid_args[3];

      right_Kp = pid_args[4];
      right_Kd = pid_args[5];
      right_Ki = pid_args[6];
      right_Ko = pid_args[7];

#ifdef L298P_4WD

      left_h_Kp = pid_args[0];
      left_h_Kd = pid_args[1];
      left_h_Ki = pid_args[2];
      left_h_Ko = pid_args[3];

      right_h_Kp = pid_args[4];
      right_h_Kd = pid_args[5];
      right_h_Ki = pid_args[6];
      right_h_Ko = pid_args[7];
#endif
      
      Serial.println("OK");
      break;

In setup(), add the register configuration for the extra pins used in 4WD mode:

void setup() {
  Serial.begin(BAUDRATE);
#ifdef USE_BASE
#ifdef ARDUINO_ENC_COUNTER
  //set as inputs
  DDRD &= ~(1 << LEFT_ENC_PIN_A);
  DDRD &= ~(1 << LEFT_ENC_PIN_B);
  DDRC &= ~(1 << RIGHT_ENC_PIN_A);
  DDRC &= ~(1 << RIGHT_ENC_PIN_B);

#ifdef L298P_4WD
  DDRD &= ~(1 << LEFT_H_ENC_PIN_A);
  DDRD &= ~(1 << LEFT_H_ENC_PIN_B);
  DDRC &= ~(1 << RIGHT_H_ENC_PIN_A);
  DDRC &= ~(1 << RIGHT_H_ENC_PIN_B);
#endif  

  //enable pull up resistors
  PORTD |= (1 << LEFT_ENC_PIN_A);
  PORTD |= (1 << LEFT_ENC_PIN_B);
  PORTC |= (1 << RIGHT_ENC_PIN_A);
  PORTC |= (1 << RIGHT_ENC_PIN_B);

#ifdef L298P_4WD
  // enable pull up resistors for the second pair of encoder pins as well
  PORTD |= (1 << LEFT_H_ENC_PIN_A);
  PORTD |= (1 << LEFT_H_ENC_PIN_B);
  PORTC |= (1 << RIGHT_H_ENC_PIN_A);
  PORTC |= (1 << RIGHT_H_ENC_PIN_B);
#endif
  // tell pin change mask to listen to left encoder pins
  PCMSK2 |= (1 << LEFT_ENC_PIN_A) | (1 << LEFT_ENC_PIN_B);
  // tell pin change mask to listen to right encoder pins
  PCMSK1 |= (1 << RIGHT_ENC_PIN_A) | (1 << RIGHT_ENC_PIN_B);

#ifdef L298P_4WD
  // tell pin change mask to listen to left encoder pins
  PCMSK2 |= (1 << LEFT_ENC_PIN_A) | (1 << LEFT_ENC_PIN_B) | (1 << LEFT_H_ENC_PIN_A) | (1 << LEFT_H_ENC_PIN_B);
  // tell pin change mask to listen to right encoder pins
  PCMSK1 |= (1 << RIGHT_ENC_PIN_A) | (1 << RIGHT_ENC_PIN_B) | (1 << RIGHT_H_ENC_PIN_A) | (1 << RIGHT_H_ENC_PIN_B);
#endif
  // enable PCINT1 and PCINT2 interrupt in the general interrupt mask
  PCICR |= (1 << PCIE1) | (1 << PCIE2);
#endif
  initMotorController();
  resetPID();
#endif

  /* Attach servos if used */
#ifdef USE_SERVOS
  int i;
  for (i = 0; i < N_SERVOS; i++) {
    servosPos[i]=90;
  }
  servodriver.begin();
  servodriver.setPWMFreq(50);
#endif
}

In loop(), modify the auto-stop logic for 4WD mode:

  // Check to see if we have exceeded the auto-stop interval
  if ((millis() - lastMotorCommand) > AUTO_STOP_INTERVAL) {
#ifdef L298P    
    setMotorSpeeds(0, 0);
#endif
#ifdef L298P_4WD
    setMotorSpeeds(0, 0, 0, 0);
#endif
    moving = 0;
  }

#endif

2. Host-side code changes

The host-side changes are to arduino_driver.py and base_controller.py.

2.1 arduino_driver.py

Modify get_pidin() so it supports reading pidin from four motors:

    def get_pidin(self):
        values = self.execute_array('i')
        print("pidin_raw_data: "+str(values))
        if len(values) not in [2,4]:
            print "pidin was not 2 or 4 for 4wd"
            raise SerialException
            return None
        else:                                                                  
            return values

Modify get_pidout() so that it supports reading pidout values for all 4 motors:

    def get_pidout(self):
        values = self.execute_array('f')
        print("pidout_raw_data: " + str(values))
        if len(values) not in [2, 4]:
            print("pidout was not 2 or 4 values for 4wd")
            raise SerialException
        else:
            return values

Modify get_encoder_counts() so that it supports reading the encoder counts of all 4 motors:

    def get_encoder_counts(self):
        values = self.execute_array('e')
        if len(values) not in [2, 4]:
            print("Encoder count was not 2 or 4 for 4wd")
            raise SerialException
        else:
            return values

After these changes, update the Arduino firmware and the 4WD base can be controlled. Note that the 4WD switch must be turned on on both the host side and the Arduino side at the same time; if you run the 2WD code instead, remember to turn the 4WD switch off as well.
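
As a quick sanity check after flashing, the modified driver can be exercised directly from a Python shell. This is a sketch: the port name and baud rate are assumptions, and the Arduino class is the one defined in arduino_driver.py.

from ros_arduino_python.arduino_driver import Arduino

arduino = Arduino(port='/dev/ttyACM0', baudrate=57600, timeout=0.5)
arduino.connect()
print(arduino.get_encoder_counts())  # expect 4 values in 4WD mode, 2 in 2WD mode
print(arduino.get_pidin())
print(arduino.get_pidout())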

3. Starting 4WD base control

Now run the following command to start the 4WD base control:

roslaunch diego_nav diego_arduino_run.launch 

We can now connect to Diego 1# with the companion ROS APP and control it.

First create a new ROS robot connection, setting the ROS Master IP and the cmd_vel topic.

Once connected to the robot, you can drive it with either the joystick or the gravity-sensor (tilt) control.

Lesson 24: Diego 1# 4WD Chassis - 2: Motor Control

For motor control we mainly need to modify two parts:

  • the driver, so that it can control all 4 motors at the same time
  • the PID speed control: experiments show that even motors of the same model turn at different speeds for the same PWM value, so each of the 4 motors needs its own PID speed loop

1. Motor driver

1.1. Changes to DualL298PMotorShield4WD.h
Add pin definitions for the PWM and direction control of all 4 motors, plus functions to set each motor's speed:

#ifndef DualL298PMotorShield4WD_h
#define DualL298PMotorShield4WD_h

#include <Arduino.h>

class DualL298PMotorShield4WD
{
  public:  
    // CONSTRUCTORS
    DualL298PMotorShield4WD(); // Default pin selection.
    
    // PUBLIC METHODS
    void init(); // Initialize TIMER 1, set the PWM to 20kHZ. 
    void setM1Speed(int speed); // Set speed for M1.
    void setM2Speed(int speed); // Set speed for M2.
    void setM3Speed(int speed); // Set speed for M3.
    void setM4Speed(int speed); // Set speed for M4.
    void setSpeeds(int m1Speed, int m2Speed, int m3Speed, int m4Speed); // Set speeds for all four motors.
    
  private:
  
    // left motor
    static const unsigned char _M1DIR = 12;
    static const unsigned char _M2DIR = 7;
    static const unsigned char _M1PWM = 10;
    static const unsigned char _M2PWM = 6;
    
    // right motor
    static const unsigned char _M4DIR = 8;
    static const unsigned char _M3DIR = 13;
    static const unsigned char _M4PWM = 9;
    static const unsigned char _M3PWM = 11;
    
};

#endif

1.2. Changes to DualL298PMotorShield4WD.cpp

This is mainly the implementation of the 4 speed-setting functions. The logic is the same for every motor, so only one motor's code is shown here; see the inline comments for the details.

// Set speed for motor 4; speed is a number between -255 and 255
void DualL298PMotorShield4WD::setM4Speed(int speed)
{
  unsigned char reverse = 0;
  
  if (speed < 0)    // is the speed negative?
  {
    speed = -speed; // if so, the motor turns in reverse
    reverse = 1;    // set the reverse flag
  }
  if (speed > 255)  // cap the maximum speed at 255
    speed = 255;
  if (reverse)      // reverse rotation
  {
    digitalWrite(_M4DIR, LOW);  // drive the direction pin low
    analogWrite(_M4PWM, speed);
  }
  else              // forward rotation
  {
    digitalWrite(_M4DIR, HIGH); // drive the direction pin high
    analogWrite(_M4PWM, speed);
  }
}

The setSpeeds function simply calls the speed-setting function of each of the 4 motors.

// Set speed for motor 1, 2, 3, 4
void DualL298PMotorShield4WD::setSpeeds(int m1Speed, int m2Speed, int m3Speed, int m4Speed)
{
  setM1Speed(m1Speed);
  setM2Speed(m2Speed);
  setM3Speed(m3Speed);
  setM4Speed(m4Speed);  
}

Once DualL298PMotorShield4WD.h and DualL298PMotorShield4WD.cpp are ready, create a directory named dual-L298P-motor-shield-master-4wd under the Arduino libraries directory and place the two files in it.

Opening the Arduino IDE's library list now shows the library we just added.

1.3 Changes to motor_driver.h

Add a 4WD version of the setMotorSpeeds function, taking the speeds of all 4 motors as parameters:

void initMotorController();
void setMotorSpeed(int i, int spd);
#ifdef L298P
void setMotorSpeeds(int leftSpeed, int rightSpeed);
#endif
#ifdef L298P_4WD
void setMotorSpeeds(int leftSpeed_1, int leftSpeed_2, int rightSpeed_1, int rightSpeed_2);
#endif

1.4 Changes to motor_driver.ino

#ifdef L298P_4WD
// A convenience function for setting both motor speeds
void setMotorSpeeds(int leftSpeed_1, int leftSpeed_2, int rightSpeed_1, int rightSpeed_2){
  setMotorSpeed(1, leftSpeed_1);
  setMotorSpeed(3, rightSpeed_1);
  setMotorSpeed(2, leftSpeed_2);
  setMotorSpeed(4, rightSpeed_2);
}
#endif
#else
#error A motor driver must be selected!
#endif

2. PID control

All of the PID control changes are made in the diff_controller.h file.

Add two PID state variables for the extra motors in 4WD mode:

#ifdef L298P_4WD
SetPointInfo leftPID_h, rightPID_h;
#endif

Add PID parameters for the two extra motors in 4WD mode:

#ifdef L298P_4WD

int left_h_Kp=Kp;
int left_h_Kd=Kd;
int left_h_Ki=Ki;
int left_h_Ko=Ko;

int right_h_Kp=Kp;
int right_h_Kd=Kd;
int right_h_Ki=Ki;
int right_h_Ko=Ko;

#endif

Add two PID control functions for 4WD mode:

#ifdef L298P_4WD
/* PID routine to compute the next motor commands */
void dorightPID_h(SetPointInfo * p) {
  long Perror;
  long output;
  int input;

  //Perror = p->TargetTicksPerFrame - (p->Encoder - p->PrevEnc);
  input =  p->Encoder-p->PrevEnc ;
  Perror = p->TargetTicksPerFrame - input;

  /*
  * Avoid derivative kick and allow tuning changes,
  * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-derivative-kick/
  * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-tuning-changes/
  */
  //output = (Kp * Perror + Kd * (Perror - p->PrevErr) + Ki * p->Ierror) / Ko;
  // p->PrevErr = Perror;
  output = (right_h_Kp * Perror - right_h_Kd * (input - p->PrevInput) + p->ITerm) / right_h_Ko;
  p->PrevEnc = p->Encoder;

  output += p->output;
  // Accumulate Integral error *or* Limit output.
  // Stop accumulating when output saturates
  if (output >= MAX_PWM)
    output = MAX_PWM;
  else if (output <= -MAX_PWM)
    output = -MAX_PWM;
  else
    /*
    * allow tuning changes,
    * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-tuning-changes/
    */
    p->ITerm += right_h_Ki * Perror;

  p->output = output;
  p->PrevInput = input;
//  Serial.println("right output:");
//  Serial.println(p->output);
}
#endif
...
#ifdef L298P_4WD

/* PID routine to compute the next motor commands */
void doleftPID_h(SetPointInfo * p) {
  long Perror;
  long output;
  int input;

  //Perror = p->TargetTicksPerFrame - (p->Encoder - p->PrevEnc);
  input = p->Encoder-p->PrevEnc ;
  Perror = p->TargetTicksPerFrame - input;

  /*
  * Avoid derivative kick and allow tuning changes,
  * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-derivative-kick/
  * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-tuning-changes/
  */
  //output = (Kp * Perror + Kd * (Perror - p->PrevErr) + Ki * p->Ierror) / Ko;
  // p->PrevErr = Perror;
  output = (left_h_Kp * Perror - left_h_Kd * (input - p->PrevInput) + p->ITerm) / left_h_Ko;
  p->PrevEnc = p->Encoder;

  output += p->output;
  // Accumulate Integral error *or* Limit output.
  // Stop accumulating when output saturates
  if (output >= MAX_PWM)
    output = MAX_PWM;
  else if (output <= -MAX_PWM)
    output = -MAX_PWM;
  else
    /*
    * allow tuning changes,
    * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-tuning-changes/
    */
    p->ITerm += left_h_Ki * Perror;

  p->output = output;
  p->PrevInput = input;
//  Serial.println("left output:");
//  Serial.println(p->output);
}
#endif
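
To make the integer arithmetic concrete, here is one step of the PID law above worked through in Python. This is a sketch: the measurements are made up, and Kp=20, Kd=12, Ki=0, Ko=50 are assumed to be the stock ROSArduinoBridge defaults from which the *_h gains are initialized.

Kp, Kd, Ki, Ko = 20, 12, 0, 50

target = 30        # TargetTicksPerFrame
input_ = 24        # Encoder - PrevEnc: ticks actually counted this frame
prev_input = 22    # PrevInput saved by the previous call
iterm = 0          # accumulated integral term (Ki is 0 here)
prev_output = 80   # PWM output of the previous frame

perror = target - input_  # 6
# integer division, matching the Arduino's long arithmetic
delta = (Kp * perror - Kd * (input_ - prev_input) + iterm) // Ko
output = prev_output + delta
print(output)  # (120 - 24 + 0) // 50 = 1, so the output becomes 81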

Modify the readPidIn function to support reading pidin in 4WD mode:

long readPidIn(int i) {
  long pidin=0;
  if (i == LEFT){
    pidin = leftPID.PrevInput;
  }else if (i == RIGHT){
    pidin = rightPID.PrevInput;
  }
#ifdef L298P_4WD
  else if (i== RIGHT_H){
    pidin = rightPID_h.PrevInput;
  }else{
    pidin = leftPID_h.PrevInput;
  }
#endif  
  return pidin;
}

Modify the readPidOut function to support reading pidout in 4WD mode:

long readPidOut(int i) {
  long pidout=0;
  if (i == LEFT){
    pidout = leftPID.output;
  }else if (i == RIGHT){
    pidout = rightPID.output;
  }
#ifdef L298P_4WD
  else if (i == RIGHT_H){
    pidout = rightPID_h.output;
  }else{
    pidout = leftPID_h.output;
  }
#endif   
  return pidout;
}

Modify the updatePID function to run the PID loop for every motor in 4WD mode:

void updatePID() {
  /* Read the encoders */
  leftPID.Encoder =readEncoder(LEFT);
  rightPID.Encoder =readEncoder(RIGHT);
#ifdef L298P_4WD 
  leftPID_h.Encoder =readEncoder(LEFT_H);
  rightPID_h.Encoder =readEncoder(RIGHT_H);
#endif
  
  /* If we're not moving there is nothing more to do */
  if (!moving){
    /*
    * Reset PIDs once, to prevent startup spikes,
    * see http://brettbeauregard.com/blog/2011/04/improving-the-beginner%E2%80%99s-pid-initialization/
    * PrevInput is considered a good proxy to detect
    * whether reset has already happened
    */
#ifdef L298P    
    if (leftPID.PrevInput != 0 || rightPID.PrevInput != 0) resetPID();
#endif
#ifdef L298P_4WD
    if (leftPID.PrevInput != 0 || rightPID.PrevInput != 0 || leftPID_h.PrevInput != 0 || rightPID_h.PrevInput != 0) resetPID();
#endif    
    return;
  }

  /* Compute PID update for each motor */
  dorightPID(&rightPID);
  doleftPID(&leftPID);
#ifdef L298P_4WD
  dorightPID_h(&rightPID_h);
  doleftPID_h(&leftPID_h);
#endif  

  /* Set the motor speeds accordingly */
#ifdef L298P   
  setMotorSpeeds(leftPID.output, rightPID.output);
#endif

#ifdef L298P_4WD
  setMotorSpeeds(leftPID.output,leftPID_h.output, rightPID.output,rightPID_h.output);
#endif

}

At this point all 4 motors can be driven and PID speed-controlled independently. The next tutorial covers how to hook this up to the host computer so the 4WD base can actually drive around.

Lesson 23: Diego 1# 4WD Chassis - 1: Encoders

Because all four wheels are controlled independently, a 4WD base has excellent terrain-handling ability. This article introduces the Diego 1# 4WD base; all source code has been uploaded to github.

The 4WD version of diego 1# will be covered over several articles; this one focuses on reading the encoder data of the 4 motors.

1.1 Hardware

  • Chassis: aluminum alloy
  • Tires: 12 cm rubber tires
  • Motors: 370 DC motors with Hall-effect encoders producing A/B quadrature output

1.2 Controllers

  • Main controller: an Arduino UNO, which handles direction and PWM control of the 4 motors and reads the encoder signals of all 4 motors via interrupts
  • Motor controllers: two L298P boards, bought online, with different control pins
  • Host computer: a Raspberry Pi or a mini PC

2. Reading the encoder data

All of the code is based on the diego 1# code on github; only the 4WD-specific parts are described here.

First, define a preprocessor symbol in the ROSAduinoBridge_diego.ino file. It switches the 4WD code on and off, which makes it easy to toggle the 4WD functionality (comment the line out to fall back to 2WD):

#define L298P_4WD

Diego 1# 4WD reads the A/B quadrature output of all 4 motors with a single Arduino UNO. Online guides usually state that the UNO has only two external interrupts, which would seem to limit it to the A/B output of a single motor; in fact, every I/O pin on an Arduino can serve as a pin-change interrupt by programming the interrupt registers directly.

First, in the encoder_driver.h file, define the Arduino UNO pins that the encoders connect to:

#ifdef ARDUINO_ENC_COUNTER
  //below can be changed, but should be PORTD pins; 
  //otherwise additional changes in the code are required
  #define LEFT_ENC_PIN_A PD2  //pin 2
  #define LEFT_ENC_PIN_B PD3  //pin 3
  
  //below can be changed, but should be PORTC pins
  #define RIGHT_ENC_PIN_A PC2  //pin A2
  #define RIGHT_ENC_PIN_B PC3   //pin A3

#ifdef L298P_4WD
  #define LEFT_H_ENC_PIN_A PD4  //pin 4
  #define LEFT_H_ENC_PIN_B PD5  //pin 5
  
  //below can be changed, but should be PORTC pins
  #define RIGHT_H_ENC_PIN_A PC0  //pin A0
  #define RIGHT_H_ENC_PIN_B PC1   //pin A1
#endif  
#endif

As this file shows, encoder pin definitions for 4WD mode were added on top of the original 2WD ones: the rear-left motor's A/B phases connect to digital pins D4 and D5, and the rear-right motor's A/B phases connect to analog pins A0 and A1. Reading the encoders of 4 motors therefore takes 8 I/O pins; adding PWM and direction control, the base uses 16 I/O pins in total in the Diego 1# 4WD version. That leaves only D0/D1 for the serial link to the host computer and A4/A5 for I2C modules, so the Arduino UNO is used to full capacity. The resulting pin budget is summarized in the sketch below.
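
Collected from encoder_driver.h and DualL298PMotorShield4WD.h above, the allocation looks like this (a summary sketch, not project code):

# Arduino UNO pin budget of the Diego 1# 4WD base.
PINS = {
    'encoders':  ['D2', 'D3', 'A2', 'A3',    # front left / front right A/B
                  'D4', 'D5', 'A0', 'A1'],   # rear left / rear right A/B
    'pwm':       ['D10', 'D6', 'D11', 'D9'], # M1..M4 PWM
    'direction': ['D12', 'D7', 'D13', 'D8'], # M1..M4 direction
    'serial':    ['D0', 'D1'],               # link to the host computer
    'i2c':       ['A4', 'A5'],               # I2C modules
}
# 16 pins for the base plus serial and I2C account for all 20 UNO I/O pins.
assert sum(len(v) for v in PINS.values()) == 20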

Add interrupt handling for the new 4WD pins in the encoder_driver.ino file.

The Arduino UNO has three pin-change interrupt service routines:

  • ISR (PCINT0_vect) covers D8 to D13
  • ISR (PCINT1_vect) covers A0 to A5
  • ISR (PCINT2_vect) covers D0 to D7

diego1# 4wd only needs two of these handlers, ISR(PCINT2_vect) and ISR(PCINT1_vect). The code is as follows:

  ISR (PCINT2_vect){
     static uint8_t enc_last=0;
#ifdef L298P_4WD        
     static uint8_t enc_last_h=0;
#endif          
     enc_last <<=2; //shift previous state two places
     enc_last |= (PIND & (3 << 2)) >> 2; //read the current state into lowest 2 bits

#ifdef L298P_4WD
     enc_last_h<<=2;
     enc_last_h |=(PIND & (3 << 4))>>4;
#endif 
  
     left_enc_pos += ENC_STATES[(enc_last & 0x0f)];
#ifdef L298P_4WD   
     left_h_enc_pos +=ENC_STATES[(enc_last_h & 0x0f)];
#endif    
  }
  
  /* Interrupt routine for RIGHT encoder, taking care of actual counting */
  ISR (PCINT1_vect){
     static uint8_t enc_last=0;
#ifdef L298P_4WD        
     //uint8_t pinct=PINC;
     static uint8_t enc_last_h=0;
#endif   	
     enc_last <<=2; //shift previous state two places
     enc_last |= (PINC & (3 << 2)) >> 2; //read the current state into lowest 2 bits

#ifdef L298P_4WD
     enc_last_h<<=2;
     enc_last_h |=(PINC & 3);
#endif  
     right_enc_pos += ENC_STATES[(enc_last & 0x0f)];
#ifdef L298P_4WD   
     right_h_enc_pos +=ENC_STATES[(enc_last_h & 0x0f)];
#endif
  }

This code reads the AVR port input registers PINC and PIND. Each I/O pin corresponds to one bit of PINC or PIND, and that bit always reflects the pin's current logic level, so inside the ISR we only need to read the relevant bits: for example, (PIND & (3 << 4)) >> 4 reads bits 4 and 5 of PIND, i.e. the current state of D4 and D5.
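
The quadrature decoding itself can be illustrated in a few lines of Python. This is a sketch of what ISR(PCINT2_vect) does for the rear-left encoder on D4/D5; ENC_STATES is assumed to be the standard ROSArduinoBridge lookup table defined elsewhere in encoder_driver.ino.

ENC_STATES = [0, 1, -1, 0, -1, 0, 0, 1, 1, 0, 0, -1, 0, -1, 1, 0]

enc_last_h = 0      # previous + current 2-bit A/B states
left_h_enc_pos = 0  # tick counter

def on_pcint2(pind):
    """Simulate one pin-change interrupt; pind is the PIND register value."""
    global enc_last_h, left_h_enc_pos
    # shift the previous state up and append the current state of D4/D5
    enc_last_h = ((enc_last_h << 2) | ((pind & (3 << 4)) >> 4)) & 0x0F
    left_h_enc_pos += ENC_STATES[enc_last_h]

on_pcint2(0b01 << 4)   # A/B change from 00 to 01: one tick in one direction
on_pcint2(0b11 << 4)   # 01 to 11: the next step in the same direction
print(left_h_enc_pos)  # 2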

Lesson 22: ar_track_alvar - AR Tag Following

ROS has a very handy AR tag package that can both generate and recognize AR tags. Many fun AR applications can be built on top of it; this article introduces the package and uses it to implement AR tag following.

1. Installing ar_track_alvar

ar_track_alvar can be installed with apt-get, which is very convenient:

sudo apt-get install ros-kinetic-ar-track-alvar

After installation there is an ar_track_alvar directory under /opt/ros/kinetic/share/, and its launch subdirectory already contains ready-made launch files.

If you installed the full version of ROS in the first place, this package is already included and does not need to be installed separately.

2. Generating AR tags

ar_track_alvar provides AR tag generation. First, change into the directory where the tag files are stored:

cd ~/diego1/src/diego_ar_tags/data

Next, run the tag-generation command; each invocation generates one tag in this directory:

rosrun ar_track_alvar createMarker 0
rosrun ar_track_alvar createMarker 1
rosrun ar_track_alvar createMarker 2
rosrun ar_track_alvar createMarker 3
rosrun ar_track_alvar createMarker 4

The directory now contains the generated tag files, which are PNG images.

Double-click a file to open it.

Print the tags out so they can be used for following.

3. AR tag following

3.1 Source code

Below is the source of ar_follower.py, extracted from the rbx2 project; it can be used directly.

#!/usr/bin/env python

"""
    ar_follower.py - Version 1.0 2013-08-25
    
    Follow an AR tag published on the /ar_pose_marker topic.  The /ar_pose_marker topic
    is published by the ar_track_alvar package
    
    Created for the Pi Robot Project: http://www.pirobot.org
    Copyright (c) 2013 Patrick Goebel.  All rights reserved.

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.
    
    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details at:
    
    http://www.gnu.org/licenses/gpl.html
"""

import rospy
from ar_track_alvar_msgs.msg import AlvarMarkers
from geometry_msgs.msg import Twist
from math import copysign

class ARFollower():
    def __init__(self):
        rospy.init_node("ar_follower")
                        
        # Set the shutdown function (stop the robot)
        rospy.on_shutdown(self.shutdown)
        
        # How often should we update the robot's motion?
        self.rate = rospy.get_param("~rate", 10)
        r = rospy.Rate(self.rate) 
        
        # The maximum rotation speed in radians per second
        self.max_angular_speed = rospy.get_param("~max_angular_speed", 2.0)
        
        # The minimum rotation speed in radians per second
        self.min_angular_speed = rospy.get_param("~min_angular_speed", 0.5)
        
        # The maximum distance a target can be from the robot for us to track
        self.max_x = rospy.get_param("~max_x", 20.0)
        
        # The goal distance (in meters) to keep between the robot and the marker
        self.goal_x = rospy.get_param("~goal_x", 0.6)
        
        # How far away from the goal distance (in meters) before the robot reacts
        self.x_threshold = rospy.get_param("~x_threshold", 0.05)
        
        # How far away from being centered (y displacement) on the AR marker
        # before the robot reacts (units are meters)
        self.y_threshold = rospy.get_param("~y_threshold", 0.05)
        
        # How much do we weight the goal distance (x) when making a movement
        self.x_scale = rospy.get_param("~x_scale", 0.5)

        # How much do we weight y-displacement when making a movement        
        self.y_scale = rospy.get_param("~y_scale", 1.0)
        
        # The max linear speed in meters per second
        self.max_linear_speed = rospy.get_param("~max_linear_speed", 0.3)
        
        # The minimum linear speed in meters per second
        self.min_linear_speed = rospy.get_param("~min_linear_speed", 0.1)

        # Publisher to control the robot's movement
        self.cmd_vel_pub = rospy.Publisher('cmd_vel', Twist, queue_size=5)
        
        # Initialize the movement command
        self.move_cmd = Twist()
        
        # Set flag to indicate when the AR marker is visible
        self.target_visible = False
        
        # Wait for the ar_pose_marker topic to become available
        rospy.loginfo("Waiting for ar_pose_marker topic...")
        rospy.wait_for_message('ar_pose_marker', AlvarMarkers)
        
        # Subscribe to the ar_pose_marker topic to get the image width and height
        rospy.Subscriber('ar_pose_marker', AlvarMarkers, self.set_cmd_vel)
        
        rospy.loginfo("Marker messages detected. Starting follower...")
        
        # Begin the cmd_vel publishing loop
        while not rospy.is_shutdown():
            # Send the Twist command to the robot
            self.cmd_vel_pub.publish(self.move_cmd)
            
            # Sleep for 1/self.rate seconds
            r.sleep()

    def set_cmd_vel(self, msg):
        # Pick off the first marker (in case there is more than one)
        try:
            marker = msg.markers[0]
            if not self.target_visible:
                rospy.loginfo("FOLLOWER is Tracking Target!")
            self.target_visible = True
        except:
            # If the target is lost, stop the robot by slowing it incrementally
            self.move_cmd.linear.x /= 1.5
            self.move_cmd.angular.z /= 1.5
            
            if self.target_visible:
                rospy.loginfo("FOLLOWER LOST Target!")
            self.target_visible = False
            
            return
                
        # Get the displacement of the marker relative to the base
        target_offset_y = marker.pose.pose.position.y
        rospy.loginfo("target_offset_y"+str(target_offset_y))
        
        # Get the distance of the marker from the base
        target_offset_x = marker.pose.pose.position.x
        rospy.loginfo("target_offset_x"+str(target_offset_x))
        
        # Rotate the robot only if the displacement of the target exceeds the threshold
        if abs(target_offset_y) > self.y_threshold:
            # Set the rotation speed proportional to the displacement of the target
            speed = target_offset_y * self.y_scale
            self.move_cmd.angular.z = copysign(max(self.min_angular_speed,
                                        min(self.max_angular_speed, abs(speed))), speed)
        else:
            self.move_cmd.angular.z = 0.0
 
        # Now get the linear speed
        if abs(target_offset_x - self.goal_x) > self.x_threshold:
            speed = (target_offset_x - self.goal_x) * self.x_scale
            if speed < 0:
                speed *= 1.5
            self.move_cmd.linear.x = copysign(min(self.max_linear_speed, max(self.min_linear_speed, abs(speed))), speed)
        else:
            self.move_cmd.linear.x = 0.0

    def shutdown(self):
        rospy.loginfo("Stopping the robot...")
        self.cmd_vel_pub.publish(Twist())
        rospy.sleep(1)     

if __name__ == '__main__':
    try:
        ARFollower()
        rospy.spin()
    except rospy.ROSInterruptException:
        rospy.loginfo("AR follower node terminated.")

3.2 Code walkthrough

The principle of this code is very simple: subscribe to the /ar_pose_marker topic published by ar_track_alvar, and convert its messages into cmd_vel commands that move the robot. Let's first look at the /ar_pose_marker topic.

The screenshot above shows the output of running rostopic echo /ar_pose_marker. As you can see, the topic carries a Pose message, and we only need the x, y, z values of its position part: x is the distance from the robot to the AR tag, and y is the lateral offset from the camera's center. Once /ar_pose_marker is understood, the code is straightforward, so we won't go through it line by line.
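
As a worked example of the control law in set_cmd_vel (a sketch mirroring the code above, with the gains from diego_ar_follower.launch below), suppose the marker is 15 cm off-center and 1.2 m away:

from math import copysign

target_offset_y = 0.15  # marker 15 cm off the camera center
target_offset_x = 1.20  # marker 1.2 m away

# angular: y_scale = 2.0, clamped to [0.2, 1.0] rad/s
speed = target_offset_y * 2.0
angular_z = copysign(max(0.2, min(1.0, abs(speed))), speed)  # 0.3

# linear: goal_x = 0.7, x_scale = 1.0, clamped to [0.05, 0.2] m/s
speed = (target_offset_x - 0.7) * 1.0
linear_x = copysign(min(0.2, max(0.05, abs(speed))), speed)  # 0.5 clamps to 0.2

print(angular_z, linear_x)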

3.3 Launch files

The ar_large_markers_xition.launch file starts the ar_track_alvar node:

<launch>
     <arg name="marker_size" default="12.5" /> 
     <arg name="max_new_marker_error" default="0.08" />
     <arg name="max_track_error" default="0.4" />
     <arg name="cam_image_topic" default="/camera/depth_registered/points" />
     <arg name="cam_info_topic" default="/camera/rgb/camera_info" /> 
     <arg name="output_frame" default="/base_link" />
     <arg name="debug" default="false" />
     <arg if="$(arg debug)" name="launch_prefix" value="xterm -e gdb --args" />
     <arg unless="$(arg debug)" name="launch_prefix" value="" />
     <node pkg="tf" type="static_transform_publisher" name="base_link_2_camera_link" args="0.0 0.0 0.2 0 0 0 /base_link /camera_link 40"/> 
     <node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkers" respawn="false" output="screen" args="$(arg marker_size) $(arg max_new_marker_error) $(arg max_track_error) $(arg cam_image_topic) $(arg cam_info_topic) $(arg output_frame)" launch-prefix="$(arg launch_prefix)" /> <node name="ar_track_alvar" pkg="ar_track_alvar" type="individualMarkers" respawn="false" output="screen" args="$(arg marker_size) $(arg max_new_marker_error) $(arg max_track_error) $(arg cam_image_topic) $(arg cam_info_topic) $(arg output_frame)" launch-prefix="$(arg launch_prefix)" />
</launch>

The diego_ar_follower.launch file starts the ar_follower node; the follower's parameters can be set in this file.

<launch>

     <node name="arduino" pkg="ros_arduino_python" type="arduino_node.py" output="screen">
         <rosparam file="$(find ros_arduino_python)/config/my_arduino_params.yaml" command="load" />
     </node>

     <node pkg="diego_ar_tags" name="ar_follower" type="ar_follower.py" clear_params="true" output="screen">
         <rosparam>
             rate: 10
             max_x: 20.0
             goal_x: 0.7
             x_threshold: 0.1
             y_threshold: 0.05
             y_scale: 2.0
             x_scale: 1.0
             max_angular_speed: 1.0
             min_angular_speed: 0.2
             max_linear_speed: 0.2
             min_linear_speed: 0.05
         </rosparam>

    </node>
</launch>

4. Starting the nodes

Start the camera. Diego uses an Xtion depth camera, so we first launch openni:

roslaunch diego_vision openni_node.launch

Start the ar_track_alvar node:

roslaunch diego_ar_tags ar_large_markers_xition.launch

Start the diego_ar_follower node:

roslaunch diego_ar_tags diego_ar_follower.launch

Now just hold one of the printed AR tags in front of the Xtion camera and move around; the robot will follow you.