Day 186 – Plans for Autonomy and ROS

Hello!

I am Barney, and I’ll be taking over autonomy for S.A.R.T. for the next two years. This post isn’t actually about the progress of autonomy, but rather my plans for it.

The main goal for the Mk 4 is to be able to follow the left wall throughout the course. Though wall following is an inefficient way to reach the goal, it’s going to be the main component of the autonomy program. The left-wall-following program will use the four distance sensors located on the robot to determine its relative position on that section of the course (i.e. intersection, dead end, straight lane, etc.). While I understand that this may not work well on certain terrain sections of the course, I will attempt to work around those issues to provide a steadier transition, utilizing concepts such as PID control.
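To give an idea of what that looks like, here is a minimal sketch of left-wall following with a PID controller. The target distance, gains, base speed, and the idea of returning per-side motor speeds are all placeholder assumptions for illustration, not the actual Mk 4 code:

```python
class PID:
    """Basic PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


TARGET_DISTANCE = 0.15  # desired gap to the left wall in metres (assumed)


def follow_left_wall(left_distance, pid, dt, base_speed=0.3):
    """Return (left_speed, right_speed) motor commands.

    If the robot drifts too far from the wall (positive error), the
    correction slows the left side and speeds up the right, steering
    the robot back toward the wall; too close works the other way.
    """
    error = left_distance - TARGET_DISTANCE
    correction = pid.update(error, dt)
    return base_speed - correction, base_speed + correction
```

In practice the gains would be tuned on the actual course, and the other three distance sensors would feed the intersection/dead-end detection described above.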

Next in line will be integrating the Robot Operating System (ROS) into the Mk 4. This will allow us to utilize several functions and algorithms that ROS provides, eliminating issues we have had in the past. It will also make it easier for a third party to take our source code and run it on their own machine.

I will attempt to integrate these functions after the main goal is completed:

  • Basic top-down mapping (a basic map with only the use of the accelerometer and gyroscope)
  • Integration with SLAM (I will be working with the team member responsible for SLAM)

For the top-down mapping, I will be using a gyroscope and accelerometer as a basic method to track the position of the robot. To better understand the concept, imagine dragging your mouse through a maze in paint.net with the pencil tool. This will give us a basic map of the area if all else fails — somewhat similar to the image below, but in a more organised fashion (keeping within the boundaries of the course).
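The idea can be sketched as simple dead reckoning: integrate the gyroscope’s yaw rate into a heading and the accelerometer’s forward acceleration into a speed, then trace out (x, y) points. The sample format and fixed timestep here are assumptions for illustration (real IMU data would also need calibration and drift correction):

```python
import math


def dead_reckon(samples, dt):
    """Turn IMU samples into a rough 2-D path, starting at the origin.

    samples: list of (yaw_rate_rad_s, forward_accel_m_s2) tuples
             (an assumed format for this sketch).
    Returns a list of (x, y) positions, one per sample plus the start.
    """
    x = y = heading = speed = 0.0
    path = [(x, y)]
    for yaw_rate, accel in samples:
        heading += yaw_rate * dt   # gyroscope -> heading
        speed += accel * dt        # accelerometer -> forward speed
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        path.append((x, y))
    return path
```

Plotting the returned points is essentially the “pencil through the maze” picture described above; the main real-world caveat is that double-integrating accelerometer noise makes the path drift quickly.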

[Image: a robot’s path mapped from inertial sensor data. Source: https://arxiv.org/pdf/1509.02154.pdf]

Regarding integration with SLAM, I will be working with teammate Michael to utilize SLAM for autonomy purposes. As this will be computationally intensive, the work will likely be offloaded to the control unit. This will allow us to create a complex 3D map of the area and later use it for autonomous navigation — somewhat similar to the image below:

[Image: a 3D map produced by SLAM. Source: https://www.spar3d.com/news/uav-uas/slam-ai-autonomous-3d-mapping-drone/]

If you have any questions, feel free to comment below.


Regards,

Barney Bruckner
S.A.R.T. Autonomy
