Introduction

TurtleBot3 is a small, affordable, programmable, open-source robot platform for education, research, hobby, and product development. It is designed for use with ROS 2 and can be programmed in Python, C++, and other languages. The robot is equipped with a camera, a lidar, and differential-drive motors, making it suitable for a wide range of applications such as mapping, localization, and autonomous navigation.


Goals

The project was done as the final project for the class Intro to Robotics Research (7785).

The goal of the project is to navigate a maze using signs posted on the walls. The robot has no global information other than these local signs; a sample path is shown here.

Work Done

The project required training a vision model that could classify the different signs accurately and reliably. Moreover, the model had to run locally on the robot for quick detection, since sending many frames over the network introduced lag and was not a feasible solution.

We initially decided to train a TensorFlow model and then convert it to TensorFlow Lite so it would run smoothly on the robot. The model used convolution and pooling layers to extract important features from the image, followed by two dense layers to classify it into one of the six output signs.
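
As a rough illustration of that kind of architecture (not the exact network we trained; the input size, layer widths, and filter counts below are assumptions), a Keras model along these lines could look like this, together with the TensorFlow Lite conversion step:

import tensorflow as tf

NUM_CLASSES = 6  # six wall signs

# Illustrative CNN: convolution + pooling feature extraction, then two dense layers.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),                   # assumed input size
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),                # first dense layer
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),    # second dense layer -> class scores
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After training, conversion to TensorFlow Lite for on-robot inference:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()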

Because it did not meet our expectations (see the results below), we then decided to train a k-nearest-neighbours (KNN) model to classify the images. This required detecting the boundary of each sign, cropping the image to that boundary, using dilation and erosion to remove noise and extract features, producing a high-contrast version of the sign, and finally feeding it into the KNN model.
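
A rough sketch of that preprocessing-plus-KNN pipeline, using OpenCV and scikit-learn, is shown below; the threshold method, kernel size, crop size, and number of neighbours are all assumptions rather than the exact values we used.

import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extract_sign(frame_bgr):
    """Return a flattened, high-contrast crop of the largest sign-like contour, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    binary = cv2.erode(binary, kernel, iterations=1)    # remove speckle noise
    binary = cv2.dilate(binary, kernel, iterations=1)   # restore the sign strokes
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    crop = cv2.resize(binary[y:y + h, x:x + w], (32, 32))  # assumed feature size
    return crop.flatten().astype(np.float32)

# Training and prediction (train_images / train_labels stand in for the labelled dataset):
# knn = KNeighborsClassifier(n_neighbors=3)
# knn.fit([extract_sign(img) for img in train_images], train_labels)
# sign_class = knn.predict([extract_sign(camera_frame)])[0]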

Once the image classification model was ready, we had to build a basic two-level control architecture for the robot. A high-level controller was responsible for deciding whether a sign had been detected, turning according to the sign, and then moving. A low-level PID controller made the robot drive in a straight line until its lidar detected an obstacle, stopping at a set distance from the wall.
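
As a minimal sketch of that low-level idea (the gains, speed limit, and stand-off distance below are assumptions, not the values tuned on the robot), a PID loop on the front lidar range can drive the robot forward and bring it to rest at a fixed distance from the wall:

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

STOP_DISTANCE = 0.4   # metres from the wall (assumed)
pid = PID(kp=0.8, ki=0.0, kd=0.1)

def forward_speed(front_range_m, dt):
    """Map the front lidar range to a forward velocity command."""
    error = front_range_m - STOP_DISTANCE          # positive while still far from the wall
    speed = pid.step(error, dt)
    return max(0.0, min(speed, 0.2))               # clamp to TurtleBot3-friendly speeds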

For this, two different ROS 2 nodes were written. One constantly published the image classification result. The other implemented a state machine that transitioned the robot between states such as pause, turn right, and go straight.
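
A minimal rclpy sketch of the state-machine side is shown below; the topic names, message type, sign-to-state mapping, and speeds are illustrative assumptions, with a separate node assumed to publish the classifier output on /sign_class.

import rclpy
from rclpy.node import Node
from std_msgs.msg import Int32
from geometry_msgs.msg import Twist

class MazeStateMachine(Node):
    def __init__(self):
        super().__init__("maze_state_machine")
        self.state = "GO_STRAIGHT"                       # other states: PAUSE, TURN_LEFT, TURN_RIGHT, ...
        self.create_subscription(Int32, "/sign_class", self.on_sign, 10)
        self.cmd_pub = self.create_publisher(Twist, "/cmd_vel", 10)
        self.create_timer(0.1, self.control_loop)        # 10 Hz control loop

    def on_sign(self, msg):
        # Map the classifier output to a state transition (mapping is illustrative).
        if msg.data == 1:
            self.state = "TURN_LEFT"
        elif msg.data == 2:
            self.state = "TURN_RIGHT"

    def control_loop(self):
        cmd = Twist()
        if self.state == "GO_STRAIGHT":
            cmd.linear.x = 0.15
        elif self.state == "TURN_LEFT":
            cmd.angular.z = 0.5
        elif self.state == "TURN_RIGHT":
            cmd.angular.z = -0.5
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(MazeStateMachine())

if __name__ == "__main__":
    main()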

The high-level controller had to deal with a variety of edge cases, from detecting a wrong sign to finding no sign at all. Various approaches were used to handle these. One of my personal favourites was the age-old technique people use to solve mazes: when you cannot see and do not know where to move, go to the nearest wall and follow it until you find new information. This corrected the error in many cases.
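
In code, that fallback can be as simple as holding a fixed distance to the wall on one side; the choice of the right-hand wall, the target distance, and the gain below are assumptions for illustration.

def wall_follow_command(right_range_m, target=0.3, k=1.5, forward=0.1):
    """Return (linear_x, angular_z): drive forward, turning to hold the wall distance on the right."""
    error = right_range_m - target       # positive if we drifted away from the wall
    return forward, -k * error           # steer back toward (or away from) the wall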

Results

The TensorFlow model worked well on the training and validation data but gave poor results on the test dataset. Moreover, we found that getting the TensorFlow Lite library running on the robot was going to be tedious. So we moved to the KNN approach.

The KNN achieved an accuracy of over 95% and worked well for all of our test cases. Implementing it on the robot was easy, since the robot only had to perform simple image transformations on board. The robot performed well and reached the goal in almost all runs, except for some rare cases where it misclassified the end-goal sign.

Contact me!

📞 (404) 388-3944    📍 Atlanta, GA 30309    📧 bhushan.pawaskar@hotmail.com
