For the past two months, a few colleagues from Netcetera and I have been working on an after-hours project called RoboMap. The idea is to build an autonomous robot that can enter any room and create a map of the place.
Initially, we started with a setup of a WowWee Rovio + BeagleBoard + Kinect on top of it. The great thing about this setup was that the programming side of controlling the robot was extremely easy: the Rovio has an embedded HTTP server that accepts commands via HTTP requests, so driving the robot was simply a matter of sending and receiving data over HTTP.
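To give an idea of how simple this is, here is a minimal sketch of driving a Rovio from Java. The `rev.cgi` endpoint with `Cmd=nav` and `action=18` (manual drive) follows the published Rovio API; the host address and the speed value used below are just examples.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RovioClient {
    private final String host;

    public RovioClient(String host) {
        this.host = host;
    }

    /** Builds the manual-drive command URL (action=18 in the Rovio API). */
    public String driveCommandUrl(int drive, int speed) {
        return "http://" + host + "/rev.cgi?Cmd=nav&action=18"
                + "&drive=" + drive + "&speed=" + speed;
    }

    /** Sends the command with a plain HTTP GET and returns the response code. */
    public int drive(int drive, int speed) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(driveCommandUrl(drive, speed)).openConnection();
        conn.setRequestMethod("GET");
        int code = conn.getResponseCode();
        try (InputStream in = conn.getInputStream()) {
            in.readAllBytes(); // drain the response body
        }
        conn.disconnect();
        return code;
    }

    public static void main(String[] args) {
        RovioClient rovio = new RovioClient("192.168.0.10"); // example address
        // drive=1 is "forward" in the Rovio API.
        System.out.println(rovio.driveCommandUrl(1, 5));
    }
}
```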
The BeagleBoard is running Ubuntu for ARM. After some time we realized that we needed more computing power, so the BeagleBoard currently works just as a proxy and we send the Kinect data over WiFi.
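On the receiving end, reading a depth frame from the proxy looks roughly like this. The wire format shown (640x480 16-bit depth values, big-endian, no header) and the proxy address are assumptions for illustration, not our actual protocol.

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

public class DepthStreamReader {
    public static final int WIDTH = 640;
    public static final int HEIGHT = 480;

    /** Reads one full depth frame (16-bit big-endian values) from the stream. */
    public static short[] readFrame(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        short[] depth = new short[WIDTH * HEIGHT];
        for (int i = 0; i < depth.length; i++) {
            depth[i] = data.readShort(); // blocks until two bytes are available
        }
        return depth;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical proxy address on the robot's WiFi network.
        try (Socket socket = new Socket("192.168.0.20", 5000)) {
            short[] frame = readFrame(socket.getInputStream());
            System.out.println("depth at center: "
                    + frame[(HEIGHT / 2) * WIDTH + WIDTH / 2]);
        }
    }
}
```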

On the desktop/laptop side, we read this data directly from the socket, and we have written a Java program that reads the Kinect data and controls the robot accordingly. The biggest challenge at this point was stitching the images we get from the Kinect. In order to do this, we needed to find the location of the robot in 3D. We did get some location data from the Rovio, but it was very inaccurate. The Kinect, on the other hand, has a built-in accelerometer, so we tried combining that data, but apparently this was not such a good idea. After some digging around, we decided that the best way to do this is with computer vision. Without getting into details, we used a combination of nestk + OpenCV + PCL. I personally have used OpenCV in the past, but point clouds are also something very awesome to work with for 3D data.
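Before any stitching can happen, each depth pixel has to be turned into a 3D point. This is the standard pinhole-camera back-projection; the intrinsics below (fx, fy, cx, cy) are the commonly used default values for the Kinect depth camera, not our calibrated ones.

```java
public class DepthToPoint {
    static final double FX = 525.0, FY = 525.0;   // focal lengths in pixels
    static final double CX = 319.5, CY = 239.5;   // principal point

    /** Back-projects pixel (u, v) with depth z (in meters) to a 3D point. */
    public static double[] project(int u, int v, double z) {
        double x = (u - CX) * z / FX;
        double y = (v - CY) * z / FY;
        return new double[] { x, y, z };
    }

    public static void main(String[] args) {
        double[] p = project(320, 240, 2.0); // a pixel near the image center, 2 m away
        System.out.printf("%.4f %.4f %.4f%n", p[0], p[1], p[2]);
    }
}
```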
We get the data from the Kinect using libfreenect. This is a story in itself, and I hope I'll get the time to explain this part in some other post.
By using these libraries, we were able to get the information needed to stitch every new scene onto the full map.
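Once the pose of a new frame relative to the map is estimated (that is what the nestk/OpenCV/PCL side gives us), stitching itself boils down to applying the rigid transform p' = R*p + t to every point of the new scene before appending it to the map. A minimal sketch of that last step:

```java
import java.util.ArrayList;
import java.util.List;

public class CloudStitcher {
    /** Applies p' = R*p + t to a point. R is a row-major 3x3 rotation, t has length 3. */
    static double[] transform(double[][] r, double[] t, double[] p) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++) {
            out[i] = r[i][0] * p[0] + r[i][1] * p[1] + r[i][2] * p[2] + t[i];
        }
        return out;
    }

    /** Transforms every point of the new scene and appends it to the map. */
    static void stitch(List<double[]> map, List<double[]> scene,
                       double[][] r, double[] t) {
        for (double[] p : scene) {
            map.add(transform(r, t, p));
        }
    }

    public static void main(String[] args) {
        List<double[]> map = new ArrayList<>();
        List<double[]> scene = new ArrayList<>();
        scene.add(new double[] { 1, 0, 0 });

        // Example pose: a 90-degree rotation about the Y axis plus a small
        // translation, as if the robot turned in place and rolled forward a bit.
        double[][] r = { { 0, 0, 1 }, { 0, 1, 0 }, { -1, 0, 0 } };
        double[] t = { 0.1, 0, 0 };
        stitch(map, scene, r, t);
        double[] p = map.get(0);
        System.out.printf("%.2f %.2f %.2f%n", p[0], p[1], p[2]); // prints 0.10 0.00 -1.00
    }
}
```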
After working on this for some time, we decided to go with a different robotic base and switched to the Lynxmotion A4WD1 v2 rover. This was done to make the robot more stable; the coolness factor did not hurt either.
We got the version of the robot that has just the frame and the motors, so we use an Arduino to control them. We first want to iron out most of the bugs, and after that we plan to release the source code of the entire project. Here is the video; keep in mind that this is work in progress.
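The protocol between the host and the Arduino is not part of this post, so the line format below (`"M <left> <right>\n"`) is purely hypothetical; it only illustrates the shape of sending per-side motor speeds to a microcontroller over a serial link.

```java
public class MotorCommand {
    /** Formats a hypothetical motor command: speeds in -255..255 per side. */
    public static String format(int left, int right) {
        left = Math.max(-255, Math.min(255, left));
        right = Math.max(-255, Math.min(255, right));
        return "M " + left + " " + right + "\n";
    }

    public static void main(String[] args) {
        System.out.print(format(200, 200));  // drive forward
        System.out.print(format(-150, 150)); // spin in place
    }
}
```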

Published on Oct 27, 2011 by Mite Mitreski
Updated on 10/27/2011 03:33:00 AM