Team Commando Tutorial/Explanation of Methods

Requirements

  • iRobot Create
  • Command Module
  • Virtual walls
  • PC with Windows OS
  • RoboRealm
  • MATLAB
  • webcam
  • flashlight
  • reflective tape

Things You Need to Install

  • Proprietary Logitech webcam software (included on CD)
    • Kind of annoying, but the webcam probably won't work without it.

Guide

Overview

Team Commando is so named because it uses the onboard Command Module to control the robot. Unlike Team R2D2, which relied on wireless communication, Team Commando performed all of its computation on the physical body of the robot itself.

Generally, the implementation works like this:

The robot carries both the Command Module and an onboard laptop/webcam combination; the two are independent systems. The Command Module controls navigation, as described in the following section. While the robot is navigating the hallways, the computer vision system runs a motion detection filter and does not check for windows; once it detects little to no motion, it concludes that the robot has stopped inside a room and begins running its detection script, which is covered in more detail below.
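
To make the hand-off concrete, here is a rough MATLAB sketch (not the team's actual script) of the gating loop the laptop runs while the robot is driving. The rr_get_variable helper and the MOTION_LEVEL variable name are stand-ins for whatever your RoboRealm wrapper and motion filter actually expose, and the thresholds are made up.

  function wait_for_stop_then_detect()
  % Sketch of the motion-gating loop: poll a motion reading from RoboRealm and
  % only start the window-detection pass once the scene has been still for a
  % few seconds.  rr_get_variable and MOTION_LEVEL are placeholders.
  stillThreshold     = 5;    % readings below this count as "not moving" (made up)
  stillSecondsNeeded = 3;    % how long the scene must stay still (made up)
  stillSince = tic;
  while true
      motion = rr_get_variable('MOTION_LEVEL');   % placeholder variable name
      if motion > stillThreshold
          stillSince = tic;                       % still moving; reset the timer
      elseif toc(stillSince) > stillSecondsNeeded
          break;                                  % robot has stopped inside a room
      end
      pause(0.2);
  end
  % At this point the detection pipeline (bright-object filter + pan) would be
  % loaded into RoboRealm and the scan started.
  fprintf('Robot appears to have stopped; start window detection here.\n');
  end

  function value = rr_get_variable(name) %#ok<INUSD>
  % Stub standing in for the real RoboRealm API get-variable call.
  value = 0;
  end

Note that the coordination is one-way: the Command Module never signals the laptop, so the timing of this loop has to line up with the robot's 50-second stop inside each room.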

Navigation

Our general navigation algorithm is as follows:

  • Find checkpoint
  • Move along the wall a preset distance
  • Now the robot is at a door. Face the door.
  • Drive at maximum speed to clear the door threshold. Once the threshold is clear, resume normal speed
  • If the door is open, enter the room, wait until image processing is done, then turn around and leave
  • Find the next checkpoint... etc.

A checkpoint can be a virtual wall or the real wall immediately next to the room the robot has just left. The first checkpoint is simply the robot's starting point. The robot follows the wall whenever it is measuring distance, because the wall keeps it driving straight, which improves precision. The per-room cycle is sketched below.
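
For illustration only, that cycle can be written out as a simple loop. The real navigation runs as C code on the Command Module and is not reproduced on this page; the distances and helper names below are made up.

  function navigation_cycle_sketch()
  % Outline of the per-room navigation cycle described above.  This only
  % illustrates the control flow; the distances and helpers are made up.
  roomDistances = [2.0, 3.5, 3.5];   % hypothetical preset wall-following distances (m)
  for i = 1:numel(roomDistances)
      fprintf('-- room %d --\n', i);
      do_step('find checkpoint (virtual wall, or the wall by the previous room)');
      do_step(sprintf('follow the wall for %.1f m', roomDistances(i)));
      do_step('turn to face the door');
      do_step('drive at max speed over the threshold, then resume normal speed');
      if door_is_open()                % only open rooms get scanned
          do_step('wait ~50 s while the laptop scans for windows');
          do_step('turn around and leave the room');
      end
  end
  end

  function do_step(msg)
  % Stand-in for the Command Module routine that would actually perform the action.
  fprintf('   %s\n', msg);
  end

  function tf = door_is_open()
  % Stand-in for however the robot notices a closed door (presumably its bump sensors).
  tf = true;
  end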

Detecting Windows

  • Method
    • The laptop and webcam are affixed to the robot frame via velcro and rubber band.
    • A flashlight is affixed to the webcam with duct tape (purple). The flashlight must be turned on manually.
    • The MATLAB script is initialized from the laptop. This script both opens and controls RoboRealm.
      • The API settings in RoboRealm must be turned ON to interface with MATLAB. You shouldn't have to alter the port numbers or anything.
    • Initially, a motion detection script is loaded. This lets the script know when the robot is moving, so that it does not check for windows and potentially generate false positives.
    • Once the robot enters a room, it stops. Currently, it waits for 50 seconds, the time it takes for the script to recognize it has stopped moving, load the detection module, and pan the webcam/flashlight 180 degrees.
    • The RoboRealm filter loaded upon entering a room is set to find only objects above a certain intensity (brightness). Under ideal conditions, only the reflective tape will be picked up by this filter.
    • When tape is recognized, the "Center of Gravity" module automatically targets it. If there is no tape, this module should target nothing.
    • The MATLAB script pauses the pan approximately every half turn and checks whether the COG variable exists. If it does, MATLAB records a one in the corresponding vector entry, signifying an open window (a sketch of this check-and-report flow appears after this list).
    • Once the pan and check finish, the motion detection script is reloaded and the robot begins moving again. This is why the timing between the two must be coordinated.
    • This process repeats for every room on the floor. Once finished, the email script runs, reading through the vector and noting the corresponding rooms with open windows. Currently, it is set to email Dr. Heyer. You probably want to change this, at least during testing.
  • Technical Notes
    • The interface between MATLAB and RoboRealm is very simple. The variable sets and gets follow the same form as the ones in our script; all you should have to do is change the variable names.
    • The VBScript variables in RoboRealm are a little wonky. Sometimes they decide not to appear. I'm not sure why; you just have to fiddle with them a bit until they do. It is always good to check the variables in MATLAB to make sure it is actually getting the data you think it is.
    • There are apparently RoboRealm APIs for Java, Python, etc., if you don't want to use MatLab. Likely they are just as easy to get running.
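
Since the original script is not reproduced on this page, here is a rough MATLAB sketch of the scan-and-report flow described above. The pan_camera and rr_get_variable helpers are stand-ins for the real camera-pan mechanism and RoboRealm API calls, and the COG_X variable name, room labels, and email addresses are assumptions rather than the team's actual settings.

  function window_report_sketch(roomNames)
  % Sketch of the per-floor scan-and-report logic: pan the camera across each
  % room, mark a 1 when RoboRealm's Center of Gravity module has locked onto
  % something bright (the reflective tape), and email a summary at the end.
  % pan_camera, rr_get_variable, COG_X, the room labels and the addresses are
  % all placeholders/assumptions, not the team's actual code or settings.
  if nargin < 1
      roomNames = {'101', '102', '103'};        % hypothetical room labels
  end
  openWindow = zeros(1, numel(roomNames));      % 1 = open window detected
  for r = 1:numel(roomNames)
      % ... navigation gets the robot into room r, and it stops ...
      for angle = 0:90:180                      % check at a few points across the pan
          pan_camera(angle);
          pause(2);                             % let the intensity filter settle
          cog = rr_get_variable('COG_X');       % empty if nothing bright was found
          if ~isempty(cog)
              openWindow(r) = 1;
          end
      end
  end
  % Build and send the report.  sendmail needs SMTP preferences pointing at a
  % reachable server; the addresses below are placeholders.
  openRooms = roomNames(openWindow == 1);
  if isempty(openRooms)
      msg = 'No open windows detected.';
  else
      msg = ['Rooms with open windows: ' sprintf('%s ', openRooms{:})];
  end
  setpref('Internet', 'SMTP_Server', 'smtp.example.edu');
  setpref('Internet', 'E_mail', 'robot@example.edu');
  sendmail('recipient@example.edu', 'Open window report', msg);
  end

  function pan_camera(angle) %#ok<INUSD>
  % Stub for whatever mechanism pans the webcam/flashlight.
  end

  function value = rr_get_variable(name) %#ok<INUSD>
  % Stub for the RoboRealm API get-variable call used by the real script.
  value = '';
  end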

Actual Code

  • Sticker for windows
    • Bottom of sticker should be reflective
    • Reflective part is only visible when window open
    • Sticker mockup:

Outstanding Issues/Ideas for Further Work

  • Establishing actual communication between the laptop and the Command Module would be more efficient. It is unclear whether or not serial port communication is possible.
  • If communication between the two can be established, the camera could in theory help with navigation/checkpoint finding (similar to Team R2D2's method).
  • With the current motion detection script, the robot has no way of knowing it has skipped a room in the event that the door is closed.
  • If lights are on in a room, the robot will likely detect a window. It is also prone to accidentally picking up reflections from the metal on desks.
  • The webcam is in a rather precarious position on top of the robot, especially considering how hard the robot rams closed doors.
  • When headed directly at a point with no fixed reference, e.g. a white wall, the motion detection does not detect any change between frames and so begins running the scanning script. The flashlight somewhat mitigates this problem, but not entirely.
  • The robot is very slow.
  • She also has a hard time dealing with the small ledge between the hallway and the rooms. Some rearrangement of the laptop mounting might solve this issue; it is currently unbalanced.
  • Of course, there are many other detection methods which could be implemented. Have at it!