Team Commando Tutorial/Explanation of Methods
Revision as of 22:35, 16 May 2012

Requirements

  • iRobot Create
  • Command Module
  • Virtual walls
  • PC with Windows OS
  • RoboRealm
  • MATLAB
  • Webcam
  • Flashlight
  • Reflective tape

Guide

Overview

Team Commando takes its name from the onboard Command Module used to control the robot. In contrast to Team R2D2, which relied on wireless communication, Team Commando performed all computation on the physical body of the robot itself.

Generally, the implementation works like this:

The robot carries both the Command Module and an onboard laptop/webcam combination, which operate as two independent systems. The Command Module handles navigation, as described in the following section. While the robot is navigating the hallways, the computer vision system runs a motion detection filter and knows not to check for windows; once it detects little to no motion, it concludes that the robot has stopped within a room and begins to run its detection script, which is covered in more detail below.

Navigation

Our general navigation algorithm is as follows:

  • Find checkpoint
  • Move along the wall a preset distance
  • The robot is now at a door; face the door
  • Drive at maximum speed to clear the threshold; once the threshold is clear, resume normal speed
  • If the door is open, enter the room, wait until image processing is done, then turn around and leave
  • Find the next checkpoint, and so on

A checkpoint can be a virtual wall or the real wall immediately next to the room the robot has just left. The first checkpoint is simply the robot's starting point. The robot follows the wall whenever it is measuring distance, because the wall keeps it moving straight ahead, which improves precision.
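The navigation loop above can be sketched schematically. This is a Python sketch, not the actual Command Module firmware; the function name, the action strings, and the idea of representing each door as an open/closed flag are all invented for illustration:

```python
def patrol_plan(doors_open):
    """Return the sequence of navigation actions described above, given
    which doors on the floor are open. Schematic only: the real Command
    Module executes these steps on the robot itself."""
    actions = []
    for is_open in doors_open:
        actions.append("find checkpoint")
        actions.append("follow wall preset distance")
        actions.append("face door")
        actions.append("clear threshold at max speed")
        if is_open:
            actions += ["enter room",
                        "wait for image processing",
                        "turn around and leave"]
        # If the door is closed, the robot currently has no way of
        # registering that it skipped the room (see Outstanding Issues).
    return actions
```

Note that the closed-door branch does nothing: as the Outstanding Issues section observes, the robot cannot currently tell that it has skipped a room.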

Detecting Windows

  • Method
    • The laptop and webcam are affixed to the robot frame with Velcro and a rubber band.
    • A flashlight is affixed to the webcam with (purple) duct tape. The flashlight must be turned on manually.
    • The MATLAB script is initialized from the laptop. This script both opens and controls RoboRealm.
    • Initially, a motion detection script is loaded. This lets the script know while the robot is moving, so that it does not check for windows and potentially generate false positives.
    • Once the robot enters a room, it stops. Currently, it waits for 50 seconds, the time it takes for the script to recognize that the robot has stopped moving, load the detection module, and pan the webcam/flashlight 180 degrees.
    • The RoboRealm filter loaded upon entering a room is set to find only objects above a certain intensity (brightness). Under ideal conditions, only the reflective tape will be recognized by this filter.
    • When tape is recognized, the "Center of Gravity" module automatically targets it. If there is no tape, this module should target nothing.
    • The MATLAB script stops panning every half turn (approximately) and checks whether the COG variable exists. If it does, MATLAB records this as a one in the corresponding vector entry, signifying an open window.
    • Once the pan and check finishes, the motion detection script is reloaded and the robot begins moving again. This is why the timing between the two systems must be coordinated.
    • This process repeats for every room on the floor.
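The intensity filter and the pan-and-check step can be sketched as follows. This is a Python illustration under stated assumptions, not the real pipeline: the actual filtering runs inside RoboRealm, the `frame` representation (a list of (x, y, brightness) tuples) and the threshold value of 240 are invented, and `detect_tape` stands in for RoboRealm's intensity filter plus Center of Gravity module:

```python
def detect_tape(frame, threshold=240):
    """Keep only pixels brighter than `threshold` (0-255 scale) and
    return their center of gravity, or None if nothing passes the
    filter. Under ideal conditions only the reflective tape, lit by
    the flashlight, is bright enough to survive."""
    bright = [(x, y) for x, y, b in frame if b > threshold]
    if not bright:
        return None  # no tape: the COG module targets nothing
    cx = sum(x for x, _ in bright) / len(bright)
    cy = sum(y for _, y in bright) / len(bright)
    return (cx, cy)

def pan_and_check(frames):
    """One frame per half-turn stop of the pan: record a 1 in the
    vector if the COG exists (tape seen, window open), else a 0."""
    return [1 if detect_tape(f) is not None else 0 for f in frames]
```

For example, a frame containing only dim pixels yields no COG, while a frame with a single saturated blob yields that blob's position and is recorded as an open window.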


  • Sticker for windows
    • Bottom of sticker should be reflective
    • Reflective part is only visible when window open
    • Sticker mockup:

Communicating between navigation and detection

  • Created a motion-sensing filter
  • MATLAB starts the window detection filter when the motion sensor determines that the robot has stopped moving

Outstanding Issues/Ideas for Further Work

    • Establishing actual communication between the laptop and Command Module would be more efficient. It is unclear whether or not serial port communication is possible.
    • With the current motion detection script, the robot has no way of knowing it has skipped a room in the event that the door is closed.
    • If lights are on in a room, the robot will likely detect a window. It is also prone to accidentally picking up reflections from the metal on desks.
    • The webcam is in a rather precarious position on top of the robot, especially considering how hard the robot rams closed doors.