==To Do==
*command module navigation (Daniel and Matt)
*image processing (Steph, Evan, Jonah)
*decide on a camera
*abstract
**https://docs.google.com/document/d/1YrS9Ua821oJdYNquYGewYql7AsK0MqUhzOm-0q8JxM8/edit
*poster
**[[File:TeamCommando.ppt]]
 
==Navigation==
 
==Image Processing==

*Camera
**Logitech Orbit (on a stick)
*Method
**attach a laptop to the iRobot
**receive a signal from the robot via the Command Module serial port when it has arrived in a room => MATLAB
**webcam pans while RoboRealm scans for magenta (see the sketch below this list)
***MATLAB checks whether RoboRealm has found magenta at each degree of the pan
****if found, it places a one in the vector entry corresponding to that room
*RoboRealm
**Color to detect: magenta?
**attach the colored marker to the bottom of the window so it can only be detected when the window is open
*What We Have So Far:
**a RoboRealm module string that finds magenta blobs and controls the Orbit
**a MATLAB script that opens RoboRealm, controls the camera, and reads back the magenta detection info
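Below is a minimal MATLAB sketch of the pan-and-scan step described under Method. It assumes RoboRealm is already running our magenta pipeline and that <code>rr</code> is an open handle to the RoboRealm API connection; <code>setPanAngle</code>, <code>magentaVisible</code>, and the variable names they use are placeholders for whatever the Orbit-control and blob modules actually expose, not confirmed API names.

<pre>
% scanRoom.m -- pan-and-scan sketch: set roomVector(roomIndex) to 1 if
% magenta is seen anywhere in a 180-degree sweep.  The RoboRealm calls are
% placeholders; swap in the real method/variable names from the MATLAB
% sample that ships with the RoboRealm API.
function roomVector = scanRoom(rr, roomVector, roomIndex)
    found = 0;
    for angle = 0:179                      % one check per degree of pan
        setPanAngle(rr, angle);            % placeholder: drive the Orbit pan
        pause(0.05);                       % let the camera settle on a new frame
        if magentaVisible(rr)
            found = 1;
            break;                         % one detection is enough for this room
        end
    end
    roomVector(roomIndex) = found;         % 1 = magenta seen = window open
end

function setPanAngle(rr, angle)
    % Placeholder: the Orbit module can be driven through a RoboRealm
    % variable; the variable name here is an assumption to verify.
    rr.SetVariable('pan_position', num2str(angle));
end

function tf = magentaVisible(rr)
    % Placeholder: read a blob result set by the magenta pipeline and treat
    % any nonzero value as a detection; the variable name is an assumption.
    tf = str2double(rr.GetVariable('BLOB_COUNT')) > 0;
end
</pre>

Breaking out of the sweep on the first hit keeps the per-room scan fast; if lighting turns out to be flaky, we could instead require hits at two or more consecutive angles before marking the room.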


*To-Do:
**Get magenta tape
**test the Orbit under different lighting conditions
**look into more filter options
***sunglasses seemed like an option
***Rob McSwain has been emailed about video filters
**Figure out exactly which hue/intensity values in RoboRealm correspond to the magenta marker so that extraneous magenta objects are not picked up (see the hue sketch after this list)
**Add code to the MATLAB script to interpret the vector once the Roomba has been everywhere (see the serial sketch after this list)
**Test/figure out the serial-port pings from the Command Module to MATLAB
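On the magenta color question: magenta is a non-spectral color (strong red and blue, little green), so the useful handle in RoboRealm is its hue band rather than a single frequency. A quick MATLAB check, assuming the tape is close to pure magenta RGB (255, 0, 255), shows where that lands in HSV; how MATLAB's 0-1 hue maps onto RoboRealm's threshold sliders still needs to be verified against the actual module.

<pre>
% Where does magenta sit in hue?  rgb2hsv returns hue in [0,1]; multiply by
% 360 for degrees.  Pure magenta comes out at 300 degrees, full saturation.
magentaRGB = [255 0 255] / 255;      % assumed tape color -- sample the real tape
hsvVals = rgb2hsv(magentaRGB);
fprintf('hue = %.0f deg, sat = %.2f, val = %.2f\n', ...
        hsvVals(1)*360, hsvVals(2), hsvVals(3));
% A starting band to tune might be roughly 290-320 degrees of hue with high
% saturation; room lighting will shift both hue and intensity.
</pre>

A better number would come from pointing the Orbit at the actual tape under room lighting and reading the pixel values back, rather than relying on the nominal RGB.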
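For the last two to-do items, here is a sketch of the MATLAB side, assuming the Command Module's "arrived in room" ping comes in as one byte per room over a serial port and that <code>rr</code> is the open RoboRealm handle used by the scanRoom sketch above. The port name, the number of rooms, and the one-byte protocol are all assumptions to settle with the navigation pair (57600 baud is the Create's default serial rate, but worth confirming).

<pre>
% Wait for "arrived in room" pings from the Command Module, scan each room,
% then report which windows were open.  Port, baud, room count, and the
% one-byte ping protocol are assumptions.
% rr = ...;  % open the RoboRealm API connection here (see the API sample)

nRooms = 4;                               % assumed number of rooms on the course
roomVector = zeros(1, nRooms);

s = serial('COM4', 'BaudRate', 57600);    % assumed port; Create default baud rate
fopen(s);
for roomIndex = 1:nRooms
    fread(s, 1, 'uint8');                 % block until the one-byte ping arrives
    roomVector = scanRoom(rr, roomVector, roomIndex);
end
fclose(s);
delete(s);

% Interpret the finished vector: list the rooms where magenta (an open
% window) was seen.
openRooms = find(roomVector);
fprintf('Open window(s) in room(s): %s\n', num2str(openRooms));
</pre>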


*Active Thoughts:
**How should the camera move? It currently pans 180 degrees; the answer may depend on how the navigation group handles entering a room.