Team R2D2

From GcatWiki
 
Revision as of 03:03, 2 May 2012

To Do

  • Connect to the iRobot and send commands with MATLAB (via laptop) - Cyrus
  • Decide on a camera - Leland & Corey
  • Subgroup: make outline

Abstract Draft

Imaging

Summary

  • The camera will be mounted in a fixed position - to pan the camera left or right we will turn the robot
    • We will have to decide if we want to create a contraption to raise/lower the camera
  • We need a camera that we can also connect to via Bluetooth... at first this could be wired, but we need a direct USB connection in the final version
  • Good Link

How to connect to IP camera

  1. Connect to the davidson device using WPA Personal with the passphrase
  2. Go to the address http://10.40.181.49/
  3. Log in using our ID and password
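The steps above could be scripted once we know how the camera expects credentials. A minimal sketch, assuming the camera accepts HTTP Basic auth (it may instead use a login form); the user/password values are placeholders, not our real login:

```python
import base64
import urllib.request

# Address from step 2; credentials are hypothetical placeholders.
CAMERA_URL = "http://10.40.181.49/"
USER, PASSWORD = "our_id", "our_password"

def snapshot_request(url=CAMERA_URL, user=USER, password=PASSWORD):
    """Build an HTTP request for the camera with a Basic-auth header attached."""
    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

# urllib.request.urlopen(snapshot_request()) would then fetch the page/image bytes.
```

If the camera turns out to use a session-cookie login instead, we would need `http.cookiejar` and a POST to its login page rather than this header.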

Install OpenCV on Mac

  1. If you don't use Enthought Python, follow the steps here
  2. Otherwise, download the latest OpenCV release for Mac
  3. Install ffmpeg using this tutorial
  4. PYTHON_LIBRARIES=/Library/Frameworks/EPD64.framework/Versions/7.2/Frameworks PYTHON_EXECUTABLE=/Library/Frameworks/EPD64.framework/Versions/Current/bin/python cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_PYTHON_SUPPORT=ON <path to the OpenCV source directory>

Matlab Imaging

OpenCV

  • Has some nice functions that could be used to detect objects
  • OpenCV (Open Source Computer Vision) is a library of programming functions for real time computer vision.
    • "applications of the OpenCV library are Human-Computer Interaction (HCI); Object Identification, Segmentation and Recognition; Face Recognition; Gesture Recognition; Motion Tracking, Ego Motion, Motion Understanding; Structure From Motion (SFM); Stereo and Multi-Camera Calibration and Depth Computation; Mobile Robotics."
  • OpenCV Documentation
  • List of compatible cameras

Below is Python code taken from FLAIL v2

# Uses the old SWIG-based OpenCV Python bindings:
import opencv.adaptors
import opencv.highgui

##################
# Camera Control #
##################
class FlailCam:
  """
  Controls a USB webcam. Uses OpenCV to control the camera, and should
  therefore work with any V4L2-compatible camera (in linux) or just about
  anything (in Mac OS). Webcam support being what it is in Linux, though, it
  may not work at all.
 
  If you find yourself in the it-doesn't-even-begin-to-work boat, take a look
  at flailcam.py from FLAIL 1.0. It has a (terrible, kludgy, slow) workaround
  that sometimes worked better.
 
  The important method (get_image) returns a Python Imaging Library (or PIL)
  object; these are well-documented on the PIL website, at
  <http://www.pythonware.com/library/pil/handbook/image.htm>. You're probably
  most interested in the getpixel, putpixel, and save methods. 
 
  FLAIL 1.0 had a FlailImage class, which wrapped the same PIL image class
  used here; if you'd like an extremely simplified interface, take a look at
  that code.
   
  If you'd prefer an OpenCV object, just take a look at the get_image source
  code, and you'll see what to do.
  """
  def __init__(self, index = 0):
    """
    Connect to a camera. OpenCV (which we use for camera control) numbers
    these starting from 0; the index parameter tells which camera to use. In
    Linux, an index of X implies the device /dev/videoX.
    """
    self.cap = opencv.highgui.cvCreateCameraCapture(index)
 
  def get_image(self):
    """
    Take a picture. Return it as a PIL image. 
    """
    return opencv.adaptors.Ipl2PIL(opencv.highgui.cvQueryFrame(self.cap))

Issues

  • MATLAB approach:
    • http://www.mathworks.com/matlabcentral/newsreader/view_thread/154601
    • Pros: it's been done before (though maybe not without the acquisition toolbox, but I think it's possible). It looks very straightforward.
    • Cons: the acquisition toolbox costs a lot of money. Couldn't find anything about using newer versions of MATLAB to do this. We would have to use a low-res webcam.

Possible Cameras

SmartPhone Camera Vision

Navigation

Notes

Jack and Duke met on 4/20 to discuss navigation. The consensus was that our first goal is to figure out how to design a virtual map of a space and write commands to get the robot to a target location in that space. Once we do that, we will work on mapping out chambers, hopefully using the robot to gather data on the shape of our chamber's floor (which one are we going to do?) and turning that data into a map. We also discussed the possibility of using a traveling salesman problem model to get the robot to our target locations as quickly as possible.
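The traveling-salesman idea above can be sketched with a simple nearest-neighbor heuristic: given target locations on the virtual map, repeatedly drive to the closest unvisited target. This is fast and easy to implement, though not guaranteed optimal; the coordinates and function name here are hypothetical, not part of any existing code.

```python
from math import hypot

def visit_order(start, targets):
    """Order targets by repeatedly moving to the nearest unvisited one.

    start:   (x, y) starting position of the robot
    targets: list of (x, y) target locations on the virtual map
    """
    order, here, remaining = [], start, list(targets)
    while remaining:
        # Pick the unvisited target closest to the current position.
        nxt = min(remaining, key=lambda p: hypot(p[0] - here[0], p[1] - here[1]))
        remaining.remove(nxt)
        order.append(nxt)
        here = nxt
    return order

# Example: robot at the origin, three targets.
print(visit_order((0, 0), [(5, 5), (1, 0), (1, 1)]))  # → [(1, 0), (1, 1), (5, 5)]
```

An exact traveling-salesman solution would need to consider all orderings (or use dynamic programming), but for a handful of chamber targets this greedy route is likely good enough to start with.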