iCub Lego demo

Installation

These are the steps needed to get the iCub to grasp Lego pieces and, hopefully, put them together.

Installing and setting up ROS in Debian sid

Take a look at the official documentation. If you are using Debian sid, here is a quick step-by-step guide.

  • Install the python-setuptools, build-essential, python-yaml, subversion, cmake, wget, cmake-curses-gui, lsb-release, libboost-all-dev and liblog4cxx10-dev packages.
  • Install rosinstall:
sudo easy_install -U rosinstall
  • Create a directory where you want to store ros:
mkdir -p local/src/ros
cd local/src/
  • As of 10.08.2010 it is better (more stable, yet sufficiently new) to use the cturtle ROS distribution.
rosinstall ros http://www.ros.org/rosinstalls/cturtle_base.rosinstall
  • In the last step rosinstall will try to download and compile cturtle. If you get errors, you are probably missing some dependencies. Look at the error messages to find out which Debian packages you have to install, then run the rosinstall command again until it finishes without errors.
  • Create a file called .bashrc.ros and put the following inside:
export ROS_ROOT=${HOME}/local/src/ros/ros ; 
export PATH=${ROS_ROOT}/bin:${PATH} ; 
export PYTHONPATH=${ROS_ROOT}/core/roslib/src:${PYTHONPATH} ; 
#if [ ! "$ROS_MASTER_URI" ] ; then export ROS_MASTER_URI=http://leela:11311 ; fi ; 
if [ ! "$ROS_MASTER_URI" ] ; then export ROS_MASTER_URI=http://localhost:11311 ; fi ; 
export ROS_PACKAGE_PATH=${HOME}/local/src/ros/stacks:${HOME}/local/src/repositories/oid5;

NUM_CPUS=`cat /proc/cpuinfo |grep processor |wc -l`
let "PAR_JOBS=${NUM_CPUS}+2"
export ROS_PARALLEL_JOBS="-j${PAR_JOBS}"

export ROS_LANG_DISABLE=roslisp:rosjava

# use the IP address of the default-route interface for ROS:
export ROS_IP=`ip addr show \`/sbin/route -n | awk '/UG/ {print $8}'\`| awk '/inet /{print $2}' |sed -e 's/\/.*//'`

alias kcart_left="`rospack find kbd-cart-cmd`/kbd-cart-cmd -p kbd-left -c /lwr/left"
alias kcart_right="`rospack find kbd-cart-cmd`/kbd-cart-cmd -p kbd-right -c /lwr/right"

. ${ROS_ROOT}/tools/rosbash/rosbash 
  • Adjust ROS_MASTER_URI to point to the computer where the rosmaster you want to use is running.
  • Inside your .bashrc add the following line:
alias env_ros='source ${HOME}/.bashrc.ros'
  • Logout and login again to reload your .bashrc
  • Run env_ros
  • Now you can use the ROS utilities.
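  • A quick sanity check (assuming the cturtle build finished without errors; roscore starts a local master):
env_ros
roscore &
# give roscore a moment to start, then:
rostopic list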

Installing extra ros packages

In the ros directory there is a subdirectory called stacks; this is where you can put extra packages. You just have to download these packages somehow and put them there.

Example: GStreamer video acquisition:

cd $ROS_ROOT
cd ../stacks
  • Download the code there:
svn co http://brown-ros-pkg.googlecode.com/svn/tags/brown-ros-pkg
  • As of August 2010, gscam needs to be patched to export the frame id, in order to be able to use tf and rviz to visualize anything with respect to gscam. The patch works for revision 822 of the brown repository. Download the patch and apply it:
roscd
cd ../stacks/brown-ros-pkg/
cat /directory_to_patch/gscam.patch | patch -p0
  • Now you can compile the code.
rosmake gscam
  • rosmake deals with ROS dependencies: it will automatically compile any other ROS packages that the target depends on.
  • rosmake is pretty slow at checking dependencies, so if you have already compiled the package once and you know that its dependencies are built, just roscd to the package and run make, as in the example below.
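  • For example, to rebuild only gscam after it has been built once:
roscd gscam
make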

Gscam

gscam captures images from GStreamer video sources and publishes them on a ROS topic.

  • Dependencies: libgtk2.0-dev
  • Test your webcam:
gst-launch --gst-debug=v4l2:2 v4l2src device=/dev/video1 ! video/x-raw-rgb,width=800,height=600,framerate=30/1 ! ffmpegcolorspace ! xvimagesink 
  • You should see a window with the webcam image. Close this program.
  • Start using gscam:
export GSCAM_CONFIG="v4l2src device=/dev/video0 ! video/x-raw-rgb,width=800,height=600,framerate=30/1 ! ffmpegcolorspace ! identity name=ros ! fakesink"
rosrun gscam gscam
  • To look at the image:
rosmake image_view
rosrun image_view image_view image:=/gscam/image_raw
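  • To check that gscam is actually publishing, you can measure the frame rate of the topic:
rostopic hz /gscam/image_raw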

Improving image quality with some cameras

With a Logitech C600 webcam one can get better (less noisy) image quality by setting the video mode to YUV. With gst-launch:

gst-launch --gst-debug=v4l2:5 v4l2src device=/dev/video1 ! video/x-raw-yuv,format=\(fourcc\)YUY2,width=800,height=600,framerate=15/1 ! ffmpegcolorspace ! xvimagesink

For gscam:

export GSCAM_CONFIG="v4l2src device=/dev/video1 ! video/x-raw-yuv,width=800,height=600,framerate=15/1,format=(fourcc)YUY2 ! ffmpegcolorspace ! video/x-raw-rgb ! identity name=ros ! fakesink"

The last conversion is necessary because gscam only accepts RGB images. Note carefully that in the GSCAM_CONFIG export there are no backslashes around the “fourcc” part of the format.

One can use gst-inspect to check the capabilities of the different gstreamer filters.
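
For example, to see which formats, sizes and framerates the elements used above support:

gst-inspect v4l2src
gst-inspect ffmpegcolorspace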

Camera calibration

  • Compile the calibration package and calibrate the camera (gscam needs to be running first):
rosmake camera_calibration
rosrun camera_calibration cameracalibrator.py --size 5x4 --square 0.02464 image:=/gscam/image_raw camera:=/gscam
  • The last command assumes the calibration board that comes with the PR2 robot.
  • Move the board around until the calibration button activates. Move slowly, so that the calibrator doesn't pick blurred images, and bring the board to the corners of the image, where the distortion is most evident.
  • Save the calibration. This will create a file in /tmp with the calibration parameters inside.
  • Commit the calibration. This will create a file called camera_parameters.txt one directory above where gscam is running.
  • Run the image distorter:
export ROS_NAMESPACE=gscam
rosrun image_proc image_proc
  • To view the results:
rosrun image_view image_view image:=/gscam/image_rect_color
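  • To compare the rectified stream with the raw one (the __name remapping gives the second viewer its own node name, so both can run at once):
rosrun image_view image_view image:=/gscam/image_raw &
rosrun image_view image_view image:=/gscam/image_rect_color __name:=rect_view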

Calibration manually

The algorithm that selects pictures in camera_calibration is not perfect: it tends to pick blurred images over sharp ones, and most of the time it doesn't use enough pictures. Here we explain how to calibrate using your own pictures.

  • Run image_view and save the pictures you consider adequate for calibration by left-clicking on the image window.
cd /tmp/
rosrun image_view image_view image:=/gscam/image_raw

Images are stored in the current directory, as frame0000.jpg, frame0001.jpg, and so on.

  • Run the calibration from disk images:
rosrun camera_calibration camera_calibrate_from_disk.py --size 8x6 --square 0.0247881 /tmp/frame00*
  • This will print the parameters to the screen. Substitute them into an example camera_calibration.txt file (look in the gscam directory).
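  • After substituting the values, you can verify the new calibration by re-running the rectification pipeline from the section above:
export ROS_NAMESPACE=gscam
rosrun image_proc image_proc
# in another console:
rosrun image_view image_view image:=/gscam/image_rect_color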

Markers tracking

Artoolkit in ros

roscd; cd ../stacks
git clone http://robotics.ccny.cuny.edu/git/ccny-ros-pkg.git
  • For ar_pose we need to apply a patch. Download the patch and apply it (against commit 8c72f6dc30f698749f7302d3b5690af4350e1c5b):
cd ccny-ros-pkg
cat directory_to_patch/ar_pose.patch | patch -p1
  • Let's compile ar_pose:
rosmake artoolkit
rosmake ar_pose
  • Add the markers that you want to detect in the file data/object_data2. For example:
4x4_23
data/4x4/4x4_23.patt
25.0
0.0 0.0

The first line is the marker name, the second is the marker's pattern file, then comes the marker width in mm, and finally the marker center offset (x y, normally 0.0 0.0).

  • Run ar_pose:
rosrun ar_pose ar_multi /usb_cam/camera_info:=/gscam/camera_info /usb_cam/image_raw:=/gscam/image_rect
  • To look at the markers detected:
rostopic echo /visualization_marker
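  • Each detected marker is published as a visualization_msgs/Marker message; to see its fields:
rosmsg show visualization_msgs/Marker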

Getting images from yarp to ros

  • Get the tumros-internal repository and compile yarp2 and yarp_to_ros_image (you need git access to our repository; the tum-ros-pkg on the ROS website is not complete and doesn't include everything that is in tumros-internal):
roscd; cd ../stacks
git clone gitosis@git9.in.tum.de:tumros-internal
rosmake yarp2
rosmake yarp_to_ros_image
  • Run the yarp_to_ros_image node:
rosrun yarp_to_ros_image yarp_to_ros_image
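  • Once a YARP camera port is connected to it (see the iCub sections below), the images should appear as a ROS topic. Assuming the topic name implied by the remappings used later on this page:
rostopic hz /yarp_to_ros_image/image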

Hand/arm/head movement

Arm control system

We use a closed-loop inverse kinematics system which integrates a vector field in the task-space part of the controller. We use two controllers, one for each arm. Both controllers can control the torso, but a high-level arbiter has to make sure that only one of them is in control at any given moment.
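
As a rough sketch (the exact control law in oid5 may differ), a closed-loop inverse kinematics controller computes joint velocities as

  q̇ = J⁺(q) · (ẋ_d + K·e)

where ẋ_d is the desired task-space velocity (here produced by the vector field), e is the task-space pose error, K is a positive-definite gain matrix and J⁺ is the pseudoinverse of the arm Jacobian.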

  • Install and configure mercurial:
sudo apt-get install mercurial kdiff3
  • Create a file called .hgrc in your home directory and put the following inside it:
[ui]
username = Name Lastname <email_address>
merge = kdiff3
[extensions]
hgk=
  • Download the oid5 repository:
mkdir -p local/src/repositories/
cd local/src/repositories
hg clone ssh://repo@nibbler.informatik.tu-muenchen.de//work/repos/hg/oid5
  • Install dependencies: libace-dev swig libgsl0-dev libeigen2-dev sip4
  • Compile and install yarp and orocos-kdl:
cd tools/yarp
make -j20
make install
cd ../../tools/kdl
make -j20
make install
cd 
cd local/DIR
xstow yarp
xstow kdl
  • Install the dependencies for the arm motion controller: python-numpy python-vtk python-qt4 rlwrap python-gtk2 python-gtkglext1
  • Run it in simulation mode:
cd local/src/repositories/oid5/control/motionControl
./system_start.sh -r icub -i right -s -d
  • This should open a simulation window with one iCub arm; when you press “y”, the end effector should move in a straight line to a goal.
  • To stop the simulation:
./kill.sh

Setting up and configuring everything for the iCub

Calibrate the iCub cameras

  • Get iCub images into ROS:
roscd yarp_to_ros_image
rosrun yarp_to_ros_image yarp_to_ros_image
yarp connect /icub/cam/left /yarp_to_ros_image
  • Calibrate the iCub cameras (see the camera calibration sections above). Use the manual procedure, with at least 80 non-blurred pictures. Be sure to update the camera_calibration.txt file with the printed values, then move this file to the yarp_to_ros_image directory.

Detecting markers

  • Run yarp to ros image module:
roscd yarp_to_ros_image
rosrun yarp_to_ros_image yarp_to_ros_image

This will use the camera_calibration.txt file that is in the yarp_to_ros_image directory.

  • Connect the iCub camera image to the yarp_to_ros_image module:
yarp connect /icub/cam/left /yarp_to_ros_image
  • Run image_proc to undistort the image:
export ROS_NAMESPACE=yarp_to_ros_image
rosrun image_proc image_proc image_raw:=image
  • Run ar_pose for marker detection:
rosrun ar_pose ar_multi /usb_cam/camera_info:=/yarp_to_ros_image/camera_info /usb_cam/image_raw:=/yarp_to_ros_image/image_rect
  • Detected markers can be read with:
rostopic echo /ar_multi/visualization_marker
  • Start rviz and add the tf module.
  • Put some markers in front of the camera so that they get detected. At this point rviz will recognize the marker frames and the camera frames. Set the fixed and target frames to /r_eye3; you will then see the marker frames in rviz.
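  • To print the transform between the eye frame and a detected marker numerically (the marker frame name here is an assumption; it depends on the names in your object_data2 file):
rosrun tf tf_echo /r_eye3 /4x4_23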

Running the demo

  • Start the iCub: power supplies, iCub laptop, CPU/motor switches. On the iCub laptop run ./icub_up.sh and wait until the script finishes.
  • Configure all the computers involved to look for the yarp server on the iCub laptop and for the roscore server on lars. Check that a roscore server is running on lars. All computers must be on the iCub network.

Marker detection

roscd icub_bringup
roslaunch icub_marker.launch
  • In another console run:
roscd tf_yarp_bridge
./calib_map.py

Lego gaze follower

  • In another console:
roscd tf_yarp_bridge
./tf_yarp_bridge.py
  • In another console:
cd ~/local/src/repositories/oid5/robots/iCub/gaze_follower/
./start_lego_demo_gaze_follower.sh
cd ~/local/src/repositories/oid5/perception/lego/
./start_lego_demo.sh

Arm motion controllers

cd ~/local/src/repositories/oid5/control/motionControl/
./system_start.sh -r icub -i right
./system_start.sh -r icub -i left

Hand graspers

cd ~/local/src/repositories/oid5/control/handControl
./start_lego_demo_hand_control.sh

Lego state machine

cd ~/local/src/repositories/oid5/control/motionControl/
./lego_demo.py
 