====== iCub lego demo ======

===== Installation =====

Here are the steps taken to set up the iCub to grasp Lego pieces and, hopefully, put them together.

===== Installing and setting up ROS in Debian sid =====

Take a look at the installation instructions on the [[http://www.ros.org|ROS website]] for more details.
+ | |||
+ | * Install python-setuptools, | ||
+ | * Install rosinstall: | ||
+ | |||
+ | sudo easy_install -U rosinstall | ||
+ | |||
+ | * Create a directory where you want to store ros: | ||
+ | |||
+ | mkdir -p local/ | ||
+ | cd local/src/ | ||
+ | |||
+ | * As of 10.08.2010 is better (more stable, sufficiently new) to use cturtle ros distribution. | ||
+ | |||
+ | rosinstall ros http:// | ||
+ | |||
+ | * In the last step rosinstall will try to download and compile cturtle. If you get some errors is because you may be missing some dependencies. Look at the error messages and find out the name of the packages that you have to install in debian, then try rosintall command again until you don't get any errors. | ||
+ | * Create a file called .bashrc.ros Put inside the following: | ||
+ | |||
+ | export ROS_ROOT=${HOME}/ | ||
+ | export PATH=${ROS_ROOT}/ | ||
+ | export PYTHONPATH=${ROS_ROOT}/ | ||
+ | export OCTAVE_PATH=${ROS_UP}/ | ||
+ | #if [ ! " | ||
+ | if [ ! " | ||
+ | export ROS_PACKAGE_PATH=${HOME}/ | ||
+ | #export ROS_STACK_PATH=${ROS_ROOT}: | ||
+ | | ||
+ | #source `rosstack find ias_semantic_mapping`/ | ||
+ | | ||
+ | NUM_CPUS=`cat / | ||
+ | let " | ||
+ | export ROS_PARALLEL_JOBS=" | ||
+ | | ||
+ | export ROS_LANG_DISABLE=roslisp: | ||
+ | | ||
+ | export ROS_IP=`ip addr show \`/ | ||
+ | #export ROS_IP=192.168.137.2 | ||
+ | # | ||
+ | #if [[ " | ||
+ | | ||
+ | alias kcart_left=" | ||
+ | alias kcart_right=" | ||
+ | | ||
+ | . ${ROS_ROOT}/ | ||
+ | |||
  * Adjust ROS_MASTER_URI to point to the computer where the rosmaster you want to use is running (see the example after this list).
  * Inside your .bashrc add the following line:

  alias env_ros='source ~/.bashrc.ros'

  * Logout and login again to reload your .bashrc.
  * Run env_ros.
  * Now you can use the ros utilities.
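
For example, if the rosmaster runs on the machine lars (the host used for the demo below), the corresponding line in .bashrc.ros would be (11311 is the default rosmaster port):

  export ROS_MASTER_URI=http://lars:11311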
===== Installing extra ros packages =====

In the ros directory there is a subdirectory called stacks; this is where you can put extra packages. You just have to download these packages somehow and put them there.

Example: Gstreamer video acquisition:

  * Search for gstreamer on the ROS wiki; the gscam package lives in the brown repository.
  * cd into the stacks directory:

  cd $ROS_ROOT
  cd ../stacks

  * Download the code there:

  svn co http://   # URL of the brown subversion repository

  * As of August 2010, gscam needs to be patched to export the frame id, so that tf and rviz can be used to visualize anything with respect to gscam. The patch works for revision number 822 of the brown repository. Download the patch attached to this page and apply it:

  roscd
  cd ../
  cat /path/to/gscam.patch | patch -p0   # patch path and -p level assumed

  * Now you can compile the code:

  rosmake gscam

  * Rosmake deals with ros dependencies: it will automatically compile any other ros packages that gscam needs.
  * Rosmake is pretty slow at checking dependencies; once they have been built, you can rebuild just the package itself with plain make (see the sketch below).
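
A minimal sketch of that shortcut, assuming gscam's dependencies were already built once by rosmake:

  roscd gscam
  make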
+ | |||
+ | |||
+ | ===== Gscam ===== | ||
+ | |||
+ | Captures images from gstreamer video sources and sends them to a ros topic. | ||
+ | |||
+ | * Dependencies: | ||
+ | * Test your webcam: | ||
+ | |||
+ | gst-launch --gst-debug=v4l2: | ||
+ | |||
+ | * You should see a windows with the webcam image. Close this program. | ||
+ | |||
+ | * Start using gscam: | ||
+ | |||
+ | export GSCAM_CONFIG=" | ||
+ | rosrun gscam gscam | ||
+ | |||
+ | * To look at the image: | ||
+ | |||
+ | rosmake image_view | ||
+ | rosrun image_view image_view image: | ||
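
To confirm that frames are actually flowing before debugging anything further, check the publication rate of the topic (same topic name as above):

  rostopic hz /gscam/image_raw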
+ | |||
+ | ==== Improving image quality with some cameras ==== | ||
+ | |||
+ | With a Logitech webcam C600 one can get better image quality (less noisy) setting the video mode to YUV. In gst-launch: | ||
+ | |||
+ | gst-launch --gst-debug=v4l2: | ||
+ | |||
+ | For gscam: | ||
+ | |||
+ | export GSCAM_CONFIG=" | ||
+ | |||
+ | the last conversion is necessary because gscam only takes rgb images. | ||
+ | Please carefully notice that for the gscam GSCAM_CONFIG export there are no back-slashes for the format " | ||
+ | |||
+ | One can use gst-inspect to check the capabilities of the different gstreamer filters. | ||
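
For instance, to list the pads and formats offered by the standard 0.10 elements used above:

  gst-inspect v4l2src
  gst-inspect ffmpegcolorspace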
+ | |||
+ | |||
+ | |||
+ | ===== Camera calibration ===== | ||
+ | |||
+ | * Compile and calibrate camera. (you need to be running gscam before) | ||
+ | |||
+ | rosmake camera_calibration | ||
+ | rosrun camera_calibration cameracalibrator.py --size 5x4 --square 0.02464 image: | ||
+ | |||
+ | * Last command uses the calibration board that comes with the pr2 robot. | ||
+ | * Move the board until the calibration button activates, try to move slow so that the calibrator don't chose any blurred image, also move the board to the corners of the image, this is where the distortion is more evident. | ||
+ | * Save the calibration. This will create a file in /tmp with the calibration parameters inside. | ||
+ | * Commit the calibration. This will create a file called camera_parameters.txt one directory up where gscam is running. | ||
+ | * Run the image distorter: | ||
+ | |||
+ | export ROS_NAMESPACE=gscam | ||
+ | rosrun image_proc image_proc | ||
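
image_proc subscribes to image_raw in the current namespace and publishes the processed streams next to it; you can check that they appeared (the output names are the standard image_proc topics):

  rostopic list | grep gscam   # expect image_mono, image_color, image_rect and image_rect_color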
+ | |||
+ | * To view the results: | ||
+ | |||
+ | rosrun image_view image_view image: | ||
+ | |||
+ | ==== Calibration manually ==== | ||
+ | |||
+ | The algorithm that selects the pictures in camera_calibration is not perfect, and it prefers to select blur images over sharp, also most of the times it doesn' | ||
+ | |||
+ | * Run image_view and take the pictures that you consider adequate for calibration using the left clicking. | ||
+ | |||
+ | cd /tmp/ | ||
+ | rosrun image_view image_view image: | ||
+ | |||
+ | Images are store in the current directory. | ||
+ | |||
+ | * Run the calibration from disk images: | ||
+ | |||
+ | rosrun camera_calibration camera_calibrate_from_disk.py --size 8x6 --square 0.0247881 / | ||
+ | |||
+ | * This will print the parameters to screen. Replace them in one camera_calibration.txt example file. (look in gscam directory). | ||
+ | |||
+ | ===== Markers tracking ===== | ||
+ | |||
+ | ==== Artoolkit in ros ==== | ||
+ | |||
+ | roscd; cd ../stacks | ||
+ | git clone http:// | ||
+ | |||
+ | * For ar_pose we will need to apply a patch. Download the {{: | ||
+ | |||
+ | cd ccny-ros-pkg | ||
+ | cat directory_to_patch/ | ||
+ | |||
+ | * Lets compile ar_pose. | ||
+ | |||
+ | rosmake artoolkit | ||
+ | rosmake ar_pose | ||
+ | |||
+ | * Add the markers that you want to detect in the file data/ | ||
+ | |||
+ | 4x4_23 | ||
+ | data/ | ||
+ | 25.0 | ||
+ | 0.0 0.0 | ||
+ | |||
+ | First is the name of the marker, second is the file of the marker, then the size in mm, then the relative position? | ||
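
Note that in the ARToolKit object-data format the file normally starts with the number of entries; a complete two-marker file might look like this (the second marker name and both pattern paths are only illustrative):

  #the number of patterns to be recognized
  2

  4x4_23
  data/4x4_23.patt
  25.0
  0.0 0.0

  4x4_89
  data/4x4_89.patt
  25.0
  0.0 0.0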
+ | |||
+ | * run ar_pose: | ||
+ | |||
+ | rosrun ar_pose ar_multi / | ||
+ | |||
+ | * To look at the markers detected: | ||
+ | |||
+ | rostopic echo / | ||
+ | |||
+ | ===== Getting images from yarp to ros ===== | ||
+ | |||
+ | * Get tum-ros-internal repository and compile yarp2 and yarp_to_ros_image: | ||
+ | |||
+ | roscd; cd ../stacks | ||
+ | git clone gitosis@git9.in.tum.de: | ||
+ | rosmake yarp2 | ||
+ | rosmake yarp_to_ros_image | ||
+ | |||
+ | * Running yarp_to_ros_image package: | ||
+ | |||
+ | rosrun yarp_to_ros_image yarp_to_ros_image | ||
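
Once it is running, you can check which image topics it opened on the ros side:

  rostopic list | grep yarp_to_ros_image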
+ | |||
+ | ===== Hand/ | ||
+ | |||
+ | ==== Arm control system ==== | ||
+ | |||
+ | We use a closed loop inverse kinematics system which integrates a vector field in the task space part of the controller. We use two controllers, | ||
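
As a sketch of the scheme (notation assumed here, not taken from the oid5 code): the vector field evaluated at the current hand pose gives a desired task-space velocity, and the closed-loop inverse kinematics turns it into joint velocities that are then integrated:

  \dot{x}_d = f(x(q)), \qquad \dot{q} = J^{+}(q)\,(\dot{x}_d + K e), \qquad e = x_d - x(q)

where J+ is the pseudoinverse of the arm Jacobian, K a positive gain matrix, and e the task-space error; integrating the joint velocities yields the commanded joint trajectory.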
+ | |||
+ | * Install and configure mercurial: | ||
+ | |||
+ | sudo apt-get install mercurial kdiff3 | ||
+ | |||
+ | * Create a file in your home called .hgrc and put the following inside it: | ||
+ | |||
+ | [ui] | ||
+ | username = Name Lastname < | ||
+ | merge = kdiff3 | ||
+ | [extensions] | ||
+ | hgk= | ||
+ | |||
+ | * Download the oid5 repository: | ||
+ | |||
+ | mkdir -p local/ | ||
+ | cd local/ | ||
+ | hg clone ssh:// | ||
+ | |||
+ | * Install dependencies: | ||
+ | * Compile and install yarp and orocos-kdl | ||
+ | |||
+ | cd tools/yarp | ||
+ | make -j20 | ||
+ | make install | ||
+ | cd ../ | ||
+ | make -j20 | ||
+ | make install | ||
+ | cd | ||
+ | cd local/DIR | ||
+ | xstow yarp | ||
+ | xstow kdl | ||
+ | |||
+ | * Install the dependencies for the arm motion controller: python-numpy python-vtk python-qt4 rlwrap python-gtk2 python-gtkglext1 | ||
+ | * Run it in simulation mode: | ||
+ | |||
+ | cd local/ | ||
+ | ./ | ||
+ | |||
+ | * It should run a simulation windows with one iCub arm and when you press " | ||
+ | * To stop the simulation: | ||
+ | |||
+ | ./kill.sh | ||
+ | |||
+ | |||
+ | ===== Setting and configuring everything for the iCub ===== | ||
+ | |||
+ | ==== Calibrate the icub cameras ==== | ||
+ | |||
+ | * Get icub images in ros: | ||
+ | |||
+ | roscd yarp_to_ros_image | ||
+ | rosrun yarp_to_ros_image yarp_to_ros_image | ||
+ | yarp connect / | ||
+ | |||
+ | * Calibrate icub cameras (Look above). Please use the manual procedure, and please use at least 80 non-blurred pictures. Be sure to update the camera_calibration.txt file with the printed values, then mv this file to the yarp_to_ros_image directory. | ||
+ | |||
+ | ==== Detecting markers ==== | ||
+ | |||
+ | * Run yarp to ros image module: | ||
+ | |||
+ | roscd yarp_to_ros_image | ||
+ | rosrun yarp_to_ros_image yarp_to_ros_image | ||
+ | |||
+ | This will use the camera_calibration.txt file that is in the yarp_to_ros_image directory. | ||
+ | |||
+ | * Connect icub camera image to yarp_to_ros_image module: | ||
+ | |||
+ | yarp connect / | ||
+ | |||
+ | * Run image_proc to undistort the image | ||
+ | |||
+ | export ROS_NAMESPACE=yarp_to_ros_image | ||
+ | rosrun image_proc image_proc image_raw: | ||
+ | |||
+ | * Run ar_pose for markers detection: | ||
+ | |||
+ | rosrun ar_pose ar_multi / | ||
+ | |||
+ | * Markers detected can be read with: | ||
+ | |||
+ | rostopic echo / | ||
+ | |||
+ | * Start rviz and add the tf module. | ||
+ | * Put some markers in front of the camera, so that they get detected. In this moment rviz will recognize the markers frames and the camera frames. Set the Fixed and target frame to /r_eye3. Then you will see the frames of the markers in rviz. | ||
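
rviz comes with the ros visualization stack; build and start it with:

  rosmake rviz
  rosrun rviz rviz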
+ | |||
+ | ===== Running the demo ===== | ||
+ | |||
+ | * Start the iCub, power supplies, icub laptop, cpu/motors switches, in the icub laptop run ./ | ||
+ | * Configure all the related computers to look for yarp server in the icub laptop and for the roscore server in lars. Check that a roscore server is running in lars. All computers must be in the iCub network. | ||
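
A minimal sketch of that configuration on each computer, assuming the icub laptop answers to the hostname icub-laptop and the yarp name server uses yarp's default port 10000 (hostname and port are assumptions):

  yarp conf icub-laptop 10000   # point yarp at the name server on the icub laptop

and set ROS_MASTER_URI to http://lars:11311, as shown in the installation section.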
+ | |||
+ | ==== Markers detection ==== | ||
+ | |||
+ | roscd icub_bringup | ||
+ | roslaunch icub_marker.launch | ||
+ | |||
+ | * In another console run: | ||
+ | |||
+ | roscd tf_yarp_bridge | ||
+ | ./ | ||
+ | |||
+ | ==== Lego gaze follower ==== | ||
+ | |||
+ | * In another console: | ||
+ | |||
+ | roscd tf_yarp_bridge | ||
+ | ./ | ||
+ | |||
+ | * In another console: | ||
+ | |||
+ | cd ~/ | ||
+ | ./ | ||
+ | |||
+ | cd ~/ | ||
+ | ./ | ||
+ | |||
+ | ==== Arm motion controllers ==== | ||
+ | |||
+ | cd ~/ | ||
+ | ./ | ||
+ | ./ | ||
+ | |||
+ | ==== Hand graspers ==== | ||
+ | |||
+ | cd ~/ | ||
+ | ./ | ||
+ | |||
+ | ==== Lego state machine ==== | ||
+ | |||
+ | cd ~/ | ||
+ | ./ | ||
+ | |||
+ | |||
+ | |||