
Modeling self-recognition in the mirror on the Nao humanoid robot.


matejhof/nao-mirror-self-recog


Tutorial for working with Nao in Gazebo 9 and ROS Melodic for self-recognition in the mirror. For additional info you can email me at [email protected]


After installing the required software (Gazebo 9, ROS Melodic, the SDK simulator and the NAOqi Python SDK from SoftBank Robotics; Matej can give you the username and password) along with Boost 1.55.0, clone this repository to your computer.

  1. Install catkin tools (see this site).
  2. Install the explauto library (see this site) - a library for controlling Nao with learned inverse models (the workspace will not build without it due to dependencies).
  3. Go to catkin_ws/.
  4. Build the workspace: catkin build
  5. Source the setup files - this needs to be done in every new terminal you open for launching Gazebo and ROS, or you can add the lines to ~/.bashrc once:
     source /opt/ros/melodic/setup.sh
     source ~/code-nao-simulation/gazebo9/catkin_ws/devel/setup.bash
  6. Go to misc/.
  7. Launch the Gazebo world with Nao in it: bash launch-naoqi-highres.sh

Changes to nao_skin.xacro - for both versions of Nao (highres and lowres), go to nao_skin.xacro, scroll all the way down to the specification of controllers and plugins, and change the filename to the path of the libcontact.so library on your computer. Without this change the artificial skin will not work.

After these steps a Gazebo GUI with the spawned Nao should pop up. The simulation starts paused, and Nao will have only a torso, arms and a head, with the camera used for mirroring right in front of him (the white box).
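Besides pressing play in the Gazebo GUI, the simulation can also be unpaused from a script through the standard gazebo_ros service. A minimal sketch, assuming the default /gazebo namespace:

```python
#!/usr/bin/env python
# Unpause the Gazebo simulation programmatically. Uses the standard
# gazebo_ros service; assumes the default /gazebo namespace.
import rospy
from std_srvs.srv import Empty

rospy.init_node("unpause_sim")
rospy.wait_for_service("/gazebo/unpause_physics")
unpause = rospy.ServiceProxy("/gazebo/unpause_physics", Empty)
unpause()
```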

Changes to Nao

  1. Fingers removed due to simulation issues with them; they are replaced with one large finger with no joints.
  2. Legs removed - they were redundant for our purposes.
  3. Added a camera for creating a 'mirror'.
  4. Changed the color of the casing.
  5. Fixed the torso in space so Nao doesn't fall to the ground and remains stable.

Versions of Nao's skin

  1. High resolution - launched with the aforementioned command bash launch-naoqi-highres.sh. Described in the .xacro files in catkin_ws/src/nao_robot/nao_description/urdf/naoSkin2_generated_urdf/

  2. Low resolution - launched with the command bash launch-naoqi-lowres.sh. Described in the .xacro files in catkin_ws/src/nao_robot/nao_description/urdf/naoSkin_generated_urdf/

Changes to Nao can be made throughout these files, mainly in nao_robot.xacro. For example, legs can be added back by uncommenting the include for nao_legs.xacro along with the legs transmissions in naoTransmission.xacro; also check the launch files to see whether the controllers for the legs are included.

Controllers for the legs are included in nao_control_position.launch but not in nao_control_position_skin.launch.
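To verify which controllers were actually started by the launch file you used, you can query the controller manager from Python. A minimal sketch using the standard controller_manager service; the service name is an assumption, so check rosservice list if it lives under another namespace:

```python
#!/usr/bin/env python
# List the controllers loaded by the controller manager, e.g. to check
# whether the leg controllers were started by your chosen launch file.
# The service name assumes the default namespace; adjust it if
# `rosservice list` shows the controller manager elsewhere.
import rospy
from controller_manager_msgs.srv import ListControllers

rospy.init_node("controller_check")
rospy.wait_for_service("controller_manager/list_controllers")
list_controllers = rospy.ServiceProxy("controller_manager/list_controllers",
                                      ListControllers)
for c in list_controllers().controller:
    print("%s (%s): %s" % (c.name, c.type, c.state))
```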

Controlling Nao during the simulation

This is done through Python scripts. One of the simplest examples is catkin_ws/src/my_executables/scripts/camera_listener.py, which subscribes to the camera topic and flips its image along the y axis so that it looks like a mirror image. Once the simulation is running, unpause it, open a new terminal and run: rosrun my_executables camera_listener.py. It should capture images and save them.
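For orientation, the core of such a script looks roughly like the sketch below. This is not a copy of camera_listener.py; the topic name and output path are assumptions, so take the real camera topic from rostopic list.

```python
#!/usr/bin/env python
# Minimal mirror-camera listener in the spirit of camera_listener.py.
# The topic name and output path are assumptions, not the script's
# actual values -- check `rostopic list` for the real camera topic.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def callback(msg):
    # Convert the ROS Image message to an OpenCV BGR image.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    # Flip along the y axis (flip code 1 = horizontal) to get a mirror image.
    mirrored = cv2.flip(frame, 1)
    cv2.imwrite("/tmp/mirror_%06d.png" % msg.header.seq, mirrored)

if __name__ == "__main__":
    rospy.init_node("camera_listener_sketch")
    rospy.Subscriber("/mirror_camera/image_raw", Image, callback)  # assumed topic
    rospy.spin()
```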

Sensor data and joint commands are communicated through ROS topics that a node can subscribe to (for data) or publish to (for controlling the robot). The list of ROS topics can be displayed with the terminal command rostopic list. Any topic that is listed can be published and subscribed to.
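Publishing joint commands follows the same pattern. As a sketch, a single joint can be driven by publishing a std_msgs/Float64 to its position controller's command topic; the topic name below is an assumption, so take the real one from rostopic list:

```python
#!/usr/bin/env python
# Drive one joint by publishing to its position controller's command
# topic. The topic name is an assumption -- the real one depends on the
# controllers loaded by the launch file; take it from `rostopic list`.
import rospy
from std_msgs.msg import Float64

rospy.init_node("head_yaw_commander")
pub = rospy.Publisher("/nao/HeadYaw_position_controller/command",
                      Float64, queue_size=1)
rospy.sleep(1.0)           # give the publisher time to connect
pub.publish(Float64(0.5))  # target position in radians
rospy.sleep(0.5)           # let the message go out before exiting
```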

A more complex example of manipulating the robot is head_babbling.py in the same folder - this example loads trained explauto inverse models and works with Naoenvironment.
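The explauto workflow behind such a script follows the library's standard pattern: train a sensorimotor model on (motor, sensory) pairs, then ask it for inverse predictions. A minimal sketch with explauto's built-in toy environment (head_babbling.py itself uses Naoenvironment and pre-trained models instead):

```python
# Generic explauto pattern behind head_babbling.py, shown here with the
# library's built-in simple_arm toy environment; the actual script uses
# Naoenvironment and pre-trained models instead.
from explauto import Environment, SensorimotorModel

env = Environment.from_configuration('simple_arm', 'mid_dimensional')
model = SensorimotorModel.from_configuration(env.conf,
                                             'nearest_neighbor', 'default')

# Train the model on random motor babbling.
for m in env.random_motors(n=500):
    s = env.compute_sensori_effect(m)
    model.update(m, s)

# Inverse model: motor command predicted to reach a sensory goal.
s_goal = [0.7, 0.5]
m = model.inverse_prediction(s_goal)
```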
