Some tips on ROS #10
Thanks Martin, I'm not using a ROS camera driver. How do I align the images with the camera feed from the NVIDIA jetson-inference deep learning code?
Yes, but why not? It's really useful. For example, you can use the camera image in other ROS nodes, display it in RViz and so on.
Looking at your code, you currently do the following:

```python
img, width, height = camera.CaptureRGBA()
detections = net.Detect(img, width, height)
```

The first line retrieves the image in GPU memory. If you're lucky, you could do it like this instead:

```python
from cv_bridge import CvBridge

# [...]

class VisionNode(object):
    # [...]

    def callback(self, color_image, depth_image, camera_info):
        # convert the sensor_msgs/Image into a numpy array (8-bit RGBA)
        cv_bridge = CvBridge()
        img_cv = cv_bridge.imgmsg_to_cv2(color_image, 'rgba8')
        detections = net.Detect(img_cv, color_image.width, color_image.height)
```
The converted image is a plain numpy array:

```python
array([[[ 37,  45,  34, 255],
        [ 40,  48,  37, 255],
        ...,
        [122, 117, 125, 255]],
       ...,
       [[122, 124, 134, 255],
        [127, 129, 139, 255],
        ...,
        [ 40,  48,  49, 255]]], dtype=uint8)
```

... and its type is `numpy.ndarray`.
Cool project! I watched your video and really enjoyed it. While watching, I realized that I've seen several of your other videos over the years and liked them a lot.
Your skills in many areas surpass my own by far. However, since I've been developing ROS software for a living for 10 years, this is one area where I figured I could help, and I couldn't resist giving you some unsolicited advice, so here goes.
Aligning depth and color images

Instead of this hackery (see `ReallyUsefulRobot/Vision/camera11_depth.py`, lines 100 to 101 at commit `8775ac2`), you should either use `pyrealsense2.rs2_project_color_pixel_to_depth_pixel`, or, perhaps better yet, don't use the `pyrealsense2` library at all; instead, use the `realsense2_camera` ROS driver, launched something like the sketch below (adjust width and height as needed).
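A sketch of the launch file; I haven't run this exact snippet, and the resolution values are just placeholders:

```xml
<launch>
  <include file="$(find realsense2_camera)/launch/rs_camera.launch">
    <!-- prefix for all topics and frames, e.g. /d435/color/image_raw -->
    <arg name="camera" value="d435"/>
    <!-- publish the depth image re-projected into the color frame -->
    <arg name="align_depth" value="true"/>
    <!-- adjust resolution to taste -->
    <arg name="color_width" value="1280"/>
    <arg name="color_height" value="720"/>
    <arg name="depth_width" value="1280"/>
    <arg name="depth_height" value="720"/>
  </include>
</launch>
```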
From the realsense-ros README: with depth alignment enabled, the driver publishes the depth image aligned to the color frame on `/camera/aligned_depth_to_color/image_raw`. Since we set the `camera` parameter to `d435`, this will be `/d435/aligned_depth_to_color/image_raw`
instead. Then you can use a `message_filters.TimeSynchronizer` to subscribe to the `/d435/aligned_depth_to_color/image_raw` and `/d435/color/image_raw` topics, and they will be aligned pixel by pixel; in other words, the depth image will be transformed to look as if the depth camera were exactly at the RGB camera's position and also as if it had the same camera parameters (lens distortion etc.) as the RGB camera.
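A minimal sketch of the subscription side (the topic names assume the `d435` prefix from above, and the callback is the one from the `VisionNode` earlier):

```python
import message_filters
import rospy
from sensor_msgs.msg import CameraInfo, Image

def callback(color_image, depth_image, camera_info):
    pass  # run the detector, look up depths, etc.

rospy.init_node('vision_node')

color_sub = message_filters.Subscriber('/d435/color/image_raw', Image)
depth_sub = message_filters.Subscriber('/d435/aligned_depth_to_color/image_raw', Image)
info_sub = message_filters.Subscriber('/d435/color/camera_info', CameraInfo)

# calls `callback` once per set of messages with matching time stamps
sync = message_filters.TimeSynchronizer([color_sub, depth_sub, info_sub], 10)
sync.registerCallback(callback)

rospy.spin()
```

If the stamps don't match exactly, `message_filters.ApproximateTimeSynchronizer` with a small slop is the usual fallback.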
Removing lens distortion

There should also be two topics called `.../image_rect_raw` and `.../image_rect_color`. These are the rectified topics, i.e. any lens distortion has been removed. You should use those if you're going to do trigonometry like you showed in the video.
Using TF

Regarding the trigonometry calculations, you can make your life much easier by using the `tf2_ros` library. There are some tutorials that are 100% worth going over; TF is one part where ROS really shines. In short, you should:
- Remove the `joint_state_publisher` from your `rur_bringup` launch file.
- Publish a `sensor_msgs/JointState` message on the topic `/joint_states` instead of a `std_msgs/UInt16` as you're doing now (see the sketch after this list). You can leave the `velocity` and `effort` fields empty, just fill in the `name` field with the name of the joint in the URDF and the `position` field with the joint angle in radians.
- The `joint_states` topic can have multiple independent publishers, they don't all need to be in one message.
- The same goes for any joints you add later: publish `joint_states` for them as well.
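A minimal sketch of such a publisher; the joint name and the angle value are placeholders, only the message layout matters:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import JointState

rospy.init_node('arm_joint_state_publisher')
pub = rospy.Publisher('joint_states', JointState, queue_size=10)

rate = rospy.Rate(20)  # Hz
while not rospy.is_shutdown():
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = ['shoulder_joint']  # must match the joint name in the URDF
    msg.position = [0.5]           # measured angle in radians
    # msg.velocity and msg.effort can simply stay empty
    pub.publish(msg)
    rate.sleep()
```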
This will allow the `robot_state_publisher` to pick up the actual joint states from your robot, display them in RViz and (most importantly) publish correct TFs for them.

Now is where the magic happens: you can now automatically transform the center point of your cup into any frame (`base_link`, or your gripper frame, or whatever). Disclaimer: I haven't tested the code below, but it should give you the right ideas:
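Something along these lines; the camera frame name is a guess based on the `d435` prefix, so check your actual TF tree (e.g. with `rosrun rqt_tf_tree rqt_tf_tree`):

```python
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PointStamped with tf2 (the import is required!)
from geometry_msgs.msg import PointStamped

rospy.init_node('cup_frame_demo')

tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

# center of the cup in the camera's optical frame, e.g. computed from
# the pixel coordinates + depth + camera intrinsics
cup = PointStamped()
cup.header.frame_id = 'd435_color_optical_frame'
cup.header.stamp = rospy.Time(0)  # 0 = "use the latest available transform"
cup.point.x = 0.1
cup.point.y = 0.0
cup.point.z = 0.8

# transform into base_link (or the gripper frame, or whatever)
cup_in_base = tf_buffer.transform(cup, 'base_link', rospy.Duration(1.0))
rospy.loginfo('cup in base_link: %s', cup_in_base.point)
```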
Controlling the arm

If you want to go really deep, look at MoveIt. It's not going to be easy, though!
Joint state publisher GUI

Bonus tip: If you use the `joint_state_publisher_gui` in your `display.launch`, something like the sketch below, it will pop up a little GUI where you can play around with the joints in your URDF. Very useful for debugging.
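That is, a single extra node, something like this (replace the plain `joint_state_publisher` with it, otherwise both will publish):

```xml
<node name="joint_state_publisher_gui" pkg="joint_state_publisher_gui"
      type="joint_state_publisher_gui"/>
```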