I want to fine-tune the Octo model, but I don't know how to construct my own RLDS dataset. I have already built a reinforcement learning environment using dm_env, performed simulation with Isaac Gym, and generated an RLDS dataset using envlogger.
import numpy as np

for i in range(FLAGS.num_episodes):
    timestep = env.reset()
    # dm_env episodes end when timestep.last() is True
    while not timestep.last():
        # TODO: HOW TO GENERATE ACTION
        action = np.random.uniform(-3, 3, size=(9,)).astype(np.float32)
        timestep = env.step(action)
        gym_env.render()
gym_env.close()
For the action part, though, how should I control the robotic arm so that it produces grasping motion trajectories? Should the actions come from a policy learned via a reward function, from teleoperation with a keyboard or gamepad, or from motion capture? I'm quite confused about this. Could someone tell me the general approach?
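One common option for collecting demonstrations in simulation is a scripted (waypoint-based) expert policy instead of random actions: move the end-effector through hand-picked waypoints (hover, descend, lift) and record the resulting delta-position actions with envlogger. The sketch below is illustrative only, not the Octo or envlogger API: the function name, the 3-DoF delta-position action space, and the waypoint values are all assumptions (the snippet above uses a 9-dim action space, so the same idea would need adapting there).

```python
import numpy as np

def scripted_grasp_trajectory(start, waypoints, steps_per_segment=20):
    """Return a list of delta-position actions that drives the
    end-effector from `start` through each waypoint in order,
    using simple linear interpolation per segment."""
    pos = np.asarray(start, dtype=np.float32)
    actions = []
    for wp in waypoints:
        wp = np.asarray(wp, dtype=np.float32)
        delta = (wp - pos) / steps_per_segment  # constant step toward waypoint
        actions.extend(delta.copy() for _ in range(steps_per_segment))
        pos = wp
    return actions

# Hypothetical example: 3-DoF end-effector performing hover -> descend -> lift.
actions = scripted_grasp_trajectory(
    start=[0.0, 0.0, 0.5],
    waypoints=[
        [0.3, 0.1, 0.5],  # hover above the object
        [0.3, 0.1, 0.1],  # descend to the grasp pose
        [0.3, 0.1, 0.5],  # lift the object
    ],
)
```

Inside the episode loop you would then feed `actions[t]` to `env.step` instead of the random sample, so envlogger records a meaningful trajectory. Teleoperation (keyboard/gamepad/VR) and motion capture are the usual alternatives when a scripted policy is too brittle for the task.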