[Inquiry] Behavior-1K Demo Collection #1072
Unanswered · Jinseong1126 asked this question in Q&A
Hello OmniGibson Team,
I am Jinseong Jeong from the MIIL Lab at Korea University. First of all, thank you for developing and sharing the excellent OmniGibson simulator. I have been closely following the latest developments, including the integration of cuRobo motion planning in the og-develop branch.
From various discussions on GitHub and Discord, I understand that Behavior-1K demos have either already been collected or are planned. If any demonstrations specifically for OmniGibson already exist, or if you plan to gather more in the near future, I would greatly appreciate guidance on how to access or use them, and on whether there is a way for us to contribute. We also hope to publicly release our own demos alongside a future publication, if that aligns with OmniGibson’s goals.
I am aware that the previously released Behavior-100 dataset—collected in iGibson via VR—was extremely well done, and I believe it would be highly meaningful to attempt a similar approach to gather Behavior-1K demos in OmniGibson.
Currently, our lab is collecting demonstrations via VR teleoperation with TeleMoMa and via cuRobo motion planning. However, we have found it challenging to gather large-scale data efficiently with either method. To assemble a rich dataset at the scale of Behavior-1K, could you share any recommended approaches, tips, or insights into the methods you are using—whether VR teleoperation, motion planning, or anything else—that might help speed up or streamline the collection process?
Once again, thank you for your outstanding work and for making it available to the community. We hope our research and efforts will contribute to the OmniGibson community and to the field of robotics and AI. We look forward to hearing from you.
Sincerely,
Jinseong Jeong
MIIL Lab, Korea University