This quickstart guide outlines how to get started with the SubT Tunnel and Urban Circuit Datasets. For additional details, please refer to our ICRA paper about the previous Tunnel circuit.
These datasets were collected by the Army Research Laboratory on behalf of DARPA to support further system development via offline component testing in a relevant environment.
The SubT urban dataset consists of four ROS bag files recorded on our "GVRbot", a modified iRobot PackBot Explorer developed at the Ground Vehicle Systems Center (GVSC), formerly known as TARDEC. This robot is a tracked skid-steer chassis equipped with forward-mounted flippers to assist with stair descent as well as with traversing taller obstacles. The sensor loadout is similar to the Husky described below in the Tunnel section. The robot is equipped with an Ouster OS1-64 LiDAR mounted in an elevated placement to avoid self-occlusion, and with a Multisense SL, which provides a secondary LiDAR system as well as stereo vision and illumination. We have also added an SCD-30 CO2 sensor for the gas artifact. Unfortunately, the thermal IR camera(s) mounted on the robots did not record usable data due to a compression parameter mistake.
The message files used to handle the SCD-30 data can be found in the support code described in the Tunnel section below. Note that the project has been restructured somewhat since the Tunnel circuit to include an additional workspace supporting the Kimera VIO analysis, which is still a work in progress. There are now two workspaces under the subt_reference_datasets project; users should focus on algorithm_ws for now.
In this release, the bag files are compressed with the lz4 option to greatly reduce their size for transmission. In our own testing, decompressing them during playback is too slow, so they should be decompressed by the user via rosbag decompress prior to use.
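As a minimal sketch (the file names below are examples; substitute the bags you actually downloaded), the decompression step might look like:

```shell
# Dry run: print the decompress command for each bag.
# Remove the leading "echo" to actually run rosbag decompress; note that
# rosbag leaves a .orig.bag backup next to each decompressed file.
for f in a_lvl_1.bag a_lvl_2.bag; do
  echo rosbag decompress "$f"
done
```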
Bag file description and links:
Alpha course, upper floor. Configuration 2: https://subt-data.s3.amazonaws.com/SubT_Urban_Ckt/a_lvl_1.bag
Alpha course, lower floor. Configuration 2. Robot goes down the stairs shortly after start: https://subt-data.s3.amazonaws.com/SubT_Urban_Ckt/a_lvl_2.bag
Beta course, upper floor. Configuration 2: https://subt-data.s3.amazonaws.com/SubT_Urban_Ckt/b_lvl_1.bag
Beta course, lower floor. Goes pretty far to get to stairs. Ouster data not available due to equipment failure (DC converter): https://subt-data.s3.amazonaws.com/SubT_Urban_Ckt/b_lvl_2.bag
Alpha course UAV: https://subt-data.s3.amazonaws.com/SubT_Urban_Ckt/a_lvl_1_uav.bag
Beta course UAV: https://subt-data.s3.amazonaws.com/SubT_Urban_Ckt/b_lvl_1_uav.bag
Support data for the analysis from the ICRA paper is being processed and will be available at this link: https://subt-data.s3.amazonaws.com/SubT_Urban_Ckt/support.tgz
Full PointCloud: https://bitbucket.org/subtchallenge/urban_ground_truth/src/master/
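To fetch the Urban Circuit bags in one pass, a hedged sketch (the file list is transcribed from the links above) is:

```shell
# Print the download URL for each Urban Circuit bag listed above;
# pipe the output to "xargs -n1 wget -c" to actually download them.
BASE=https://subt-data.s3.amazonaws.com/SubT_Urban_Ckt
for bag in a_lvl_1 a_lvl_2 b_lvl_1 b_lvl_2 a_lvl_1_uav b_lvl_1_uav; do
  echo "$BASE/$bag.bag"
done
```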
The SubT tunnel dataset consists of three ROS bag files which were recorded on our Clearpath Husky robot during teleoperation within the Safety Research (SR) and Experimental (EX) courses. At present, only Configuration B is represented in the dataset due to technical difficulties involved in the early collection process. The dataset consists of two runs in the SR course and one in the EX course.
Bag files have been compressed by "rosbag compress" to reduce download time. They should still play back fine from their compressed state; however, if excessive stuttering or reduced performance is observed, the user can decompress the bag file to their full size (roughly 2x) with "rosbag decompress".
Bag file data can be retrieved from these links:
README/usage/etc: https://subt-data.s3.amazonaws.com/SubT_Tunnel_Ckt/usage.txt
Support data (ground truth, object annotations):
https://subt-data.s3.amazonaws.com/SubT_Tunnel_Ckt/support.tgz
Bag files:
https://subt-data.s3.amazonaws.com/SubT_Tunnel_Ckt/ex_B_route1.bag (33 GB)
https://subt-data.s3.amazonaws.com/SubT_Tunnel_Ckt/sr_B_route1.bag (19.6 GB)
https://subt-data.s3.amazonaws.com/SubT_Tunnel_Ckt/sr_B_route2.bag (16.3 GB)
Full PointCloud: https://bitbucket.org/subtchallenge/tunnel_ground_truth/src/master/
First, obtain the public catkin workspace via one of the following options:
Option 1: Download the complete docker image. After completing this step, you can skip to the Examples section.
docker pull acschang/subt_reference_datasets:urban
Option 2: Download the workspace. After completing this step, continue with either a docker or native installation.
git clone [email protected]:subtchallenge/subt_reference_datasets.git
Option 1: Build the docker image: clone workspace inside of image
Note: Downloads the workspace and requires working exclusively within the docker container.
cd subt_reference_datasets/docker
./build.bash subt_reference_datasets_deploy/
./run.bash subt_reference_datasets_deploy/
Option 2: Build the docker image: mount workspace inside of image:
Note: Allows one to modify files outside of the docker container for use in the docker container.
cd subt_reference_datasets/docker
./build.bash subt_reference_datasets_devel/
./run.bash subt_reference_datasets_devel/ ~/subt_reference_datasets/
cd other/subt_reference_datasets/
Then follow the Native Installation instructions below.
Note: YOUR_ROS_CATKIN_WORKSPACE is typically /opt/ros/melodic if you are not extending another workspace.
cd subt_reference_datasets
wstool update -t analysis_ws/src
rosdep install -y -r --from-paths analysis_ws/src --ignore-src --rosdistro melodic
cd analysis_ws
catkin init
catkin config --extend YOUR_ROS_CATKIN_WORKSPACE --merge-devel --cmake-args -DCMAKE_BUILD_TYPE=Release
catkin build
cd subt_reference_datasets
wstool update -t algorithm_ws/src
rosdep install -y -r --from-paths algorithm_ws/src --ignore-src --rosdistro melodic
cd algorithm_ws
catkin init
catkin config --extend ../analysis_ws/devel --merge-devel --cmake-args -DCMAKE_BUILD_TYPE=Release
catkin build
Note: If compiling LeGO-LOAM returns conflicting declaration errors between FLANN and Lz4, look at this potential fix: ethz-asl/lidar_align#16 (comment)
cd subt_reference_datasets
wstool update -t kimera_ws/src
rosdep install -y -r --from-paths kimera_ws/src --ignore-src --rosdistro melodic
cd kimera_ws
catkin init
catkin config --extend ../analysis_ws/devel --merge-devel --cmake-args -DCMAKE_BUILD_TYPE=Release
catkin build
There are some example test cases included in the repository, located in the data/tunnel and data/urban directories. Scripts are provided to download a few bags from AWS to get you started (sr_B_route1.bag, a_lvl_1.bag, and a_lvl_1_uav.bag), along with companion scripts that run all included algorithms on the downloaded bags. A complete list of commands for each circuit is included in the run_commands.txt file in each directory. The associated bags must be decompressed before running these commands, as the real-time factor (RTF) is set to 1.
cd data/tunnel
. download_sr_b_r1.sh
. sample_run.sh
cd data/urban
. download_alpha_1_ugv_uav.sh
. sample_run.sh
Source the workspace containing the algorithms for experimentation, i.e. . ~/subt_reference_datasets/algorithm_ws/devel/setup.bash or . ~/subt_reference_datasets/kimera_ws/devel/setup.bash
Go to the directory where you have placed the tunnel circuit bag files; in this case the bags are in the data folder, sorted into tunnel and urban folders:
cd ~/data/tunnel_ckt
roslaunch tunnel_ckt_launch remap.launch bag:=sr_B_route2.bag rate:=2.0 odom_only:=true course:=sr config:=B
Arguments:
"bag" : Non-optional argument, specify the bag file to open for this run. This should be specified as a relative path to where CWD where roslaunch is started (it is composed with PWD)
"rviz" : Boolean parameter determining whether to launch the bag in a separate xterm window and launch RViz or to run everything in the current terminal and not launch RViz.
"name" : Default "chinook", "sherman", or "uav" matches robot name used in dataset collection.
"reproject" : Optionally reproject ouster point cloud using new settings. We may provide our ouster projection node at a later date; otherwise, the user may substitute their own or find another alternative.
"reodom" : Attempt to re-generate the platform odometry using joystick commands, to correct poor recorded odometry in configuration A bagfiles (not provided). This is experimental and should not be needed for configuration B runs.
"rate": Bag play rate multiplier.
"mark_artifacts": When set to true, the subt_scoring node will be run in marking mode, which will provide the user with an interface to code the location of artifacts for automatic scoring/ RMSE calculation. Artifacts are already coded in the coded_artifacts directory; however, users may wish to improve the coding as some artifacts were missed.
"bag_out": If true, capture an output bag file. Currently configured with our internal mapping outputs. Users should modify the rosbag record node to capture relevant data.
"course": should be either "ex" for experimental, or "sr" for safety research
"config": can be either A or B. Note that all bag files were taken in configuration B (for now).
"initialize_flat": can either be true or false. Setting this parameter to true will fix the roll and pitch between the map frame and the DARPA frame to zero. Course Alpha in the Urban Circuit dataset has execptionally poor roll and pitch alignment with the DARPA frame so setting this parameter to true for Course Alpha bag files is recommended.
"interval": double. The rate at which to republish the odometry transform as a nav_msg. This is typically used in conjunction with Cartographer for bags from the Urban Circuit dataset.
"omnimapper"
"cartographer"
"odom_only"
These parameters optionally switch on up to one mapping system. The user should be able to source into a catkin workspace from the "subt hello world" virtual challenge codebase to get Cartographer, or set it up on their own. "odom_only" will give the results of using no mapping system by substituting a static map to odom transform. Each of these options configures the subt_scoring node to write an RMSE output file.
Scoring and RMSE computation are automatically performed by the subt_scoring node, assuming the user's mapping approach provides the map -> chinook/odom transform. The node should automatically compute the DARPA frame to map frame correction by aligning to the fiducial landmarks.
The scoring node works by reading the fiducial_ex or fiducial_sr file, the ground truth file for the course configuration, and a coding file with timestamped local artifact detections. Fiducial tracking was performed automatically by the coding node, and the resulting fiducial tracks were inserted into the coding file. The global "darpa" to map frame transform is established when 3 fiducials have been observed and is revised when the fourth fiducial is observed. Only the final observation of each fiducial is used, under the assumption that it is the closest to the vehicle and therefore has the best range accuracy. Scores are updated when an artifact is observed by composing the darpa -> map transform, the user's map -> chinook/odom correction, the recorded chinook/odom -> chinook/base transform, and the coded local position. The resulting global position is compared with the ground truth file; a point is scored if the position is within 5 meters of an artifact with the same label. RMSE is updated by all artifact reports, even those too inaccurate to score a point.
Some other examples of launch commands are as follows:
Odom Only:
roslaunch tunnel_ckt_launch remap.launch bag:=sr_B_route2.bag odom_only:=true course:=sr config:=B
roslaunch urban_ckt_launch remap.launch bag:=a_lvl_1.bag odom_only:=true course:=alpha config:=2
Cartographer:
roslaunch tunnel_ckt_launch remap.launch bag:=sr_B_route2.bag cartographer:=true noodom:=true course:=sr config:=B
roslaunch urban_ckt_launch remap.launch bag:=a_lvl_1.bag cartographer:=true noodom:=true odom_mode:=odom rate:=1 odom_config_1:=true course:=alpha config:=2 interval:=0.1 initialize_flat:=true
ORB-SLAM2:
roslaunch tunnel_ckt_launch remap.launch bag:=sr_B_route2.bag orbslam:=true course:=sr config:=B
roslaunch urban_ckt_launch remap.launch bag:=a_lvl_1.bag orbslam:=true course:=alpha config:=2
Note: If you are using the compressed bags and the /clock topic is intermittent, use a playback rate of less than 1; we recommend a rate of 0.25 to 0.5.
The STIX datasets were collected with a refurbished iRobot Packbot Explorer, which has been designated "GVRBot". This robot has been augmented with many sensors which are representative of an entry to the DARPA SubT challenge. The sensor modalities chosen represent a superset of typical configurations; this allows a team to experiment with various combinations to evaluate their applicability to the SubT challenge on their specific software.
The robot is equipped with an Ouster OS1-64 (3D LiDAR), a FLIR Tau2 thermal IR camera, a Carnegie Robotics Multisense SL stereo camera + illuminators + spinning LiDAR, and a Microstrain GX5-25 IMU. The robot also carried a Point Grey Chameleon, which was a spare device and not used in this data collection. Data was saved onto an SSD in the computing payload.
The data sets were collected using ROS drivers for sensor components where available. Imagery was collected in compressed or compressedDepth format to reduce file sizes. These can be reconstructed to their raw form through the use of image_transport "republish" ROS nodes, or by using image_transport when subscribing to the topics.
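For example, a compressed stream can be expanded back to raw with image_transport's republish node. The topic names below are placeholders, not topics confirmed to exist in the bags; substitute the topics you find via rosbag info:

```shell
# Store the command so it can be inspected first; run it with: eval "$CMD"
# inside a sourced ROS environment while the bag is playing.
CMD='rosrun image_transport republish compressed in:=/multisense/left/image_color raw out:=/multisense/left/image_color_raw'
echo "$CMD"
```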
Uses the https://github.com/ouster-lidar/ouster_example driver, modified to tag output with the current system time (ros::Time::now()) instead of just using the device timestamp, which starts at zero. The timestamp on the device was not set due to lack of supporting hardware on our part; this will be rectified in future collections. To reduce jitter, the offset between system time and device time is continuously estimated and composed with the device time to produce a timestamp that is only off by an unknown delay parameter.
We have recorded the ouster packets directly to support re-generating the clouds, perhaps with better estimates of time delay, if desired. In addition, the OS1 device generates its own internal IMU data, which would have the correct timestamps for the LiDAR points. We have recorded this but not used it yet. Finally, the point clouds are also recorded at 10 Hz.
Uses full driver stack provided by Carnegie Robotics. Device was calibrated at the factory. We attempted to capture all relevant topics and calibration data.
We have provided the thermal IR data from this sensor on the topic cv_camera/image_raw/compressed.
Intrinsic calibration of this sensor was not performed, so the camera_info message should not be used as-is. An interested user could attempt to calibrate by looking at common features between the thermal IR image and the Multisense SL imagery, such as lights which show up in both.
Raw Microstrain IMU data is recorded. It is also incorporated, together with the platform's odometry (also recorded separately), into a gvrbot/odom to gvrbot/base (base_link?) transform. This can be stripped out if desired through the use of the tf_hijacker node, which is provided in the bitbucket site at https://bitbucket.org/subtchallenge/subt_reference_datasets/src/master/. This project also contains helpful launch files which can be used to run these bag files.
We were running our own mapping system while collecting this data, which results in the TF tree containing a map to gvrbot/odom frame. When evaluating your own mapping system, this frame will need to be stripped through the use of the tf_hijacker node, which is provided in the bitbucket site.
The FLIR data cuts out near the end of the long loop bag file.
Extrinsic calibration of sensor positions is quite rough and might be insufficient to generate highly accurate maps. Sufficiently motivated parties could use the tf_hijacker node to remove inaccurate transforms, which could then be re-inserted through the use of a tf2_ros/static_transform_publisher.
- subt_edgar_hires_2019-04-11-13-31-25.bag
  - Description: Main loop plus drilling museum. Total length ~26 minutes.
  - Problems:
    - FLIR cuts out at 959 seconds out of 1599 seconds of total run. This means that we didn't see the last Rescue Randy near the ARMY entrance; the robot was in the paved concrete branch off of the ARMY tunnel. FLIR also would have been useful to see at least one more cell phone at the ARMY tunnel.
    - Got a good look at the ARMY tunnel entrance gate as well as the initial MIAMI tunnel entrance, but the bag file stops short of re-observing the MIAMI tunnel and getting back in to close the loop. A team was setting up for their run by the time we got back to the MIAMI staging area.
- Smoke tests:
  - subt_edgar_hires_2019-04-12-15-46-54.bag: Has FLIR; starts just outside the smoke and makes an approach to the "survivor".
  - subt_edgar_hires_2019-04-12-15-52-44.bag: No FLIR data in this run, but still has the other sensors. Sees the survivor a few more times.
- Dust tests:
  - Three bag files with good FLIR and all sensors working correctly. These bag files were taken at the steep incline at the back of the MIAMI tunnel. The robot follows closely behind a person who is kicking up a lot of dust into the air.