This video shows an ideal calibration maneuver: several loop closures in a figure-8 pattern, with nearby static structure that is tall enough to fill the cameras' fields of view.

Capturing data

tl;dr

record data while moving the robot in a figure-8 pattern near tall static structure for 30-60 seconds

record every message from every sensor (such as with rosbag record -a)

Data Format

Storage: Save data in ROS1 .bag files and/or ROS2 .mcap or .db3 files.

Message types: Use standard ROS message types whenever possible.

Rosbag requirements: Split bag files are acceptable. If multiple ROS bags are submitted, they should not contain duplicates of the same message (this will happen if multiple instances of rosbag record are subscribed to the same topic).
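One way to catch the duplicate-message problem before submitting is to key every message on its topic and timestamp and look for repeats. The helper below is a hypothetical sketch (not an MSA or ROS tool) that operates on plain tuples, assuming two messages with identical topic and stamp are duplicate recordings:

```python
def find_duplicates(messages):
    """Return (topic, stamp) keys that appear more than once.

    `messages` is an iterable of (topic, stamp, payload) tuples, as they
    might be read out of several bag files. Two messages with the same
    topic and timestamp are assumed to be duplicate recordings.
    """
    seen, dups = set(), set()
    for topic, stamp, _payload in messages:
        key = (topic, stamp)
        if key in seen:
            dups.add(key)
        seen.add(key)
    return dups

# Two bags that both recorded /imu at t=1.00 produce one duplicate key.
msgs = [("/imu", 1.00, b"a"), ("/imu", 1.01, b"b"), ("/imu", 1.00, b"a")]
dups = find_duplicates(msgs)
```

In practice you would populate `messages` by iterating over each bag with your bag-reading library of choice; any nonempty result means two recorders were subscribed to the same topic.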

If you don't use ROS, get in touch with us about alternative data formats we support.

Cameras

Image Rate: Aim for >5Hz.

Image Format: Compressed RGB images (CompressedImage) are preferred, without processing besides compression.

Resolution: If possible, provide images at the sensor's full resolution.

Depth cameras

Dot projection off: You are required to turn off dot projection for depth camera images.

Range awareness: Depth cameras produce the best data within only a short range, so it's very important that they see nearby (<1m away) tall static structure during data collection.

Lidars

Capture data in as raw a form as possible: You are required to turn off any pre-transforming of lidar point clouds or motion compensation. MSA will ingest the raw points reported by the sensor.

Prefer the lidar's native output: If your lidar produces raw packets, record and send those if possible. Also include any topics that contain lidar configuration or intrinsics information.

Per-point timestamps: All major lidar manufacturers support per-point timestamps. The easiest way to get them is to record the raw output of the lidar. This may require some configuration of the driver.

Wheel Encoders

Since there is no ROS standard message for wheel encoders, we support three formats:

  1. (Preferred): Use a JointState message with position or velocity populated, including timestamps.
  2. Separate messages: Publish two Float64 messages on separate topics, one per wheel.
  3. Array format: Send a single Float64MultiArray message with two elements, one per wheel.

We prefer tick data when available. If ticks aren’t accessible, provide wheel speeds instead.
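Since tick data is preferred and format 1 is a JointState with position populated, the conversion from cumulative encoder ticks to a wheel angle can be sketched as follows. The helper names and `TICKS_PER_REV` value are hypothetical (use your encoder's actual resolution), and a plain dict stands in for `sensor_msgs/JointState` so the sketch runs without ROS installed:

```python
import math

TICKS_PER_REV = 4096  # hypothetical encoder resolution; use your encoder's spec


def ticks_to_position(ticks, ticks_per_rev=TICKS_PER_REV):
    """Convert cumulative encoder ticks to a wheel angle in radians."""
    return 2.0 * math.pi * ticks / ticks_per_rev


def make_joint_state(stamp_sec, left_ticks, right_ticks):
    """Build a dict mirroring sensor_msgs/JointState fields (format 1 above).

    In a real ROS node this would be a sensor_msgs.msg.JointState message;
    a plain dict is used here so the sketch runs without ROS installed.
    """
    return {
        "header": {"stamp": stamp_sec},  # time-of-measurement, not arrival time
        "name": ["wheel_left", "wheel_right"],
        "position": [ticks_to_position(left_ticks),
                     ticks_to_position(right_ticks)],
        "velocity": [],  # optional; populate instead if only speeds are available
    }


# Half a revolution on the left wheel, one full revolution on the right.
msg = make_joint_state(1700000000.0, 2048, 4096)
```

If only wheel speeds are accessible, the same message shape applies with `velocity` populated instead of `position`.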

Compression

RGB Camera Images: Optionally, compress images into CompressedImage ROS messages.

All Other Data: Store uncompressed.

Timestamps

Calibration requires a timestamp on every sensor measurement, and it is most useful when those timestamps are accurate. More specifically:

Required: every measurement must have a timestamp: This includes all camera images, individual lidar points, depth images, GPS samples, IMU samples, wheel encoder ticks and speeds, and radar points.

  • All major lidar manufacturers' drivers support per-point timestamping, but some configuration may be required to include per-point timestamping in the ROS PointCloud2 message. If available, it is preferable for you to provide the raw lidar packets from the lidar. Please also store any lidar intrinsics messages.
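To illustrate what per-point timestamping means at the byte level, the sketch below unpacks a per-point time offset from a packed point buffer and converts it to an absolute stamp. The point layout assumed here (x, y, z as float32 plus a per-point offset t as float32 seconds since the start of the scan) is an assumption for illustration only; real drivers vary, so inspect the `fields` array of your driver's PointCloud2 output:

```python
import struct

# Hypothetical point layout: x, y, z (float32) followed by a per-point time
# offset t (float32, seconds since the start of the scan). Real drivers vary;
# check the PointCloud2 `fields` list published by your lidar driver.
POINT_FMT = "<ffff"
POINT_STEP = struct.calcsize(POINT_FMT)  # 16 bytes per point


def per_point_stamps(data, scan_stamp):
    """Yield an absolute timestamp for each point in a packed point buffer."""
    for off in range(0, len(data), POINT_STEP):
        _x, _y, _z, t = struct.unpack_from(POINT_FMT, data, off)
        yield scan_stamp + t


# Pack two fake points 10 ms apart and recover their absolute stamps.
buf = (struct.pack(POINT_FMT, 1.0, 2.0, 0.5, 0.00)
       + struct.pack(POINT_FMT, 1.1, 2.1, 0.5, 0.01))
stamps = list(per_point_stamps(buf, scan_stamp=100.0))
```

If your driver omits the time field entirely, that is usually the configuration change referred to above; recording the raw lidar packets sidesteps the issue.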

Strongly recommended: timestamp using time-of-measurement: Use timestamps that indicate when the measurement was taken, not when the data arrived in the log.

Optional: rough time synchronization: if possible, timestamps should all be generated using clocks that are nearly in sync. This means that the clocks should report the same time to within about 200ms. This makes time offset calibration possible.

  • Time offset calibration is not meaningful if sensors are recording in different time domains (e.g. one clock is unix time and another is time-since-startup). In order to enable time-offset calibration for a sensor, MSA requires rough time synchronization.
  • This is a much looser requirement than PTP hardware time sync, which achieves far better than 200ms of agreement between clocks. You are welcome to use PTP.
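A rough sync check can be sketched as follows: compare the first timestamp from each sensor stream and flag sets whose clocks disagree by more than about 200ms. The helper is hypothetical (not an MSA tool), and comparing first timestamps is only a coarse proxy for clock skew, but it readily exposes the different-time-domain case described above:

```python
MAX_SKEW_S = 0.2  # rough-sync threshold from the guideline above (~200 ms)


def clocks_roughly_synced(stamps_by_sensor):
    """Return True if every sensor's first timestamp lies within MAX_SKEW_S.

    `stamps_by_sensor` maps a sensor name to its list of message timestamps
    in seconds. Sensors in different time domains (e.g. unix time vs
    time-since-startup) will disagree by far more than the threshold.
    """
    first = [stamps[0] for stamps in stamps_by_sensor.values() if stamps]
    if not first:
        return True
    return (max(first) - min(first)) <= MAX_SKEW_S


ok = clocks_roughly_synced({
    "camera": [1700000000.05],
    "lidar":  [1700000000.12],
    "imu":    [1700000000.00],
})
bad = clocks_roughly_synced({
    "camera": [1700000000.0],  # unix time
    "lidar":  [42.0],          # time-since-startup: a different time domain
})
```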

TF Messages

Optional: TF tree: if you have one available, include a TF tree with roughly correct extrinsics for all sensors. If some are totally wrong or missing, that's OK. The TF tree is simply an aid to the human who will carefully review your first data upload; it provides a starting point for understanding the sensor layout. During subsequent calibrations, any TF data is ignored.

Calibration motion: move the robot close to tall structure

Capture data as follows:

  1. Start recording with the system stationary, positioned very close to tall static structure that fills the field of view of some sensors
    • Ideally start within 1m of the static structure
    • Ideal static structure is textured, including:
      • Brick walls, door frames
      • Trees
      • Furniture
      • Pallets or shelving with boxes
      • Stationary vehicles or robots
  2. Move the system in two figure-8 patterns
    • Stay as close as you can to tall static structure the whole time!
    • If the system is too large to move in a figure-8 pattern, or constrained by the environment, perform two three-point turns

Tips:

  • Move the system manually, or via teleoperation, or autonomously.
  • Ensure operators don’t fully obscure any sensor's view.
  • Move quickly enough to complete all motions within about 60 seconds.
    • A walking or jogging pace is usually appropriate for smaller robots, while a parking-lot-appropriate driving speed works well for larger robots.

This procedure can be heavily modified to suit your needs. Contact us to discuss complicated scenarios.

Recording length

30-60 seconds: Keep the recording as short as possible. Start recording immediately before the movement begins and stop as soon as it completes. Try to finish the movement in under 60 seconds.

Check quality

Before submitting data

  • check that each sensor is represented by a topic
  • check that each topic contains messages at the expected rate

If you are using ROS, rosbag info is useful for quality checks.
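The per-topic rate check can also be done programmatically. The sketch below is a hypothetical helper (not an MSA tool): given the message timestamps recorded on each topic, it estimates the average rate and flags topics falling well below what the sensor should produce:

```python
def average_rate_hz(stamps):
    """Average message rate in Hz over a sorted list of timestamps (seconds)."""
    if len(stamps) < 2:
        return 0.0
    return (len(stamps) - 1) / (stamps[-1] - stamps[0])


def check_rates(stamps_by_topic, expected_hz, tolerance=0.5):
    """Return topics whose observed rate falls below tolerance * expected.

    `expected_hz` maps each topic to the rate its sensor should produce;
    `tolerance=0.5` flags anything running at less than half that rate.
    """
    return [topic for topic, stamps in stamps_by_topic.items()
            if average_rate_hz(stamps) < tolerance * expected_hz[topic]]


# A camera expected at 10 Hz but logging at ~2 Hz should be flagged;
# an IMU running at its full 100 Hz should not.
bad = check_rates(
    {"/camera/image": [0.0, 0.5, 1.0, 1.5, 2.0],
     "/imu":          [0.0, 0.01, 0.02, 0.03]},
    expected_hz={"/camera/image": 10.0, "/imu": 100.0},
)
```

The timestamp lists would come from whatever bag-reading tool you use; the summary printed by rosbag info gives the same message counts and durations at a glance.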