What's NODAR Ground Truth?
NODAR Ground Truth is an image-labeling service that annotates the depth of every pixel in a camera image.
It delivers 2-4x the depth accuracy and 10-50x the point cloud density of LiDAR.
The service takes images from two cameras with overlapping views and processes each pair as a stereo pair to generate a dense depth map.
Figure: input RGB image (upper left); outputs: depth map (upper right) and 3D point cloud (bottom).
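As a minimal illustration of the stereo-pair-to-depth-map step, the sketch below uses generic OpenCV semi-global matching. It is not NODAR's patented pipeline; the file names, focal length, and baseline are assumed values.

```python
# Generic stereo-depth sketch (illustration only, not NODAR's algorithms).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image

# Semi-global block matching returns disparity in fixed-point 1/16-pixel units.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# Triangulation: Z = f * B / d, with f in pixels and baseline B in meters.
f_px, baseline_m = 2000.0, 1.0   # assumed camera parameters
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f_px * baseline_m / disparity[valid]
```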
Key Features
Cloud-Based Processing
Transform raw video from two cameras into precise 3D data using our secure, cloud-hosted platform optimized for scale.
Sub-Pixel Calibration
Patented algorithms calibrate and rectify each image pair, yielding exceptionally crisp and precise point clouds, even in low-visibility and high-vibration environments.
ML-Ready Data
Generate dense 6D (xyzrgb) point clouds from raw 2D images, ideal for training perception systems (a code sketch follows this feature list).
Ultra-Long Range Depth from Wide-Baseline Stereo
Best-in-class stereo processing supports all stereo camera baselines and uniquely enables wide-baseline stereo for greater accuracy and industry-leading range.
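Continuing the sketch above, the 6D (xyzrgb) output can be illustrated with OpenCV's reprojectImageTo3D, which lifts the disparity map to metric xyz coordinates that can be stacked with the left image's colors. The Q-matrix entries below are assumed values, not NODAR parameters.

```python
# Convert the disparity map from the earlier sketch into 6D xyzrgb points.
import cv2
import numpy as np

f, cx, cy, B = 2000.0, 960.0, 600.0, 1.0   # assumed intrinsics and baseline
Q = np.float32([[1, 0, 0, -cx],
                [0, 1, 0, -cy],
                [0, 0, 0,  f],
                [0, 0, 1.0 / B, 0]])        # sign chosen so Z = f * B / d > 0

rgb = cv2.imread("left.png", cv2.IMREAD_COLOR)  # rectified left image (BGR order)
xyz = cv2.reprojectImageTo3D(disparity, Q)      # H x W x 3 metric coordinates

valid = disparity > 0                           # keep only matched pixels
points_xyzrgb = np.hstack([xyz[valid], rgb[valid]])  # N x 6 ML-ready array
```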
Key Use Cases
Train Monocular Depth Networks
Generate dense training data to improve monocular depth estimation performance across diverse environments.
Ground Truth for Localization and Mapping
Create accurate base maps and precise spatial references for testing and validating localization algorithms in complex environments.
Train Stereo Vision Networks
Refine stereo algorithms with high-quality ground truth data that matches your exact camera configuration.
End-to-End Autonomous Driving Networks
Train comprehensive perception systems with consistent, aligned data for improved decision-making.
Why NODAR Ground Truth?
Superior Ground Truth for Monocular Depth Learning
Ultra-wide-baseline stereo delivers dense, high-resolution point clouds with <0.5% depth error at 100 m (see the worked example after this list).
Ideal for supervising or fine-tuning monocular depth networks via supervised, semi-supervised, or distillation methods.
Outperforms sparse, misaligned, and range-limited LiDAR-based pseudo-ground truth.
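That error figure is consistent with the standard stereo error model, dZ = Z² · ε / (f · B). The focal length, baseline, and sub-pixel accuracy below are assumed for illustration, not NODAR specifications.

```python
# Back-of-envelope stereo depth-error model (assumed values, illustration only).
f_px = 2000.0      # focal length in pixels
baseline_m = 1.0   # wide stereo baseline in meters
eps_px = 0.1       # sub-pixel disparity accuracy
Z_m = 100.0        # range of interest

dZ_m = Z_m**2 * eps_px / (f_px * baseline_m)   # = 0.5 m
print(f"error at {Z_m:.0f} m: {dZ_m:.2f} m ({100 * dZ_m / Z_m:.2f}%)")  # 0.50%
```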
RGB Aligned, Spatially and Temporally
Unlike LiDAR, which captures data asynchronously and sparsely, NODAR delivers dense, pixel-level xyz + rgb directly from the image plane.
No need for extrinsic calibration, sensor synchronization, or reprojection.
Eliminates common ground truth failures from camera + LiDAR fusion.
Resolution & Density Matter
Stereo achieves megapixel-scale disparity with sub-pixel accuracy.
128-beam LiDAR offers only 0.1–1% of stereo's spatial density (a back-of-envelope comparison follows this list).
Stereo data can be used to train neural networks to handle scenarios where LiDAR often fails, like wet or reflective ground, transparent objects, dark materials (e.g., tires), and challenging weather conditions such as rain, fog, and dust.
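A rough sanity check of that density ratio, using assumed sensor specs (a 4K stereo pair versus a 128-beam spinning LiDAR with 2048 azimuth steps per revolution):

```python
# Rough per-frame density comparison under assumed specs (illustration only).
stereo_points = 3840 * 2160                           # 4K stereo: ~8.3M depth pixels
lidar_points_360deg = 128 * 2048                      # 128-beam LiDAR, one revolution
lidar_points_in_fov = lidar_points_360deg * 90 / 360  # ~65k points in a 90° camera FOV

ratio = lidar_points_in_fov / stereo_points
print(f"LiDAR density is {100 * ratio:.2f}% of stereo's")  # ~0.79%
```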
Cost and Scalability
LiDAR ground truth capture requires multi-sensor rigs and hand-tuned fusion pipelines.
Stereo uses only passive RGB cameras: cheaper, simpler, and daylight-optimized.
High-quality ground truth at a fraction of the cost enables massive dataset generation for model training and validation.
Ideal for Domain Adaptation & Transfer Learning
Train monocular networks using stereo-derived depth across varied weather, lighting, and scenes.
Augment or validate synthetic pipelines and multi-modal architectures.
How It Works
How to Use NODAR Ground Truth
Use Your Own Setup (cloud or on-premise)
Capture time-synchronized video with your existing stereo cameras and upload it to our cloud platform for processing. Compatible with a wide range of camera configurations, with baselines from 10 cm to 3 m. Also available as an on-premise solution for privacy and security.
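Before uploading, it is worth verifying that the capture meets the synchronization tolerance listed under Input Requirements below. A minimal, hypothetical pre-flight check (how timestamps are extracted from your recordings is left as an assumption):

```python
# Hypothetical pre-upload check that left/right frame timestamps meet the
# ±100 µs dynamic-scene tolerance; timestamp extraction is assumed.
def check_sync(left_ts_us, right_ts_us, tol_us=100):
    """Return True if every frame pair is synchronized within tol_us."""
    worst = max(abs(l - r) for l, r in zip(left_ts_us, right_ts_us))
    print(f"worst pair skew: {worst} µs (tolerance: {tol_us} µs)")
    return worst <= tol_us

# Fabricated timestamps in microseconds since capture start:
check_sync([0, 33333, 66666], [40, 33310, 66720])  # worst skew 54 µs -> True
```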
Use the NODAR HDK
Purchase NODAR's Hammerhead Development Kit for a complete, factory-calibrated stereo camera system optimized for ground truth capture. Includes all hardware, sync cables, and mounting options.
Case Study
Ottometric Uses NODAR Ground Truth to Validate ADAS System Performance
Ottometric is a cutting-edge company providing validation and analytics tools for ADAS and autonomous vehicle (AV) systems. Ottometric’s platform automates the time-intensive task of reviewing sensor and video data to ensure compliance with safety standards, enabling faster development cycles and higher confidence in system performance.
Perfect for Non-Real-Time Applications
Input Requirements
Time Synchronization: ±100 µs for dynamic scenes, ≤1 ms for static scenes
Camera Baseline: 10 cm to 3 m supported
Required Parameters: intrinsics/extrinsics (baseline, theta, phi, Tx, Ty, distortion)
Supported Formats: MP4, AVI, and raw formats with timestamps
Resolution: up to 4K per camera
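For illustration, the required calibration metadata might be organized as below. The field names mirror the parameter list above, but this exact structure is an assumption, not NODAR's upload schema.

```python
# Hypothetical calibration payload; names mirror the required-parameter list
# above, but this exact structure is an assumption, not NODAR's schema.
calibration = {
    "intrinsics": {
        "focal_length_px": 2000.0,
        "principal_point_px": [960.0, 600.0],
        "distortion": [-0.12, 0.05, 0.0, 0.0, 0.0],  # radial/tangential terms
    },
    "extrinsics": {
        "baseline_m": 1.0,    # supported range: 10 cm to 3 m
        "theta_deg": 0.02,    # relative rotation angles between cameras
        "phi_deg": -0.01,
        "Tx_m": 1.0,          # translation components
        "Ty_m": 0.0,
    },
}
```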