[ NODAR Ground Truth ]

Transform 2D video into high-resolution, long-range 3D Ground Truth

Scalable, cloud-based platform for rapidly generating high-fidelity ground truth data sets for training AI perception systems.

What's NODAR Ground Truth?

NODAR Ground Truth is an image labeling service for annotating the depth of every pixel in a camera image.

Delivers 2-4x the depth accuracy and 10-50x the point cloud density of LiDAR.

Uses images from two cameras with overlapping views.

Processes both images as a stereo pair to generate a dense depth map (sketched in code below).

[Figure: input RGB image (upper left); outputs: depth map (upper right) and 3D point cloud (bottom)]
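To make the stereo-pair step concrete, here is a minimal sketch using OpenCV's generic semi-global block matcher. This is not NODAR's Hammerhead engine, and the focal length and baseline values are illustrative assumptions.

```python
import cv2
import numpy as np

# Load a rectified stereo pair (file names are placeholders).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global matching returns disparity in 1/16-pixel fixed point.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth from similar triangles: Z = f * B / d.
FOCAL_PX = 1400.0    # assumed focal length in pixels
BASELINE_M = 1.2     # assumed camera separation in meters
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```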

Key Features

Cloud-Based Processing

Transform raw video from 2 cameras into precise 3D data using our secure cloud-hosted platform optimized for scale.

Sub-Pixel Calibration

Patented algorithms calibrate and rectify each image pair, yielding exceptionally crisp and precise point clouds, even in low-visibility, high-vibration environments.
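NODAR's patented self-calibration is proprietary; for orientation, this is what the classic calibrate-and-rectify step it improves upon looks like with OpenCV, under assumed pinhole parameters:

```python
import cv2
import numpy as np

# Assumed parameters for a 1.2 m baseline rig; a real rig would use
# measured intrinsics and extrinsics.
w, h = 1920, 1080
K = np.array([[1400.0, 0.0, w / 2], [0.0, 1400.0, h / 2], [0.0, 0.0, 1.0]])
D = np.zeros(5)                    # assume negligible lens distortion
R = np.eye(3)                      # relative rotation between the cameras
T = np.array([-1.2, 0.0, 0.0])     # translation: 1.2 m baseline along x

R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K, D, K, D, (w, h), R, T)
mapx, mapy = cv2.initUndistortRectifyMap(K, D, R1, P1, (w, h), cv2.CV_32FC1)
# left_rectified = cv2.remap(left_raw, mapx, mapy, cv2.INTER_LINEAR)
```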

ML-Ready Data

Generate dense 6D (xyzrgb) point clouds from raw 2D images, ideal for training perception systems.
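A minimal sketch of how a depth map and its RGB frame combine into xyzrgb points under a pinhole camera model; the function and parameter names are illustrative, not NODAR's API:

```python
import numpy as np

def depth_to_xyzrgb(depth, rgb, fx, fy, cx, cy):
    """Back-project a dense depth map into an N x 6 (x, y, z, r, g, b) array."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3).astype(np.float32)
    keep = points[:, 2] > 0          # drop pixels with no valid depth
    return np.hstack([points[keep], colors[keep]])
```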

Ultra-Long Range Depth from Wide-Baseline Stereo

Best-in-class stereo processing supports all stereo camera baselines and uniquely enables wide-baseline stereo for greater accuracy, higher precision, and industry-leading range.
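The wide-baseline claim follows from the first-order stereo error model, where depth error grows as dZ ≈ Z² · Δd / (f · B). A back-of-envelope comparison with assumed values:

```python
# Assumed values for illustration only.
f_px = 1400.0           # focal length in pixels
delta_d = 0.1           # sub-pixel disparity error
Z = 100.0               # range in meters

for B in (0.12, 1.2):   # narrow vs. wide baseline, in meters
    dZ = Z ** 2 * delta_d / (f_px * B)
    print(f"baseline {B:4.2f} m -> ~{dZ:.2f} m depth error at {Z:.0f} m")

# baseline 0.12 m -> ~5.95 m depth error at 100 m
# baseline 1.20 m -> ~0.60 m depth error at 100 m
```

A tenfold wider baseline cuts depth error tenfold at the same range, which is why wide-baseline rigs can reach sub-percent depth error at long distances.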

Key Use Cases

Train Monocular Depth Networks

Generate dense training data to improve monocular depth estimation performance across diverse environments.

Ground Truth for Localization and Mapping

Create accurate base maps and precise spatial references for testing and validating localization algorithms in complex environments.

Train Stereo Vision Networks

Refine stereo algorithms with high-quality ground truth data that matches your exact camera configuration.

End-to-End Autonomous Driving Networks

Train comprehensive perception systems with consistent, aligned data for improved decision-making.

Why NODAR Ground Truth?

Superior Ground Truth for Monocular Depth Learning

Ultra-wide-baseline stereo delivers dense, high-resolution point clouds with <0.5% depth error at 100 m.

Ideal for supervising or fine-tuning monocular depth networks via supervised, semi-supervised, or distillation methods (see the training sketch below).

Outperforms sparse, misaligned, and range-limited LiDAR-based pseudo-ground truth.
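As referenced above, a minimal supervised-training sketch: a monocular network fit to stereo-derived depth with the scale-invariant log loss of Eigen et al. (2014). The model, data loader, and optimizer are placeholders, not NODAR components.

```python
import torch

def silog_loss(pred, gt, valid, lam=0.85):
    """Scale-invariant log loss over pixels with valid ground-truth depth."""
    d = torch.log(pred[valid]) - torch.log(gt[valid])
    return torch.sqrt((d ** 2).mean() - lam * d.mean() ** 2)

# Sketch of the training step (model/loader/optimizer are placeholders):
# for rgb, gt_depth in loader:
#     pred = model(rgb)                          # H x W depth prediction
#     loss = silog_loss(pred, gt_depth, gt_depth > 0)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```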

RGB Aligned, Spatially and Temporally

Unlike LiDAR, which captures data asynchronously and sparsely, NODAR delivers dense, pixel-level xyz + rgb directly from the image plane.

No need for extrinsic calibration, sensor synchronization, or reprojection.

Eliminates common ground truth failures from camera + LiDAR fusion.

Resolution & Density Matter

Stereo achieves megapixel-scale disparity with sub-pixel accuracy.

128-line LiDAR offers only 0.1–1% of stereo’s spatial density (see the back-of-envelope calculation below).

Stereo data can be used to train neural networks to handle scenarios where LiDAR often fails, like wet or reflective ground, transparent objects, dark materials (e.g., tires), and challenging weather conditions such as rain, fog, and dust.
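The density figure above can be sanity-checked with assumed sensor parameters: a 128-beam LiDAR with 2048 azimuth steps per sweep, against a 5 MP stereo camera, counting only the LiDAR returns that land inside a roughly 60-degree camera field of view.

```python
# All sensor parameters below are assumptions for illustration.
lidar_points_per_sweep = 128 * 2048                        # ~262k over 360 deg
lidar_in_camera_fov = lidar_points_per_sweep * 60 / 360    # ~44k in frame
stereo_points = 5_000_000                                  # up to one per pixel

print(f"LiDAR/stereo density: {lidar_in_camera_fov / stereo_points:.1%}")
# -> ~0.9%, within the 0.1-1% range quoted above
```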

Cost and Scalability

LiDAR ground truth capture requires multi-sensor rigs and hand-tuned fusion pipelines.

Stereo uses only passive RGB cameras: cheaper, simpler, and daylight-optimized.

High-quality ground truth at a fraction of the cost enables massive dataset generation for model training and validation.

Ideal for Domain Adaptation & Transfer Learning

Train monocular networks using stereo-derived depth across varied weather, lighting, and scenes.

Augment or validate synthetic pipelines and multi-modal architectures.

How It Works

Capture Video

Collect synchronized stereo video in the field using your cameras or our HDK.

Upload Securely

Transfer your captured footage to NODAR's cloud platform or an on-premise instance (an illustrative upload sketch follows these steps).

Process Automatically

NODAR's Hammerhead stereo engine processes the data and generates depth maps.

Download Results

Access your processed data via our secure portal or direct link in industry-standard formats ready for your ML pipeline.
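For the upload step referenced above, a transfer script might look like the following; the endpoint, token, and field names are hypothetical, not NODAR's actual API.

```python
import requests

API_URL = "https://groundtruth.example.com/v1/jobs"   # hypothetical endpoint
API_TOKEN = "YOUR_API_TOKEN"                          # hypothetical credential

with open("stereo_capture.mp4", "rb") as footage:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        files={"footage": footage},
        data={"baseline_m": "1.2", "format": "mp4"},
    )

response.raise_for_status()
print("job submitted:", response.json().get("id"))
```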

How to Use NODAR Ground Truth

Use Your Own Setup (cloud or on-premise)

Capture time-synchronized video using your existing stereo cameras and upload it to our cloud platform for processing. Compatible with a wide range of camera configurations, with baselines from 10 cm to 3 m. Also available as an on-premise solution for privacy and security.

Use the NODAR HDK

Purchase NODAR's Hammerhead Development Kit (HDK) for a complete, factory-calibrated stereo camera system optimized for ground truth capture. Includes all hardware, sync cables, and mounting options.

Case Study

Ottometric Uses NODAR Ground Truth to Validate ADAS System Performance

Ottometric is a cutting-edge company providing validation and analytics tools for ADAS and autonomous vehicle (AV) systems. Ottometric’s platform automates the time-intensive task of reviewing sensor and video data to ensure compliance with safety standards, enabling faster development cycles and higher confidence in system performance.

Challenge

As Ottometric continued to push the boundaries of AV validation, the company sought highly precise, long-range 3D ground truth data to train, benchmark, and evaluate its systems, especially in real-world traffic conditions.

Solution

Ottometric partnered with NODAR to support its validation efforts for an automotive OEM. For this project, Ottometric collected large amounts of 2D video data across a variety of scenarios using an ultra-wide-baseline stereo vision system mounted on the vehicles (two 5 MP cameras separated by 1.2 m). NODAR’s cloud-based Ground Truth system was then run on Ottometric’s private cloud to securely process the terabytes of data, converting the 2D image pairs into high-resolution 3D point clouds with accurate range measurements beyond 200 meters. These data were then used to validate and train Ottometric’s customer’s ADAS systems, all while maintaining data security and privacy.

Results and Impact

NODAR’s system showed remarkable alignment between theoretical and actual depth precision, with errors as low as 0.05 meters for objects more than 100 meters away.

Flat traffic signs were consistently and accurately reconstructed, validating NODAR’s precision modeling.

Detections were achieved at extreme distances, up to 247 meters, showcasing NODAR’s advantage over high-resolution LiDAR and monocular camera approaches.

Ottometric was able to use this high-fidelity ground truth data to refine their detection algorithms and enhance validation workflows.

Impact

The project demonstrated how standard cameras combined with NODAR Ground Truth can easily and rapidly create large amounts of highly accurate, long-range 3D point cloud data for training and validating AI systems for mobility applications. Compared with LiDAR-generated data, NODAR Ground Truth output is roughly 10x higher resolution, 2-3x longer range, more robust in adverse conditions, and a fraction of the cost.

Perfect for Non-Real-Time Applications

AV/ADAS R&D

Generate training data for autonomous vehicle perception systems.

Simulation Validation

Compare real-world data with simulated environments for improved fidelity.

Construction & Mining

Create accurate volumetric measurements and structural analysis.

Safety Systems

Prototype and test safety features with high-precision depth data.

Infrastructure Monitoring

Track changes in infrastructure with millimeter precision.

Input Requirements

Time Synchronization

±100 µs for dynamic scenes; ≤1 ms for static scenes

Camera Baseline

Supported from 10 cm to 3 m

Required Parameters

Intrinsics/extrinsics: baseline, theta, phi, Tx, Ty, distortion (see the illustrative sketch below)

Supported Formats

MP4, AVI, raw formats with timestamps

Resolution

Up to 4K per camera
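Pulling these requirements together, a per-capture metadata record and a timestamp sync check might look like this; all field names are illustrative, not NODAR's actual schema.

```python
# Illustrative capture metadata; field names are assumptions.
capture_config = {
    "baseline_m": 1.2,               # supported range: 0.10 to 3.0
    "theta_deg": 0.0,                # relative rotation angles
    "phi_deg": 0.0,
    "Tx_m": 1.2,                     # translation components
    "Ty_m": 0.0,
    "distortion": [0.0] * 5,         # e.g. OpenCV k1, k2, p1, p2, k3
    "format": "mp4",
    "resolution": [3840, 2160],      # up to 4K per camera
}

def sync_ok(t_left_us, t_right_us, dynamic=True):
    """Check stereo timestamps against the stated sync tolerances."""
    tolerance_us = 100 if dynamic else 1000
    return abs(t_left_us - t_right_us) <= tolerance_us
```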

Talk to an Expert