
NODAR Cloud
Stereo Vision as a Service
Transform 2D video into dense 3D depth
Cloud-based processing to match your
development and AI training needs.
What's NODAR Cloud?
NODAR Cloud is an image-labeling service that annotates the depth of every pixel in a camera image.
2-4x the depth accuracy and 10-50x the point cloud density of LiDAR.
Uses images from two cameras with overlapping views.
Processes both images as a stereo pair to generate a depth map.
Input: RGB (upper left); Outputs: Depth Map (upper right), 3D Point Cloud (bottom)
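The conversion from a stereo pair to depth rests on the pinhole relation depth = f · B / disparity. Below is a minimal sketch of that final triangulation step (generic stereo math, not NODAR's proprietary matching or calibration pipeline), assuming the disparity map has already been computed from a rectified pair:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Triangulate per-pixel depth (meters) from a disparity map (pixels).

    Assumes a rectified stereo pair with focal length `focal_px` (pixels)
    and camera separation `baseline_m` (meters)."""
    d = np.asarray(disparity_px, dtype=np.float64).copy()
    d[d <= 0] = np.nan                   # mask invalid / unmatched pixels
    return focal_px * baseline_m / d     # depth = f * B / disparity

# Example: a 64 px disparity seen by a 1000 px focal length, 1.2 m baseline
# rig corresponds to a depth of 1000 * 1.2 / 64 = 18.75 m.
```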
Key Features
Cloud-Based Processing
Transform raw video from 2 cameras into precise 3D data using our secure cloud-hosted platform optimized for scale.
Sub-Pixel Calibration
Patented algorithms calibrate and rectify each image pair, yielding exceptionally crisp and precise point clouds, even in low-visibility and high-vibration environments.
ML-Ready Data
Generate dense 6D (xyzrgb) point clouds from raw 2D images, ideal for training perception systems.
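As an illustration of what an "ML-ready" xyzrgb point looks like, the sketch below back-projects a depth map and its RGB image into an N x 6 array using a generic pinhole intrinsics model (an assumption for illustration, not NODAR's internal format):

```python
import numpy as np

def depth_to_xyzrgb(depth_m, rgb, fx, fy, cx, cy):
    """Back-project a depth map plus RGB image into an N x 6
    (x, y, z, r, g, b) point array.

    Pinhole intrinsics (fx, fy, cx, cy) are in pixels; depth_m is an
    (H, W) depth map in meters and rgb an (H, W, 3) color image."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    cols = rgb.reshape(-1, 3).astype(np.float64)
    valid = np.isfinite(pts[:, 2])                   # drop pixels without depth
    return np.hstack([pts[valid], cols[valid]])
```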
Ultra-Long Range Depth from Wide-Baseline Stereo
Best-in-class stereo processing with support for all stereo camera baselines, uniquely enabling wide-baseline stereo for unmatched precision, greater accuracy, and industry-leading range.
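The range claim follows from the first-order stereo error model dz ≈ z² · δd / (f · B): depth error grows with the square of range but shrinks linearly with baseline B. A quick sanity check, using an assumed focal length and sub-pixel disparity accuracy (illustrative numbers, not NODAR specifications):

```python
def depth_error_m(z_m, focal_px, baseline_m, disp_err_px):
    """First-order depth uncertainty of a stereo rig at range z_m (meters)."""
    return (z_m ** 2) * disp_err_px / (focal_px * baseline_m)

# At 100 m range, assuming f = 2400 px and 0.1 px sub-pixel disparity accuracy:
narrow = depth_error_m(100, 2400, 0.2, 0.1)   # 20 cm baseline -> ~2.08 m error
wide   = depth_error_m(100, 2400, 1.2, 0.1)   # 1.2 m baseline -> ~0.35 m error
```

Under these assumed numbers, the wide baseline brings the error to roughly 0.35% of range, consistent with the sub-0.5% depth error at 100 m cited elsewhere on this page.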
Key Use Cases
Train Monocular Depth Networks
Generate dense training data to improve monocular depth estimation performance across diverse environments.
Ground Truth for Localization and Mapping
Create accurate base maps and precise spatial references for testing and validating localization algorithms in complex environments.
Train Stereo Vision Networks
Refine stereo algorithms with high-quality ground truth data that matches your exact camera configuration.
End-to-End Autonomous Driving Networks
Train comprehensive perception systems with consistent, aligned data for improved decision-making.
Real-Time Processing
Evaluate Hammerhead in Real Time
Test the live performance of NODAR’s stereo engine for AV/ADAS development and system integration.
Near Real-Time 3D for Monitoring
Generate high-quality point clouds for security, surveillance, and remote monitoring.
Real-Time Processing
For Testing and Evaluation of Stereo Performance in Autonomous Systems
Immediate 3D output from synchronized video
Ideal for AV/ADAS R&D, prototyping, edge-case testing
Evaluate the performance of NODAR Hammerhead stereo engine in real-world conditions
Ground Truth Processing
High-Fidelity, Offline Depth Data Solutions for AI Training and Validation
Dense, pixel-aligned point clouds from raw stereo video
Ultra-long range with <0.5% depth error at 100m
Ideal for AI training, perception validation, 3D mapping
Upload at scale & receive ML-ready outputs from cloud
Why NODAR Cloud?
Superior Ground Truth for Monocular Depth Learning
Ultra-wide baseline stereo delivers dense, high-res point clouds with <0.5% depth error at 100m.
Ideal for supervising or fine-tuning monocular depth networks via supervised, semi-supervised, or distillation methods.
Outperforms sparse, misaligned, and range-limited LiDAR-based pseudo-ground truth.
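In the supervised setting, training a monocular network against stereo-derived ground truth amounts to dense regression with invalid pixels masked out. A framework-agnostic sketch of such a loss (the L1 form and NaN masking are illustrative choices, not a prescribed NODAR recipe):

```python
import numpy as np

def masked_l1_depth_loss(pred_m, gt_m):
    """Mean absolute depth error over pixels with valid ground truth.

    Ground-truth pixels without a stereo match are NaN and are ignored,
    so holes in the supervision do not poison the loss."""
    mask = np.isfinite(gt_m)
    return float(np.mean(np.abs(pred_m[mask] - gt_m[mask])))
```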
RGB Aligned, Spatially and Temporally
Unlike LiDAR, which captures data asynchronously and sparsely, NODAR delivers dense, pixel-level xyz + rgb directly from the image plane.
No need for extrinsic calibration, sensor synchronization, or reprojection
Eliminates common ground truth failures from camera + LiDAR fusion.
Resolution & Density Matter
Stereo achieves megapixel-scale disparity with sub-pixel accuracy.
128-line LiDAR offers only 0.1–1% of stereo’s spatial density.
Stereo data can be used to train neural networks to handle scenarios where LiDAR often fails, like wet or reflective ground, transparent objects, dark materials (e.g., tires), and challenging weather conditions such as rain, fog, and dust.
Cost and Scalability
LiDAR ground truth capture requires multi-sensor rigs and hand-tuned fusion pipelines.
Stereo uses only passive RGB cameras: cheaper, simpler, and daylight-optimized.
High-quality ground truth at a fraction of the cost enables massive dataset generation for model training and validation.
Ideal for Domain Adaptation & Transfer Learning
Train monocular networks using stereo-derived depth across varied weather, lighting, and scenes.
Augment or validate synthetic pipelines and multi-modal architectures.
How It Works
How to Use NODAR Cloud
Use Your Own Setup (hosted or cloud)
Capture time-synchronized video using your existing stereo cameras and upload to our cloud platform for processing. Compatible with various camera configurations with baselines from 10cm to 3m. Also available as an on-premise solution for privacy and security.
Use the NODAR HDK
Purchase NODAR's Hammerhead Development Kit for a complete, factory-calibrated stereo camera system optimized for ground-truth capture. Includes all hardware, sync cables, and mounting options.
Case Study
Ottometric uses NODAR Cloud to Validate ADAS System Performance
Ottometric is a cutting-edge company providing validation and analytics tools for ADAS and autonomous vehicle (AV) systems. Ottometric’s platform automates the time-intensive task of reviewing sensor and video data to ensure compliance with safety standards, enabling faster development cycles and higher confidence in system performance.
Challenge
As Ottometric continued to push the boundaries of AV validation, the company sought highly precise, long-range 3D ground-truth data to train, benchmark, and evaluate their systems, especially in real-world traffic conditions.
Solution
Ottometric partnered with NODAR to support their validation efforts for an automotive OEM. For this project, Ottometric collected large amounts of 2D video data across a variety of scenarios using an ultra-wide-baseline stereo vision system mounted on the vehicles (two 5 MP cameras separated by 1.2 m). NODAR’s cloud-based Groundtruth system was then run on Ottometric’s private cloud to securely process the terabytes of data, converting the 2D image pairs into high-resolution 3D point clouds with accurate range measurements beyond 200 meters. These data were then used to validate and train Ottometric’s customer’s ADAS systems, all while maintaining data security and privacy.
Results and Impact
NODAR’s system showed remarkable alignment between theoretical and actual depth precision, with errors as low as 0.05 meters for objects more than 100 meters away.
Flat traffic signs were consistently and accurately reconstructed, validating NODAR’s precision modeling.
Detections were achieved at extreme distances, up to 247 meters, showcasing NODAR's advantage over high-resolution LiDAR and monocular camera approaches.
Ottometric was able to use this high-fidelity ground truth data to refine their detection algorithms and enhance validation workflows.
Impact
The project demonstrated how standard cameras can be combined with NODAR Groundtruth to easily and rapidly create large amounts of highly accurate, long-range 3D point cloud data for training and validating AI systems in mobility applications. Compared with LiDAR-generated data, NODAR Groundtruth output is roughly 10x higher resolution, 2-3x longer range, more robust in adverse conditions, and a fraction of the cost.
Perfect for Non-Real-Time Applications
Input Requirements
Time Synchronization
±100µs for dynamic scenes, ≤1ms for static scenes
Camera Baseline
Supported from 10cm to 3m
Required Parameters
Intrinsics/extrinsics (baseline, theta, phi, Tx, Ty, distortion)
Supported Formats
MP4, AVI, and raw formats with timestamps
Resolution
Up to 4K per camera
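Put together, a single upload might be described by metadata like the following. This is a hypothetical sketch only: the field names and example values are assumptions for illustration, not NODAR Cloud's documented API.

```python
# Hypothetical per-capture metadata for a NODAR Cloud upload; every field
# name and value here is illustrative, not the documented schema.
capture_metadata = {
    "left_video": "front_left.mp4",    # MP4/AVI/raw formats with timestamps
    "right_video": "front_right.mp4",
    "sync_tolerance_us": 100,          # ±100 µs required for dynamic scenes
    "intrinsics": {
        "focal_px": 2400.0,            # example value for a 5 MP camera
        "cx": 1296.0,
        "cy": 972.0,
        "distortion": [0.0, 0.0, 0.0, 0.0, 0.0],
    },
    "extrinsics": {                    # baseline, theta, phi, Tx, Ty
        "baseline_m": 1.2,             # supported range: 10 cm to 3 m
        "theta_rad": 0.0,
        "phi_rad": 0.0,
        "Tx_m": 1.2,
        "Ty_m": 0.0,
    },
}
```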







