

TerraView

High-resolution 3D sensing for autonomous and driver-assisted mining operations. Built on the NODAR Hammerhead platform, hardened for the harshest environments on Earth.

Real-time 3D vision for extreme environments

Autonomous mining operations demand 3D perception that works where other sensors fail: inside dense dust clouds, underground with no GPS, and on heavy equipment that shakes continuously. Standard sensors lose calibration, miss obstacles, and require frequent maintenance.

TerraView is a hardened, camera-based 3D perception platform built on NODAR's patented Hammerhead stereo vision technology. It delivers high-resolution depth and object detection at ranges up to 300 meters, with frame-by-frame autocalibration that holds accuracy through continuous vibration and shock -- above ground and below.
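The accuracy figures below follow from stereo geometry: depth comes from disparity via z = f·B/d, so depth error grows with the square of range and shrinks with a wider baseline. The sketch below illustrates this relationship; the focal length, baseline, and disparity noise are assumed example values, not TerraView's actual parameters.

```python
# Illustrative stereo depth-error estimate. All parameter values are
# assumptions for illustration, not TerraView specifications.

def depth_error(z_m, focal_px, baseline_m, disparity_noise_px):
    """Depth uncertainty of a stereo pair at range z_m.

    From z = f*B/d, a disparity error of delta_d pixels maps to
    delta_z ~= z^2 * delta_d / (f * B): error grows quadratically
    with range and shrinks linearly with a wider baseline.
    """
    return (z_m ** 2) * disparity_noise_px / (focal_px * baseline_m)

# With an assumed 1 m baseline, 4000 px focal length, and 0.2 px
# disparity noise, error at 30 m stays at the centimeter level:
err_at_30m = depth_error(z_m=30.0, focal_px=4000.0, baseline_m=1.0,
                         disparity_noise_px=0.2)   # 0.045 m = 4.5 cm
```

This quadratic growth is why wide baselines (the spec table lists tested baselines from 10cm to 3m) matter for long-range accuracy.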

300m+

Detection Range

Dense 3D point cloud coverage

360°

Coverage Area

Full-surround sensing configuration

5cm

Accuracy at 30m

High-precision depth at operational range

Engineered for the world's toughest conditions

Vibration-resilient calibration

Dynamic, frame-by-frame autocalibration corrects stereo alignment continuously under engine vibration and mechanical shock. No manual recalibration required.

Long-range obstacle detection

Dense 3D point clouds at ranges up to 300 meters detect people, vehicles, and debris well ahead of the equipment's travel path.

360-degree coverage

Full-surround sensing configuration supports excavators, drillers, and haulage vehicles with no blind zones.

GPS-independent operation

Camera-based depth sensing requires no satellite signal, enabling reliable autonomy in underground and GPS-denied environments.

Hardened hardware

IP67-rated, solid-state design built for long-term deployment on heavy equipment with no moving parts to fail.

All-condition performance

Reliable in dust, heavy rain, low light, and high-temperature environments. Detects a 12cm object at 200m at 6.8 lux.

The TerraView processing pipeline

TerraView processes every frame through a complete pipeline, turning raw stereo imagery into actionable 3D perception in real time.

01

vibration resilience

Autocalibration

Patented per-frame algorithms correct stereo alignment continuously under vibration and thermal variation, maintaining accuracy through sustained shock and movement with no manual recalibration during operation.

02

depth computation

Stereo matching

Dense disparity maps are computed at ranges up to 300 meters, generating a complete high-resolution 3D point cloud of the surrounding environment on every frame.

03

occupancy mapping

Obstacle detection

GPU-accelerated occupancy grid processing converts the point cloud into real-time free space and obstacle maps, tracking objects with coordinates and movement vectors.

04

output

Complete dataset

Programmatic access to raw depth data, per-pixel per-frame confidence maps, and object lists with location and velocity. Delivered via ROS2, C++, or 10Gb Ethernet.
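The stereo matching and obstacle detection stages above can be sketched end to end. This is a minimal illustration of the general technique, not TerraView's implementation; the camera parameters and grid resolution are assumed values.

```python
import numpy as np

# Minimal sketch of the disparity -> point cloud -> occupancy grid flow.
# Focal length, baseline, and cell size are illustrative assumptions.

def disparity_to_points(disparity, focal_px, baseline_m, cx, cy):
    """Reproject an (H, W) disparity map in pixels to (N, 3) points in meters."""
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                       # zero disparity = no stereo match
    z = focal_px * baseline_m / disparity[valid]
    x = (u[valid] - cx) * z / focal_px
    y = (v[valid] - cy) * z / focal_px
    return np.stack([x, y, z], axis=-1)

def occupancy_grid(points, cell_m=0.5, extent_m=50.0):
    """Mark top-down grid cells (lateral x, forward z) containing any point."""
    n = int(2 * extent_m / cell_m)
    grid = np.zeros((n, n), dtype=bool)
    ix = ((points[:, 0] + extent_m) / cell_m).astype(int)  # lateral offset
    iz = (points[:, 2] / cell_m).astype(int)               # forward range
    ok = (ix >= 0) & (ix < n) & (iz >= 0) & (iz < n)
    grid[iz[ok], ix[ok]] = True
    return grid

disp = np.zeros((4, 4))
disp[2, 2] = 10.0                               # one matched pixel
pts = disparity_to_points(disp, focal_px=800.0, baseline_m=0.5, cx=2.0, cy=2.0)
grid = occupancy_grid(pts, cell_m=1.0, extent_m=50.0)
```

A production system would add temporal filtering, ray-traced free-space updates, and GPU acceleration, but the core mapping from disparity to occupied cells follows this shape.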

Depth Perception That Does Not Drift

Frame-by-frame autocalibration keeps TerraView accurate through continuous vibration, shock, and extreme temperatures -- no manual recalibration, no downtime.


From surface to underground

TerraView is purpose-built for the full range of mining operations, from open-pit haulage to GPS-denied underground environments.

Autonomous haulage

Reliable path planning and obstacle avoidance for autonomous trucks and loaders operating across open-pit sites.

Underground mining automation

3D sensing independent of GPS or LiDAR, purpose-built for confined, GPS-denied underground environments.

Collision avoidance

Detects people, vehicles, and site debris at long range, triggering warnings before equipment travel paths intersect.

Safety monitoring

360-degree 3D awareness around excavators and drillers reduces operator blind spots and supports remote monitoring.

Terrain mapping

Dense point clouds capture site topography and changing ground conditions for planning and situational awareness.


Occupancy tracking

Continuous object tracking with coordinates and movement vectors supports site-wide traffic management and safety enforcement.
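Tracked objects carry coordinates and movement vectors, which is what a collision-avoidance layer needs to predict intersecting travel paths. A minimal sketch of that prediction, assuming relative position and velocity vectors in the vehicle frame (not TerraView's actual API):

```python
import numpy as np

def time_to_closest_approach(rel_pos, rel_vel):
    """Seconds until a tracked object is nearest the vehicle, given its
    relative position (m) and relative velocity (m/s) vectors.

    Minimizing |rel_pos + t * rel_vel| gives t = -(p . v) / (v . v).
    """
    speed_sq = float(np.dot(rel_vel, rel_vel))
    if speed_sq == 0.0:
        return 0.0                      # no relative motion
    t = -float(np.dot(rel_pos, rel_vel)) / speed_sq
    return max(t, 0.0)                  # already past closest approach -> now

# An object 100 m ahead, closing at 10 m/s, is nearest in 10 s:
t_close = time_to_closest_approach(np.array([100.0, 0.0]),
                                   np.array([-10.0, 0.0]))
```

Comparing this time and the miss distance at closest approach against thresholds is one common way to decide when to trigger a warning.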

Technical Specifications

System

Compute

NVIDIA GPU support

Interface

ROS2, C++

Networking

10Gb Ethernet or GMSL2

Coverage area

Up to 360°

Alignment

±3°

Baseline

Tested 10cm to 3m

Camera Resolution

0.5MP – 20MP

Performance

Low light detection

12cm object at 200m @ 6.8 lux

Frames per second

Tested to 20fps

Accuracy

5cm at 30m (0.17%)

Calibration

Every frame on natural scenes

Output format

Depth Map, Point Cloud, BEV

Output metrics

Color, Depth, Velocity

Sensor health

Per-pixel confidence map
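The per-pixel confidence map listed above lets downstream code gate depth output before planning or mapping. A hedged sketch of that pattern, where the array layout and the 0.8 threshold are assumed values for illustration:

```python
import numpy as np

def reliable_depth(depth_m, confidence, min_conf=0.8):
    """Replace low-confidence depth pixels with NaN so downstream
    consumers can skip them. Threshold is an assumed example value."""
    out = np.asarray(depth_m, dtype=float).copy()
    out[np.asarray(confidence) < min_conf] = np.nan
    return out

depth = np.array([[30.0, 31.0],
                  [32.0, 33.0]])
conf = np.array([[0.90, 0.50],
                 [0.95, 0.20]])
masked = reliable_depth(depth, conf)   # pixels below 0.8 become NaN
```

Masking by confidence rather than discarding whole frames preserves usable depth in dust or rain, where only scattered pixels degrade.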