[ nodar sdk ]

Software Development Kit

Download. Integrate. Deploy.

Build high-performance 3D perception solutions in minutes with NODAR's production-ready stereo engine and your own cameras.

Download Your Free 14-Day Trial

NODAR HDK hardware

State of the art automotive lidar vs NODAR SDK

Choose a license or download a free trial and start integrating today.

Production pricing available, with licensing starting as low as $150 per stereo camera.


What's Included

  • Patented calibration and stereo matching technology for unparalleled accuracy and long-range stereo camera sensing

  • Occupancy map and object detection GridDetect software (optional)

  • Precompiled Hammerhead SDK for x86 and ARM

  • NODAR Viewer for depth and point-cloud visualization

  • C++ and Python APIs

  • Full developer documentation

Superhuman vision.
Unmatched precision.

The NODAR SDK transforms any pair of cameras into a high-performance 3D sensor for autonomous perception with industry-leading range, resolution, and reliability.

Inputs

Your Existing Cameras

Left Image

Right Image

NODAR SDK

Autocalibration

Compensating for camera vibrations every frame with subpixel accuracy

Stereo Matching

Geometric calculation of depth for every pixel

Occupancy Mapping

Particle filter and fusion layer

Your Computer

(NVIDIA Orin, x86+NVIDIA GPU)

Outputs

Point Cloud (RGBXYZ)

Depth Maps

Velocity

Rectified Images

Object List

Occupancy Map

Display your outputs in real time
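The Stereo Matching stage above computes depth for every pixel from geometry. As a rough sketch of the underlying relation (the textbook pinhole model, not NODAR's implementation), depth follows Z = f·B/d from focal length f in pixels, baseline B, and disparity d. All values below are illustrative assumptions:

```python
# Toy illustration of stereo depth from disparity (textbook pinhole model,
# not NODAR's implementation). All values below are illustrative assumptions.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 2000 px focal length, 1 m baseline, 10 px disparity
print(depth_from_disparity(2000.0, 1.0, 10.0))  # 200.0
```

Note how depth grows as disparity shrinks, which is why wide baselines and high-resolution cameras extend usable range.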

Download

Integrate

Deploy

Developer Resources

All the resources, tools, and support needed to integrate our cutting-edge stereo vision software into your hardware solutions. Whether you’re building autonomous systems or innovative imaging products, we’re here to help you create the future.

Testimonials

If someone showed us a lidar with such performance, it would be a no-brainer to move forward with integration.

Global Tractor Supplier

In a head-to-head trial with all other multi-camera competitors, NODAR provides the most accurate and only real-time wide-baseline stereo vision system. All other systems compared to NODAR were post-processed due to the high computational load.

Global Automotive OEM

We have been working on stereo for decades and your system is impressive.

Global Automotive Tier 1 Supplier

Applications

Trucks

Ultra long-range, small-object detection: detect a 25 cm object at 305 m, allowing time to stop or avoid.

Passenger Cars

Precise long-range object detection for L3 and above.

Air Mobility

Long-range detection in any direction improves safety.

Heavy Equipment

Resilience to dust and high vibration, and best-in-class performance in low visibility.

Ferry

Robustness with moving horizon and ultra-fine precision at short distances for docking.

Rail

1000m+ range obstruction detection for collision avoidance and mapping.

Last Mile Delivery

Low-cost, flexible camera mounting, low compute and fine precision - perfect for mobile robots.

Robotaxi

Sensor fusion with lidar provides redundancy while enabling safe highway driving.

Agriculture

Resilience to dust and vibration for autonomous operation, and precision for spout alignment.

NODAR SDK Benefits

The NODAR SDK provides accurate, reliable depth perception in real-world conditions including dust, low light, rain, and fog. Powered by patented auto-calibration algorithms, it transforms stereo image inputs from existing cameras into consistent, high-resolution depth data with minimal setup, enabling scalable, cost-effective integration across diverse platforms and use cases.

Accuracy and Range

High accuracy by computing the true depth of every pixel based on stereoscopic parallax rather than crudely interpolating from known objects.

Low Visibility Performance

High performance in difficult conditions such as high-dust, low-light, rain, fog, and direct sunlight.

Consistency of Results

Patented auto-calibration technology actively compensates for camera perturbations due to wind, road shock, vibration, and temperature.

Ruggedness and Reliability

High longer-term reliability due to continuous calibration.

Easy Installation

Quick setup with a single software installer and thorough documentation.

Value

No need for expensive new hardware - the SDK can be used with existing cameras.

System Requirements

Camera support

Tested between 1MP and 8MP, RGB, LWIR, global and rolling shutter

Lenses

Tested between 5 and 180 degrees FOV

Compute hardware

NVIDIA GPU (for example, GeForce RTX 4090, GeForce RTX 3080, Jetson Orin)

Processor

ARM or x86¹

Specifications

Calibration

Continuous (every frame)

Baseline

Tested up to 3m

Field of view

Tested between 5 and 180 degrees

API

ZMQ and ROS2

Precision

0.2 pixel in disparity

Camera alignment tolerance

+/-3°

Stereo matching algorithm

Deterministic signal processing algorithm

Vibration tolerance

10 pixels per frame

1. Software-only technology enables porting to Intel+GPU or other processors
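The quoted 0.2-pixel disparity precision and up-to-3 m tested baseline imply a depth uncertainty that grows with the square of range: differentiating Z = f·B/d gives dZ ≈ Z²·Δd/(f·B). A back-of-envelope sketch (the 2000-pixel focal length is an illustrative assumption, not a NODAR figure):

```python
# Back-of-envelope depth uncertainty implied by the quoted 0.2-pixel
# disparity precision: differentiating Z = f*B/d gives dZ ~= Z^2 * dd / (f*B).
# The 2000 px focal length is an illustrative assumption, not a NODAR spec.

def depth_error_m(range_m: float, focal_px: float = 2000.0,
                  baseline_m: float = 3.0, disp_precision_px: float = 0.2) -> float:
    return (range_m ** 2) * disp_precision_px / (focal_px * baseline_m)

for z in (50, 150, 300):
    print(f"{z} m: +/- {depth_error_m(z):.2f} m")
```

Under these assumed parameters, error stays sub-metre out to roughly 150 m, which is why both baseline and disparity precision matter for long-range sensing.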

Frequently Asked Questions

How is NODAR different?

NODAR’s Hammerhead technology enables unparalleled detection of small objects at large distances by supporting large-baseline mounts (0.5-3m or larger) with independently mounted (no connecting bar), high-resolution cameras. This is accomplished with patented algorithms that automatically calibrate in real time to compensate for misalignments that arise from larger baselines.

Does Hammerhead work in low visibility?

With recent improvements in camera sensors, Hammerhead is effective in low-light conditions, such as city streets or headlight-illuminated scenes at night. We have also tested well against LiDAR in dusty, rainy, and foggy conditions, and Hammerhead technology can also be adapted to work with infrared or thermal cameras.

What is included in Hammerhead’s HDK 2.0?

The HDK includes both hardware and software to evaluate and integrate with Hammerhead. The hardware comprises an Electronic Control Unit (ECU), two cameras, and all necessary connection cables. The software includes applications for demoing Hammerhead, collecting data, calibrating the initial camera setup, and an integration API.

How long does the installation take?

The HDK is designed to work immediately out of the box. You would only need to connect a power source and a display to see Hammerhead technology in action.

How does calibration work?

Stereo cameras are known to lose alignment due to vehicle vibration, temperature fluctuations, and long-term, small component movements. Using unique patented technology, Hammerhead automatically calibrates on every video frame to compensate for these alignment variations.

What mounting configurations are supported for HDK 2.0?

The entire HDK camera assembly can be mounted to a vehicle or structure via tripod mounting holes. Even though the cameras in the HDK are mounted at a fixed baseline, production versions of Hammerhead can be shipped with custom baselines and mounts depending on specific application requirements.

What is the HDK's camera resolution?

As shipped with the HDK, a 5-megapixel resolution is supported. If desired, additional resolutions can also be supported with different camera hardware.

What computing platforms are supported by the SDK?

The SDK is provided as prebuilt .deb packages for the following configurations:

  • Ubuntu 20.04
    CUDA 11.4 (AMD64, ARM64)
    CUDA 12.0 (AMD64)

  • Ubuntu 22.04
    CUDA 12.1, 12.2, 12.3, 12.6, 13.0 (AMD64)
    CUDA 12.2, 12.6 (ARM64)

  • Ubuntu 24.04
    CUDA 12.9, 13.0 (AMD64)

Typical performance (5.4 MP images)

  • Jetson Orin AGX: ~5–10 fps

  • NVIDIA RTX A5500 (Laptop): ~15–20 fps

  • NVIDIA GeForce RTX 4090 (Desktop): ~20–25 fps

Performance varies based on configuration, including image resolution (1–8 MP), bit depth (8-bit vs. 16-bit), and whether optional modules such as GridDetect are enabled.

As a general guideline:

  • Modern laptops typically achieve ~15–20 fps

  • Desktop systems with high-end GPUs typically achieve ~20–30 fps

What are the minimum recommended compute specs for the SDK?

We currently require an NVIDIA GPU. Our binaries rely on CUDA and target Ubuntu 20.04, 22.04, and 24.04 for ARM and AMD64 (Intel and AMD CPUs). For a complete list of supported systems, please visit https://docs.nodarsensor.net/index.html#supported-systems

What is Hammerhead's maximum supported distance?

Hammerhead can detect objects at large distances. With the 16mm (30° field of view) lens option, humans are clearly detectable at 500m. Even with the wider 7mm (65° field-of-view) lens option, humans are clearly detectable at 200m.
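The ranges above follow from how many pixels a target subtends through a given lens. A rough sketch of that geometry (the 2448-pixel image width, typical of a 5 MP sensor, is an assumption for illustration, not a NODAR spec):

```python
import math

# Rough pixels-on-target estimate behind the quoted detection ranges.
# The 2448 px image width (typical of a 5 MP sensor) is an assumption.

def pixels_on_target(target_m: float, range_m: float,
                     fov_deg: float, width_px: float = 2448.0) -> float:
    angle_deg = math.degrees(math.atan2(target_m, range_m))
    return angle_deg * width_px / fov_deg

# 1.8 m human with the 30-degree (16 mm) lens at 500 m,
# and with the 65-degree (7 mm) lens at 200 m:
print(round(pixels_on_target(1.8, 500, 30), 1))
print(round(pixels_on_target(1.8, 200, 65), 1))
```

Both configurations put a similar number of pixels on the target, which is consistent with the narrow lens reaching farther while the wide lens trades range for coverage.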

How is software integration done for HDK 2.0?

The HDK ships with ROS2 and C++ APIs, along with thorough documentation at https://github.com/nodarhub. Other integration options can be provided as a custom effort.

What type of support is available?

The HDK includes access to a support portal and a dedicated support email address. We aim to reply to critical issues within 24 hours. The SDK includes 12 months of software updates and full developer documentation. For premium support options, contact support@nodarsensor.com.

Can I input stereo data we’ve collected using other hardware?

Yes. The SDK is camera-agnostic and processes stereo image pairs regardless of the capture hardware; best results come from synchronized, uncompressed images from cameras with overlapping fields of view.

Which computing platforms are supported by HDK 2.0?

The HDK includes an NVIDIA Orin processing unit. Hammerhead can also be ported to other processors as a custom project.

What is included in NODAR’s SDK?

The SDK includes all the software needed to evaluate and integrate with Hammerhead. The software includes applications for demoing Hammerhead, collecting data, calibrating the initial camera setup, and C++ and Python APIs. The NODAR Viewer is provided for depth and point-cloud visualization. Our GridDetect occupancy map software is available as an add-on option.

Why should I include GridDetect with the SDK?

GridDetect is a high-performance, GPU-accelerated implementation of a deterministic particle filter algorithm for occupancy grid creation. It converts dense 3D point-cloud data into a robust, real-time understanding of free space and obstacles. 

In practical terms, GridDetect adds three key capabilities to the SDK:

  • High-throughput performance
    Processes up to 100 million 3D points per second, enabling real-time operation with dense, long-range stereo data.

  • Robust ground removal
    Effectively separates ground from obstacles, supporting detection of objects as small as a 15 cm brick at 150 m on a highway, as well as subtle features like emerging crops on uneven agricultural terrain.

  • Terrain-aware reasoning
    Correctly handles slopes, hills, and ramps without misclassifying them as obstacles.

Together, these capabilities allow developers to move beyond raw depth data and achieve stable, long-range obstacle detection suitable for automotive, agricultural, and industrial autonomy applications.
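To make the idea of an occupancy grid concrete, here is a deliberately simplified sketch that bins 3D points into ground-plane cells and marks a cell occupied when enough points rise above a height threshold. It is not NODAR's particle-filter GridDetect algorithm, only an illustration of the input and output shape:

```python
# Toy occupancy grid: bin 3D points into ground-plane cells and mark a cell
# occupied if enough points rise above a height threshold. This is a
# simplification for illustration, not NODAR's particle-filter GridDetect.
from collections import defaultdict

def occupancy_grid(points, cell_m=0.5, min_height_m=0.15, min_points=3):
    """points: iterable of (x, y, z) in metres; returns set of occupied (ix, iy) cells."""
    counts = defaultdict(int)
    for x, y, z in points:
        if z > min_height_m:                      # crude "ground removal"
            counts[(int(x // cell_m), int(y // cell_m))] += 1
    return {cell for cell, n in counts.items() if n >= min_points}

# Three points above 15 cm cluster in one cell -> that cell is occupied
pts = [(10.2, 0.1, 0.3), (10.3, 0.2, 0.4), (10.1, 0.3, 0.5), (5.0, 0.0, 0.02)]
print(occupancy_grid(pts))  # {(20, 0)}
```

A production system like GridDetect replaces the fixed height threshold with terrain-aware reasoning so slopes and ramps are not misclassified as obstacles.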

What camera models are supported by the SDK?

The NODAR SDK is camera-agnostic. We have tested rolling shutter and global shutter RGB cameras, LWIR cameras, and resolutions from 1MP to 8MP.
While any camera is compatible, optimal performance is achieved with synchronized cameras that have overlapping fields of view and provide uncompressed images. The system supports native resolutions up to 8MP, but this is subject to available GPU memory; higher resolutions will require downsampling.
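Overlapping fields of view matter because parallel-mounted stereo cameras only share a view beyond a minimum distance. For a baseline B and horizontal field of view θ, midline overlap begins at roughly d = (B/2)/tan(θ/2). A small sketch of this geometry (illustrative, not a NODAR requirement):

```python
import math

# Distance at which parallel stereo cameras' fields of view begin to
# overlap on the midline: d = (B / 2) / tan(FOV / 2). Illustrative
# geometry, not a NODAR requirement.

def min_overlap_distance_m(baseline_m: float, fov_deg: float) -> float:
    return (baseline_m / 2) / math.tan(math.radians(fov_deg) / 2)

# 3 m baseline with a 30-degree lens:
print(round(min_overlap_distance_m(3.0, 30), 2))  # ~5.6 m
```

Wider lenses or toeing the cameras inward shrinks this blind region, which is one reason lens choice depends on the application's minimum working distance.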

Does the SDK output classify specific objects, like a car, or the occupied grid coordinates and depth?

The SDK outputs occupied grid coordinates and depth rather than object classes. For object classes (car, motorcycle, etc.), our customers have had success applying YOLO to the rectified images.

