

Non-Scanning 3D Imager for Autonomous Rendezvous and Docking Project

Published By National Aeronautics and Space Administration

Issued over 9 years ago

US

Summary

Type of release
a one-off release of a single dataset

Data Licence
Not Applicable

Content Licence
Creative Commons CCZero

Verification
automatically awarded

Description

<p>One of the technology pushes in the Human Exploration and Operations (HEO) line of business (LOB) is to develop low-latency telerobotics / robotic servicing technologies that allow for “multiple modes of control” by humans in proximity (when they are in orbit) and by human teams on Earth (like the Mars Exploration Rovers, for robotic command of a complex, obstacle-ridden environment)... A capability that persists after humans leave orbit and allows for real-time situational awareness would help improve robot safety.  High-quality imaging sensors with small size, weight, and power (SWaP), long- to short-range (km to meters) scanning capability, and cm-scale resolution are necessary for proximity sensing, situational awareness, and obstacle avoidance.   We propose to develop a non-scanning flash lidar 3D imaging system with the goal of low SWaP, using high-TRL components and leveraging substantial in-house hardware.</p><p>In this program, a laser with beam-shaping optics generates a 32x32 pattern, which matches the sensor pixel layout on the Geiger-Mode Avalanche Photodetector (GM-APD) camera.  This grid pattern is then used to illuminate the target.   A start pulse from the laser triggers the timer on the camera; as the reflected photons from the target reach the camera, the time-of-flight (TOF) information for each pixel is captured.  This TOF information is used to form a topographic image of the target.  The GM-APD camera is a 2<sup>nd</sup>-generation camera from Spectrolab with a much-improved sensor, offering a factor-of-16 reduction in dark count rate (now 5 kHz).  This upgrade allows for a more sensitive detector.  
Other important features of this 2nd generation camera are:</p><ul><li>Single Photon Sensitivity</li><li>32x32 Geiger-mode Focal Plane Array</li><li>User-defined Range Gate</li><li>User-defined Windowing (2x2, 4x4, etc.)</li><li>Non-Uniform Bias (NUB) Correction for Improved Efficiency</li><li>Camera Link</li><li>Custom Processor</li><li>Custom Image Processing Software</li></ul><p>Several of our key technologies enable this development (in contrast to presently available systems):</p><ol><li>Use of a high-efficiency (> 50%) high-power, short-pulse semiconductor monolithic master-oscillator-power-amplifier laser (vs. present < 10% solid-state crystal lasers);</li><li>Photon-counting-detector-based time-of-flight camera (vs. present poor-sensitivity PIN arrays);</li><li>Efficient optics that allow low-cost uniform array generation with one-to-one laser-spot-to-detector-pixel imaging (vs. the present non-uniform illumination).</li></ol><p>We will design, build, and test an integrated non-scanning eye-safe 3D imaging system. We will conduct laboratory and open-path 3D imaging experiments using our 1.5 km cell-phone-tower test range. We will work with the Goddard robotic servicing team and use their test facility to demonstrate real-time robotic operations.</p>
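
The start-pulse/TOF scheme described above reduces, per pixel, to the range equation r = c·t/2 (the round-trip time halved). A minimal sketch of that conversion for a 32x32 frame is below; the 1 ns timer-bin width and the `tof_counts_to_range` helper are illustrative assumptions, not values or interfaces from the project description.

```python
# Hedged sketch: converting per-pixel time-of-flight (TOF) timer counts
# from a 32x32 Geiger-mode APD camera into a per-pixel range map.
# BIN_S (1 ns timer resolution) is an assumed, illustrative value.

C_M_PER_S = 299_792_458.0   # speed of light in vacuum (m/s)
BIN_S = 1e-9                # assumed timer resolution: 1 ns per count

def tof_counts_to_range(counts):
    """Map a grid of timer counts to ranges in meters.

    Each count is the number of timer bins between the laser start pulse
    and the first detected photon; range = c * t / 2 because the photon
    travels to the target and back. Pixels with no detection (None) stay None.
    """
    return [
        [None if c is None else C_M_PER_S * (c * BIN_S) / 2.0 for c in row]
        for row in counts
    ]

# Example: a flat target whose round-trip time is 100 ns (100 counts)
# sits roughly 15 m away.
frame = [[100 for _ in range(32)] for _ in range(32)]
ranges = tof_counts_to_range(frame)
print(round(ranges[0][0], 2))  # → 14.99
```

In practice a Geiger-mode sensor reports at most one photon event per pixel per gate, so many frames are accumulated and the per-pixel TOF statistics (after subtracting dark counts) form the topographic image.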