Published By National Aeronautics and Space Administration
Issued over 9 years ago
Summary
Description
<p>Initial Innovation Charge Account (ICA) Investigation:</p><p>Preventing collisions is the first priority for safe operation of the Space Station Remote Manipulator System (SSRMS).&nbsp; This depends on the ability of the crew and flight controllers to verify that enough clearance exists between the SSRMS, its payload, and surrounding structure.&nbsp; In the plan, train, and fly stages of each mission, significant time is spent developing, documenting, and executing a camera plan that allows each portion of the SSRMS trajectory to be monitored.&nbsp; This time could be decreased, and operational situational awareness increased, by using an array of cameras mounted around the SSRMS boom and pointed along it.&nbsp; The output of these cameras could be stitched together into one composite view that provides clearance monitoring 360&deg; around the boom.&nbsp; Further, this technology could be used in any application where it is desirable to monitor proximity on two or more sides of an object, such as surgery, tele-robotics, and deep-sea exploration.</p><p>This investigation will ask operators (crew and flight controllers) to compare clearance monitoring of a sample trajectory using conventional external camera sources versus a stitched video presentation from a camera array.&nbsp; A test plan, script, and scoring method will be used to determine whether stitched camera arrays lend themselves to clearance monitoring.&nbsp; The project investigator researched the technology, including hardware and software, required to perform video stitching and identified an approach that can be used for operator evaluation in the ICA project.</p><p>Initial ICA project results:</p><p>A cadre of robotics professionals from JSC Robotics Operations and the Astronaut Office participated in a benchmarking effort to quantify efficiency and safety metrics both with and without the use of a stitched camera array.
A modified Cooper-Harper scale was used to determine operator workload.&nbsp; Other metrics included the time required to perform the task, whether motion was stopped due to a lack of clearance views, and whether contact was made with external structure.&nbsp; Results showed reduced operator workload, faster task completion, and reduced contact with external structure.&nbsp; Additionally, the technology was presented to the JSC community at Innovation Day 2012, where it won the People&#39;s Choice Award.</p><p>Second Phase:</p><p>Rearranging image pixels from multiple cameras to accomplish a perspective shift is computationally expensive.&nbsp; In the last decade, advances in CPU performance and direct-to-memory image capture methods have improved the frame rate and latency associated with video stitching.&nbsp; In the previous phase (FY &rsquo;12 ICA, People&rsquo;s Choice Winner), the collaborator was able to achieve 10 frames per second with less than a second of latency using off-the-shelf CPU and camera hardware.
The purpose of Phase 2 is to demonstrate the technology on a larger vehicle, the Multi-Mission Space Exploration Vehicle (MMSEV), using a high-bandwidth (GigE) network, increased CPU/GPU resources, and high-performance cameras.</p><p>Second Phase Results:</p><p>Ten video cameras (the minimum required to obtain coverage around the vehicle while providing enough image overlap) were placed around the upper surface of the MMSEV.&nbsp; The video streams were piped to an on-board high-end PC, where software written in MATLAB performed the perspective shifts and homographic alignment.&nbsp; The resulting single view was displayed in a graphical user interface (GUI) that allowed the operator to see the composite &lsquo;bird&rsquo;s-eye&rsquo; view or zoom in on the view from a particular camera when clearance was a concern.&nbsp; The MMSEV was maneuvered around the simulated Martian landscape at JSC known as the Rock Pile.&nbsp; To date, the maximum frame rate achieved is 2 frames per second.&nbsp; To increase the frame rate, current efforts are focused on transferring the homographic algorithms to a Xilinx field-programmable gate array (FPGA) processor.</p>
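The perspective-shift and homographic-alignment step described above can be sketched in code. The following is a minimal, hypothetical illustration in Python/NumPy (the project's actual implementation was in MATLAB), assuming each camera's 3&times;3 homography to the common bird's-eye frame is already known from calibration; the function and variable names are illustrative, not from the project:

```python
import numpy as np

def apply_homography(H, pts):
    """Map (N, 2) pixel coordinates through a 3x3 homography H,
    including the perspective divide."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # perspective divide

def stitch(canvas_shape, images, homographies):
    """Inverse-map each camera image into a shared composite canvas.

    Each homography maps camera pixel coordinates into canvas
    coordinates; we invert it so every canvas pixel looks up its
    source pixel (nearest neighbor). Where cameras overlap, the
    last camera in the list wins -- a real stitcher would blend.
    """
    canvas = np.zeros(canvas_shape)
    ys, xs = np.mgrid[0:canvas_shape[0], 0:canvas_shape[1]]
    canvas_pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    flat = canvas.ravel()  # view into canvas; writes land in canvas
    for img, H in zip(images, homographies):
        src = apply_homography(np.linalg.inv(H), canvas_pts)
        sx = np.rint(src[:, 0]).astype(int)
        sy = np.rint(src[:, 1]).astype(int)
        ok = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
        flat[ok] = img[sy[ok], sx[ok]]
    return canvas
```

The dominant cost is the per-pixel coordinate remapping, which is why the text describes it as computationally expensive and why a GPU or FPGA, which can remap many pixels in parallel, is a natural target for speeding it up.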