NASA Using Machine Learning To Make Space Safer

By John Oncea, Editor

Beyond making missions safer, this tech also improves the experience for engineers back on Earth, helping them understand the complexities of spaceflight environments and how objects interact and relate to each other.
I started watching Apple TV’s Invasion the other day. In it, Earth is visited by an alien species that threatens humanity’s existence. Events unfold in real time through the eyes of five ordinary people across the globe as they struggle to make sense of the chaos unfolding around them.
It’s … fine. Or, as Roberto Pilla writes on Google Review, “As someone who doesn’t normally watch sci-fi this has been a fantastic introduction because of the human element. I was in the mood for a series not just about an alien invasion, but about what that would look like to everyday people across the world. Invasion achieves this with its character building. Most complaints about the slower pace fail to offer up any other way to create characters who we can invest in, or whose lives we can understand both pre- and post-invasion.”
Anyway, in “Orion,” the third episode of season one, one of the characters, Mitsuki, watches video of an incident that took place aboard a Japanese space shuttle. The footage is choppy, making it difficult for her to figure out what is going on.
I won’t tell you any more in case you want to check the show out, but the difficulty of quickly identifying and analyzing the video and images sent back by cameras guiding robots, inspecting spacecraft, and navigating distant surfaces doesn’t exist only in the minds of screenwriters.
For instance, during the Apollo 13 mission, ground teams relied on limited real-time telemetry data from the spacecraft to understand the nature of the explosion that crippled the mission. Analyzing the available data and images quickly was crucial to diagnosing the issue, reconfiguring the spacecraft, and developing a strategy to bring the astronauts back safely. The limited resolution of data from onboard cameras and sensors made that task all the more difficult.
Then there’s the Columbia Space Shuttle disaster. During the STS-107 mission, foam insulation from the external fuel tank struck the shuttle’s wing on launch. Engineers analyzing launch video debated the severity of the damage but lacked clear real-time imagery of the affected area. The inability to assess the damage in detail contributed to the loss of the shuttle and its crew during reentry.
Today, technology is being used at NASA’s Johnson Space Center to process images and videos in real time using state-of-the-art machine learning tools, identifying important spacecraft hardware and other objects. With more informative visuals, astronauts can use the NASA Object Detection System to make faster decisions with better information, whether they’re navigating the surface of Mars or fixing equipment in orbit.
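NASA hasn’t published the system’s internals, but the general pattern, running a pretrained detector over each incoming frame and reporting labeled objects with confidence scores, can be sketched in a few lines of Python. The model, class labels, score threshold, and file name below are illustrative assumptions, not the actual NASA Object Detection System.

```python
# Illustrative sketch only -- not the actual NASA Object Detection System.
# Runs a pretrained detector over downlinked video frames and prints what it sees.
import cv2
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
labels = weights.meta["categories"]  # generic COCO classes; a flight system would
                                     # be trained on NASA-specific hardware instead

def detect(frame_bgr, score_threshold=0.6):
    """Return (label, score, box) tuples for a single video frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        output = model([tensor])[0]
    return [
        (labels[int(lbl)], float(score), box.tolist())
        for box, lbl, score in zip(output["boxes"], output["labels"], output["scores"])
        if score >= score_threshold
    ]

cap = cv2.VideoCapture("downlink_clip.mp4")  # hypothetical file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for label, score, box in detect(frame):
        print(f"{label} ({score:.2f}) at {box}")
cap.release()
```

In a flight setting, the same loop would feed detections to crew displays or a metadata catalog rather than printing them to a console.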
The Challenges Of Analyzing Space Video
Real-time interpretation and analysis of video and images from space can be challenging for several reasons, starting with distance and resolution. Imagery captured from space covers vast distances, resulting in lower resolution and less detail than ground-based observations and making it difficult to discern fine features or distinguish between objects. In addition, atmospheric disturbances, sensor noise, and the distance of space-based sensors (like satellites or probes) from Earth can complicate real-time analysis, according to viso.ai. These variations can affect the accuracy and reliability of automated detection systems, and the distance introduces signal delays; real-time analysis must account for both, which affects decision-making and responsiveness.
Then there’s the complexity of the data. Space imagery often contains complex patterns and phenomena that require specialized knowledge to interpret correctly. Features like cloud cover, atmospheric effects, and varying lighting conditions can obscure or distort the information being captured, and correcting for them requires preprocessing that can be time-consuming and computationally intensive, making real-time analysis difficult.
Beyond that, real-time analysis of large volumes of data requires powerful computational resources: processing algorithms need to manage vast amounts of data quickly and efficiently to extract meaningful insights. And to be fully interpreted, space imagery often needs to be integrated with other sources of information, such as historical data, weather patterns, or models of celestial mechanics, which adds complexity and requires sophisticated analysis techniques.
Finally, there’s the human factor. Interpreting space imagery can involve subjective judgments, and training and experience play a crucial role in analyzing the data correctly and making informed decisions based on it.
These challenges highlight the complexity and importance of developing systems and techniques for real-time analysis of space-based imagery and video data. One of those developments is machine learning.
How Machine Learning Is Helping
NASA’s effort to enhance operational imagery and video for human exploration, according to TechPort, takes “a multifaceted approach toward enhancing imagery and video analysis capabilities through automated techniques, including artificial intelligence (AI) and machine learning (ML). There has been a recent uptick in activity around machine learning-powered visual data analysis and processing, and we aim to leverage this toward NASA’s unique use cases.”
Among the project’s desired outcomes are enhancing metadata through ML-based labeling (including object detection, segmentation, and text detection and recognition), training models to detect NASA data, and providing a fine-grained search tool for imagery and video.
In addition, NASA hopes to build out a framework for ML training with few training samples, using techniques like copy-and-paste data augmentation and few-shot learning; to enable video analysis by tracking objects across frames; and to reduce the downlink burden through machine learning-based compression and reconstruction.
The final three goals are to examine image quality metrics and decide whether they can determine relative quality rankings of images, to explore anomaly detection via ML, and to examine whether these models can be ported to and used on flight-like hardware.
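The image-quality question can be probed with even a very simple no-reference metric. As a rough illustration (not NASA’s chosen measure), the sketch below ranks frames by variance of the Laplacian, a common sharpness proxy; the directory name is hypothetical.

```python
# Rank images by a simple no-reference sharpness metric: variance of the Laplacian.
# Higher variance roughly means more fine detail, i.e., a sharper frame.
import glob
import cv2

def sharpness(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

frames = glob.glob("eva_frames/*.jpg")  # hypothetical image directory
for path in sorted(frames, key=sharpness, reverse=True)[:5]:
    print(f"{sharpness(path):8.1f}  {path}")
```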
“These advancements in ML-enabled processing will increase efficiency across imagery analysis workflows,” writes TechPort. “In particular, ML metadata like ‘what objects are in this frame’ and ‘what text is detected in this photo’ can help flight controllers and mission operations personnel quickly query the most recent time that a tool was used by a crew member, the last time a tagged handrail or module was visible in an image, or gather all of the imagery containing ‘angled tongs’ from specific date ranges or EVAs. Additionally, this metadata can be added in near-real-time and can reduce the burden on human catalogers.”
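To make that flight-controller use case concrete, here is a toy sketch of how such per-frame metadata might be stored and queried. The records, field names, and the last_seen helper are invented for illustration; a real catalog would live in a searchable database.

```python
# Toy example of querying ML-generated image metadata; all records are invented.
from datetime import datetime

catalog = [
    {"frame": "iss_045123.jpg", "time": datetime(2024, 3, 2, 14, 7),
     "objects": ["angled tongs", "handrail"], "text": ["EXP 70"]},
    {"frame": "iss_045981.jpg", "time": datetime(2024, 3, 5, 9, 41),
     "objects": ["pistol grip tool"], "text": []},
]

def last_seen(object_name):
    """Most recent frame in which the detector labeled `object_name`."""
    hits = [rec for rec in catalog if object_name in rec["objects"]]
    return max(hits, key=lambda rec: rec["time"], default=None)

hit = last_seen("angled tongs")
if hit:
    print(f"Last seen in {hit['frame']} at {hit['time']:%Y-%m-%d %H:%M}")
```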
NASA believes that exploring techniques for training on specific, rare data (with few training samples) will allow it to prepare models with pre-flight imagery and video for immediate use once hardware is in space. That capability keeps models up to date with the latest hardware on the station or other missions and gives teams fast access to searchable object imagery without having to wait to train a model on flight data.
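As a rough illustration of the copy-and-paste augmentation mentioned above, the sketch below composites a cut-out image of a tool onto varied background frames to manufacture labeled training examples before any flight imagery exists. The file names and the paste_object helper are hypothetical.

```python
# Sketch of copy-and-paste data augmentation: paste an object cutout onto
# background scenes at random positions and scales to synthesize labeled examples.
import random
from PIL import Image

def paste_object(background, cutout, scale_range=(0.2, 0.5)):
    """Composite `cutout` onto a copy of `background`; return the image and
    the bounding box of the pasted object (used as the detection label)."""
    bg = background.convert("RGB")
    scale = random.uniform(*scale_range)
    w = int(bg.width * scale)
    h = max(1, int(cutout.height * w / cutout.width))  # keep aspect ratio
    obj = cutout.convert("RGBA").resize((w, h))
    x = random.randint(0, max(0, bg.width - w))
    y = random.randint(0, max(0, bg.height - h))
    bg.paste(obj, (x, y), obj)  # the cutout's alpha channel acts as the mask
    return bg, (x, y, x + w, y + h)

tool = Image.open("angled_tongs_cutout.png")        # hypothetical cutout image
for i in range(100):
    scene = Image.open(f"cabin_frame_{i:03d}.jpg")  # hypothetical backgrounds
    sample, bbox = paste_object(scene, tool)
    sample.save(f"augmented_{i:03d}.jpg")           # bbox saved alongside as the label
```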
Other Uses Of Machine Learning And AI By NASA
NASA’s use of machine learning isn’t necessarily new. The James Webb Space Telescope (JWST) is already utilizing AI-powered software to remove noise from images of remote galaxies, enabling clearer and more detailed observations.
Future uses of the technology include the Nancy Grace Roman Space Telescope, set to launch by May 2027. AI will be used to process an unprecedented volume of infrared and optical data, and Roman is expected to complete observations in a year that would take Hubble or JWST around a thousand years.
In addition, AI and ML algorithms are being developed to create an alert system for Roman, flagging and tracking phenomena like exploding stars. This system is crucial for filtering through the massive amount of data the telescope will generate.
Beyond that, NASA is exploring the use of AI technologies to interpret data from missions like TESS (Transiting Exoplanet Survey Satellite) to aid in the search for exoplanets and potential alien life, as well as on its Advanced Data Analytics Platform (ADAPT) to analyze the growing volume of Earth science imagery, estimated at 100 petabytes, according to the NASA Center for Climate Simulation.
One last way NASA is using AI and machine learning is in the creation of a simulated universe called OpenUniverse, which uses supercomputers and AI to generate over a million synthetic images. This dataset will help scientists prepare for and optimize observations from upcoming telescopes like Roman, Rubin, and Euclid.
“We used a supercomputer to create a synthetic universe and simulated billions of years of evolution, tracing every photon’s path from each cosmic object to Roman’s detectors,” said Michael Troxel, an associate professor of physics at Duke University in Durham, NC, who led the simulation campaign. “This is the largest, deepest, most realistic synthetic survey of a mock universe available today.”
These advancements demonstrate NASA's commitment to integrating AI and machine learning into its space observation and data analysis processes, significantly enhancing its ability to interpret and utilize space imagery and video data.