
Episode 20: MITRE on The International Space Station

By Sonya Burroughs


Show Notes

Stefan Doucette has worked on MITRE research that focuses on neuromorphic sensors on the International Space Station.

Transcript


Paul: [00:00:00] Here’s the analogy. You try to get to work every day, going from where you live to your work, and you’re going to drive your car. The way it is in space today, if I make that an analogy to you driving to work, is: you get on the road, and every minute you can only open your eyes for one second, and then you have to close them again and wait until the next minute. Do you think you would get to work without either being hit or hitting someone else? That is the challenge we have in space. We just don’t have enough eyes in space, and the whole point, at least from a military point of view, of the basic things we do is understanding what’s up there and where it is going.

Is it a semi next to you? Is it a motorcycle next to you? That’s understanding: what is the object? Not just where is it and where is it going, but what’s its intent? Is it a threat to you or not? Is there an erratic driver around you that you need to get away from? In space, we have threats out there. Understanding what’s up there, where it’s going, [00:01:00] characterizing it, and the intent is key to space warfighting.

Sonya: Hello! Welcome to MITRE’s Tech Futures Podcast. I’m your host, Sonya Burroughs. At MITRE, we offer a unique vantage point and objective insights that we share in the public interest. And in this podcast series, we showcase emerging technologies that will affect the government and our nation in the future. Today, we’re talking about MITRE on ISS.

Sounds fascinating, right?

Before we begin, I would like to mention that this podcast is made possible by MITRE’s independent research and development program, which funds projects that address the critical problems and priorities of our government sponsors. We do that through applied research that reflects our sponsors’ near-, mid-, and far-term research needs. Now, without further ado, I bring to you MITRE’s Tech Futures Podcast, episode number 20: “MITRE on ISS.”

We just heard about the need for and importance of eyes in [00:02:00] space, from the clever analogy about closing and opening your eyes in one-minute intervals while driving. Definitely don’t try that. However, it does give us clear insight into the importance of more eyes in space.

And we’re going to hear from a researcher who’s doing just that, putting more eyes in space. Stefan Doucette leads research in rapid prototyping and analysis. His research focuses on neuromorphic sensors on the International Space Station. The International Space Station, also known as the ISS, is a spacecraft in orbit around Earth. Let’s hear more from him about these new eyes in space.

Stefan: We had a couple of neuromorphic sensors. We were putting them on telescopes and taking them to firing ranges and doing stuff on the ground with them. It was a logical extension to ask what could be done if these were in space. We became aware of what SPARC, the Space Physics and Atmospheric Research Center at the Air [00:03:00] Force Academy, was planning on doing, which was building a sensor package that had two of these as the main thrust of the research and installing them on the International Space Station.

Sonya: Wow, neuromorphic sensors on the International Space Station. That is impressive. Neuromorphic sensors work similarly to biological systems like our eyes, able to detect events quickly. Let’s dig into some possibilities of neuromorphic sensors. What can they be used for?

Stefan: The neuromorphic sensors themselves aren’t particularly sensitive. That’s not their strength; high dynamic range and temporal resolution is. So a natural target for these sensors is things that happen quickly, and maybe happen over long periods of time. A natural pairing was to put a neuromorphic sensor on a telescope, look into space, and see what was possible. We attached these to a telescope and found that the high [00:04:00] dynamic range of these sensors allowed us to observe satellites during the day, when typically you would not be able to observe them in visible wavelengths. Because of their high temporal resolution, we also took a couple of these sensors to a firing range and set up a field experiment to capture bullets in flight. Aside from that, staring at complicated things with these sensors was interesting. We had some data taken around dusk: we pointed the sensor up at the sky and were able to capture everything that was flying, which included bugs and bats, and interestingly, the flight signatures of these different objects were immediately evident.

Sonya: Neuromorphic sensors differ from traditional sensors in that they can sense fast changes over time and process them quickly. Let’s turn to a space expert to give us more insight on sensors in space. In the open we heard from Paul Hartman on the importance of eyes in space. Paul is the chief engineer in our [00:05:00] Space Warfighting Division at MITRE. He’ll explain the need for this neuromorphic sensor research.

Paul: This whole neuromorphic thing is neat because it’s not a time-based sensing system, it’s an event-based system, which in my mind is really critical to be able to identify things quickly, particularly in space. Objects move tens of thousands of miles an hour, so if you have a little soda-straw sensor beam that’s looking out and something goes across it, with most of our sensors today, which are time-based in how they do imaging, you’re as likely to miss it as you are to collect on it. These kinds of neuromorphic systems will really help us capture events better and hopefully in more detail. The phenomenology is a whole different thing, but that’s why we’re doing the research: to figure that out, figure out where it’s most useful, figure out what sponsors might be interested in it, and try to proliferate that and improve our [00:06:00] sensing domain.

Sonya: I appreciate that explanation of the need for sensors in space. It’s great to see that there are legitimate uses for neuromorphic sensors in space. For those of us without a background in sensors, let’s get the breakdown on how neuromorphic sensors work.

Stefan: Like a traditional camera, there’s a photo sensor at each pixel, but instead of reading the magnitude of brightness at each pixel, it detects changes of brightness. That’s done by sampling the brightness at each pixel and comparing previous values to the current sample. Essentially, if that comparison crosses a threshold, there’s an increase or decrease of brightness. Then it sends out a packet of information that says what its pixel location is, X and Y, and what its change of brightness was, whether it got brighter or darker. It doesn’t actually tell you how much brighter or darker it is.

Once it sends out that information, the threshold is reset to the new value, and [00:07:00] then it continues that cycle. Since it’s doing that at a hardware level, not through image processing, it’s able to do it very quickly. The comparing happens at a very high rate, which is what allows it to be effectively a high-speed camera with high temporal resolution.
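To make that per-pixel cycle concrete, here is a minimal Python sketch that simulates event generation from a stack of ordinary intensity frames. It is an illustrative model only, not the sensor’s actual hardware logic: the function name, the log-brightness representation, and the 0.15 threshold are all assumptions for the sketch.

```python
import numpy as np

def generate_events(frames, timestamps, threshold=0.15):
    """Simulate event generation from a stack of intensity frames.

    Each pixel remembers the log-brightness at its last event and
    emits an event whenever the current log-brightness differs from
    that reference by more than `threshold`. This is a simplified,
    frame-sampled model of the per-pixel comparator hardware.
    """
    log_frames = np.log(frames.astype(np.float64) + 1e-6)
    reference = log_frames[0].copy()           # per-pixel reset values
    events = []                                # (t, x, y, polarity)
    for t, frame in zip(timestamps[1:], log_frames[1:]):
        diff = frame - reference
        fired = np.abs(diff) > threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            # Polarity says only brighter (+1) or darker (-1), not how much.
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, x, y, polarity))
        reference[fired] = frame[fired]        # threshold resets to the new value
    return events
```

Note that each event carries only a polarity, brighter or darker, matching Stefan’s point that the sensor never reports how much the brightness changed.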

Sonya: This won’t be the last time you hear Stefan compare neuromorphic sensors to cameras. It makes you curious about what’s going on under the hood. What kind of data is coming from these sensors?

Stefan: The packet of information that’s sent out is an X and Y location, the change of brightness, and a fairly high-resolution timestamp. Because each pixel has a couple of comparators, it’s reporting out these events when they happen, not when you tell a camera to expose. With a traditional camera, however many pixels there are, each one of those is going to expose for a given exposure time at a given gain level, and that information gets moved off of the chip. Then you get [00:08:00] to have your image.

With neuromorphic sensors, that movement of data is initiated by what happens in the scene. In a perfect system with no noise, if there’s no change in the scene, then there’s no data generated. The amount of data generated is proportional to the activity in the scene; in this case, the activity is specifically change of brightness. What’s streamed off of the chip is just a list of events: change of brightness, pixel location, and time. They’re streamed at whatever rate is appropriate for their generation. What you then have, whether it’s live or later, is a table of information, and that table is very different from what you would expect from a camera, which again is just an X and Y array with intensity levels.

Making sense of that table is part of the challenge of our research, and it is challenging the community. You can take that table, where each [00:09:00] pixel is likely firing at different times, convert it into an image, make a video out of it, and theoretically run traditional image processing techniques on it, but you would be losing the temporal resolution that is one of the benefits of these sensors to begin with.
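As a rough illustration of that conversion and its cost, the following Python sketch bins an event table into fixed-rate frames that traditional image processing could consume. The column layout (timestamp in microseconds, x, y, polarity) and the 10 ms bin width are assumptions for the sketch, not the ISS package’s actual format.

```python
import numpy as np

def events_to_frames(events, width, height, bin_ms=10.0):
    """Accumulate an event table into fixed-rate frames.

    `events` is an array of rows (t_us, x, y, polarity). Binning the
    events into fixed windows lets traditional image processing run on
    the result, but all timing detail finer than the bin is discarded.
    """
    events = np.asarray(events)
    t_us = events[:, 0]
    bins = ((t_us - t_us.min()) / (bin_ms * 1000.0)).astype(int)
    frames = np.zeros((bins.max() + 1, height, width), dtype=np.int32)
    # Sum polarities per pixel per bin: net brightening vs. darkening.
    np.add.at(
        frames,
        (bins, events[:, 2].astype(int), events[:, 1].astype(int)),
        events[:, 3].astype(int),
    )
    return frames
```

Everything finer than the bin width is collapsed, which is exactly the temporal resolution Stefan warns about losing.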

Sonya: Stefan talked about the challenges of understanding the data coming off of neuromorphic sensors. Because of this, algorithms need to be explored. Think of an algorithm as a pathway to solving a problem. In addition, you’ll also hear terms such as machine learning and AI. Machine learning is a set of algorithms and models used to analyze patterns as well as learn and adapt. Machine learning is a type of artificial intelligence, or AI for short, which focuses on machines learning from experience. This can be used to make sense of data. We’ll hear from Paul and Stefan about their perspectives on machine learning and AI in this [00:10:00] space.

Paul: I think the AI might help if it can be a lighter lift on the human. We have enough data. The problem I think we have, particularly on the imaging side, is that we don’t have enough analysts to look at it. So if we can use automation tools to get data into a format, based on certain criteria, doing the work an analyst currently has to do today, then the analyst can focus on the really important stuff, because the AI is going to stream information in based on this set of rules.

Stefan: I can say that because of the native format of the data coming off of these sensors, and the fact that you can convert it to images but really lose the strength of the sensor that way, it needs new methods of processing. That happens to coincide well with machine learning techniques: looking for patterns, and connecting events in time with [00:11:00] some physical meaning. There needs to be some processing algorithm to be able to do that.

The data that comes off of the neuromorphic sensors, we call it a data cube, where the x- and y-directions are the pixel locations and the z-direction is time. It’s a lot harder to make sense of and to connect events in space and time with each other.
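One way to picture that difficulty: the events form a sparse cloud of points in the (x, y, t) cube, and any processing has to decide which points belong to the same physical source. The sketch below is a hypothetical illustration rather than any of the algorithms from the research; it uses SciPy’s KD-tree to link events that are close in both space and time, and the radii and event layout are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def spacetime_neighbors(events, radius_px=2.0, radius_us=500.0):
    """Link events that are nearby in the (x, y, t) data cube.

    Time is rescaled so that `radius_us` microseconds count the same
    as `radius_px` pixels, then a KD-tree finds space-time neighbors.
    Clusters of linked events are candidates for a single physical
    source moving through the scene.
    """
    events = np.asarray(events, dtype=np.float64)
    pts = np.column_stack([
        events[:, 1],                            # x
        events[:, 2],                            # y
        events[:, 0] * (radius_px / radius_us),  # t, rescaled to pixels
    ])
    tree = cKDTree(pts)
    return tree.query_pairs(r=radius_px)         # set of linked event pairs
```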

Stefan: We looked at the software that had been developed in earlier MITRE research and assessed its applicability to the International Space Station data, both because that data comes from a particular kind of neuromorphic sensor and because the scene it’s observing is much different than anything on the ground. We looked at a couple of algorithms that MITRE had developed, including frequency-domain source segmentation, which essentially looks for periodic signals in the scene. We looked at graph representation, which uses machine learning to connect events in space and time. A good example of that: [00:12:00] a neuromorphic sensor was used to capture the rotors of a quadcopter, and using this algorithm we were able to create helical, spiraling graph structures that connected physical attributes of the rotors as they rotated through time and space. We also looked at anomaly detection, which uses machine learning to try to create a baseline of what was being seen in the International Space Station’s field of view; for noisy events or periods of time where there’s bad data, it would attempt to flag them as such. Finally, we developed a streak detection algorithm to look through the data coming off of the International Space Station for nearby objects.
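To give a flavor of the first of those techniques, here is a toy Python sketch in the spirit of frequency-domain source segmentation, run over binned event frames like those produced earlier. The band limits, the SNR test, and the function name are all illustrative assumptions; the actual algorithms are described in the paper cited at the end of the episode.

```python
import numpy as np

def periodic_pixels(frames, fps, f_lo=20.0, f_hi=200.0, snr=5.0):
    """Toy frequency-domain source segmentation.

    `frames` is a (T, H, W) stack of binned event counts. Taking an
    FFT along the time axis and flagging pixels whose peak power in
    the band [f_lo, f_hi] Hz stands well above their median power
    picks out periodic sources, such as spinning rotors or a
    tumbling, glinting object.
    """
    spectrum = np.abs(np.fft.rfft(frames, axis=0)) ** 2
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    peak = spectrum[band].max(axis=0)
    floor = np.median(spectrum[1:], axis=0) + 1e-12  # skip the DC term
    return (peak / floor) > snr                      # boolean (H, W) mask
```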

Sonya: It’s great to hear the positive feedback on the use of AI and machine learning to find meaning in neuromorphic sensor data, and as applied to space in general. Many algorithms are being explored in Stefan’s research, which is great to see. [00:13:00] Let’s close with Stefan and Paul’s takes on the future of neuromorphic sensors in space.

Stefan: This is definitely the first time that the research community has had access to this type of data from space looking down. The International Space Station and other satellites have hosted sensor packages with traditional cameras for years, but this is the first opportunity for this. These sensors are seemingly promising, but they’re also quite difficult to work with, and they haven’t yet fully demonstrated their usefulness, in space or otherwise, over traditional sensors. It’s a testament to MITRE’s internal research that we’ve been able to work with these sensors for so long. It really is a long-term vision to see that these might be useful and to try to extract and understand that value.

Paul: From a MITRE standpoint, we have a robust research program across many domains. I believe this can apply to all domains, [00:14:00] so air, space, ground, maritime, and particularly the space domain. We just don’t have a lot of sensors looking into space to track everything as it stands right now, so it would be really important to see, on top of radars, on top of electro-optical telescopes, on top of RF-based sensors that track things that are emitting, whether this neuromorphic one can be one more that’s added, and then to work a concept of operations, because the technology is great.

But you have to be able to give a warfighter the rules of the road on how they’re supposed to operate all these things in concert, so there’s going to need to be some research to figure out how to integrate all these things together in a way that makes sense, that’s timely, and that the operators can understand and use at a moment’s notice.

Sonya: The future of neuromorphic sensors in space seems to be quite promising. I appreciate the insight from Stefan Doucette and Paul Hartman. If anyone wants to follow up on this exciting research, check out the publicly released paper, “Novel Algorithms for Novel Data: [00:15:00] Machine Learning for Neuromorphic Data from the International Space Station,” by Stefan Doucette and fellow collaborators.

Thanks for tuning into this episode of MITRE’s Tech Futures Podcast.

I wrote, produced, and edited the show with the help of Dr. Kris Rosfjord and Dr. Heath Ferris, Technology Futures Innovation Area Leaders; Tom Scholfield, Media Engineer; and Beverly Wood, Strategic Communications.

Our guests for this episode included:

Stefan Doucette and Paul Hartman.

The music in this episode was brought to you by Ooyy and Truvio.

Copyright 2024 MITRE. PRS number 24-0685. January 1st, 2024. MITRE: solving problems for a safer world.

Meet the Guests

Stefan G. Doucette

Stefan Doucette is a sensors engineer in the space domain at the MITRE Corporation. During his seven years at MITRE, Stefan has split his time between serving as a principal investigator in MITRE’s internal research and development program and serving as a technical advisor to the United States Air Force and United States Space Force. Stefan has led research projects in the field of space sensing on subjects like prototyping ground-based telescope collection systems, machine learning-based spacecraft characterization using spectral observations, and neuromorphic sensing. As a subject matter expert in sensing, astrodynamics, and software engineering, Stefan is a technical advisor to Space-Based Infrared System (SBIRS) research being executed at Space Systems Command’s (SSC) Tools, Applications, and Processing (TAP) Lab. Stefan resides in Los Angeles, California, and holds a Bachelor of Mechanical Engineering with a minor in Aerospace Engineering from the University of Colorado and a Master of Science in Software Engineering from Johns Hopkins University.

Paul G. Hartman

Paul Hartman is the Division Chief Engineer for the Space Warfighting Division at MITRE. Paul joined MITRE in 2014 following 11 years with a for-profit defense contractor and a 20-year US Air Force career as a space operations and acquisition officer. He has supported the full life-cycle design, development, testing, launching, fielding, operations, maintenance, and disposal of military and intelligence space systems. Since joining MITRE, Paul has served as a group leader, helping to guide and grow the MITRE Enterprise for Space Analysis (MESA) Team. His key domain skills include space control/space situational awareness, force enhancement, and space support. Paul earned a B.S. degree in Aerospace Engineering from Texas A&M University and an M.S. degree in Astronautical Engineering from the University of Texas at Austin.