We are excited to announce the rollout of Phase 2 for the Stardust@home foils project. For those of you who are new to the community, NASA flew a spacecraft called Stardust that brought back samples of dust from interstellar space. You may want to read about the origins and purpose of this project here. But in short, for the foils project we are looking for tiny craters in a relatively large aluminum surface, the equivalent of finding vintage cars in all of Arizona from satellite images.
When we began, we faced the onerous task of sorting through millions of images to find these extremely rare bits of treasure, and our only hope of completing it was to bring enough human eyes to the table to pore laboriously over every single image. To see how we’ve been doing this to date and what we’ve been telling our volunteers to click on in what we call our “Virtual Microscope,” check out the video blog here. But suffice it to say, most of these images look something like this:
In recent years, though, a new computer technology called “deep learning” has emerged that enables computers to recognize and categorize images accurately. Most everyone has already seen deep learning in action. For example, when you search for an image of a car on Google, you are presented with car images, not cats, or rocket ships, or lawns. Yet behind the scenes, no person picked which images to show you; deep learning makes this possible. We won’t go into how deep learning works here, but if you are the kind of geek who is into that, then check out this intro.
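For the programmers among you, here is a tiny taste of what image categorization with a modern deep learning library can look like. To be clear, this is our own illustrative sketch, not the Stardust@home code, and “photo.jpg” is just a placeholder file you would supply yourself:

```python
# A generic taste of deep-learning image classification (not the
# Stardust@home code): a pretrained network labels an ordinary photo.
# "photo.jpg" is a placeholder file you would supply yourself.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()  # inference mode: we are classifying, not training

# Each set of pretrained weights ships with its matching preprocessing
preprocess = weights.transforms()
image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    scores = model(image)  # one score per known category

best = scores.argmax(dim=1).item()
print("The network thinks this is:", weights.meta["categories"][best])
```

That’s the whole trick behind the Google example: the network outputs a score for every category it knows, and the highest score wins.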
Deep learning still hasn’t gotten quite good enough to find our microscopic extraterrestrial samples all by itself. But it is good enough for us to harness it and combine it with human searching into something very promising – what we’re calling our “cyborg” approach.
Our scientists put together a crater-finding program using Google’s deep learning technology. Before the program could find craters, it first needed to know what a crater looks like. For this we leaned on some friends of ours at the University of Kent, who have a special gun, called a light gas gun, that shoots materials several times faster than a speeding bullet. For our project, they shot microscopic glass spheres at aluminum foil and sent the foil to us. We were then able to take pictures like the following to show the program what the craters we are interested in look like:

Many thanks to Penny Wozniakiewicz and Mark Price at the University of Kent for preparing these craters for us!
We then used a little digital magic to feed about 10,000 images of gun-made craters into a powerful computer alongside 10,000 images of aluminum foil without craters. (We would like to take this moment to thank the high performance computing center at the Lawrence Berkeley National Lab, and especially David Shapiro and the folks at the COSMIC beamline, whose computers made this possible.) We then had the computer guess whether there was a crater in each image and showed it the correct answers after it finished guessing. After each round of images it would adjust itself to guess better the next time. We continued this until it got the answer right more than 99.9% of the time. Pretty good!
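For the curious, this guess-and-correct cycle is what machine learning folks call supervised training. Here is a sketch of the general shape of such a training loop; the tiny network, the folder names, and the image size below are placeholder assumptions of ours, not the project’s actual setup:

```python
# A minimal sketch of the guess-then-correct training loop described
# above. The folder layout ("train/crater", "train/no_crater"), image
# size, and small network are illustrative assumptions only.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
# ImageFolder assigns the labels 0/1 from the two subfolder names
train_data = datasets.ImageFolder("train", transform=transform)
loader = DataLoader(train_data, batch_size=64, shuffle=True)

# A deliberately small convolutional network: crater or no crater
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two outputs: no-crater vs. crater
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

accuracy = 0.0
while accuracy < 0.999:          # keep going until >99.9% correct
    correct = total = 0
    for images, labels in loader:
        logits = model(images)          # the computer "guesses"...
        loss = loss_fn(logits, labels)  # ...is shown the answers...
        optimizer.zero_grad()
        loss.backward()                 # ...and adjusts itself
        optimizer.step()
        correct += (logits.argmax(1) == labels).sum().item()
        total += labels.size(0)
    accuracy = correct / total
    print(f"round accuracy: {accuracy:.4f}")
```

In practice, accuracy is usually measured on a separate set of images the computer never trains on, so it can’t simply memorize the answers.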
We were now ready to have the computer look at the aluminum foil from Stardust, and it can flip through all of the images of the spacecraft collector in about the time it takes for you to eat dinner. Yes, that’s right: it looks through as many images in one hour as a person could if they worked nonstop, all day, every day, for about a year!
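If you like back-of-the-envelope numbers, here is roughly how that comparison works out. Both rates below are illustrative guesses of ours, not measured figures:

```python
# Back-of-the-envelope version of the speed comparison above.
# Both rates are illustrative guesses, not measured figures.
human_seconds_per_image = 3                       # a quick, careful glance
human_year = 365 * 24 * 3600 / human_seconds_per_image
print(f"nonstop human-year: {human_year:,.0f} images")    # ~10.5 million

machine_images_per_second = 3000                  # assumed scanning rate
machine_hour = machine_images_per_second * 3600
print(f"one machine-hour:   {machine_hour:,.0f} images")  # ~10.8 million
```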
However, in the real world, even with better than 99.9% accuracy on its practice images, the computer can still make mistakes when given an image unlike any it has “seen” before. This is true of humans too! For this reason, it is still not possible to pick out all the real interstellar dust craters using the computer alone. It always flags some images containing objects that look like craters but aren’t, or that confuse it for some other reason.
This is where the cyborg approach comes in, and why we still need volunteers like you. Now, instead of requiring a person to look at every image in our database, we can eliminate about 98% of the “chaff” and bring people in only to evaluate the last 2% of images that have bubbled to the top of our deep learning program’s rankings. This means that this next phase should be far more interesting for both our veteran and new volunteers (aka “dusters”), since the fraction of images with craters – or at least with interesting, crater-like objects – should be much higher than it was previously. 🙂
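For the technically inclined, the hand-off works conceptually like this. The 2% figure comes from above; everything else in this sketch, including the scores, is ours for illustration:

```python
# A sketch of the "cyborg" hand-off: rank every image by the network's
# crater score and pass only the most suspicious slice to volunteers.
# The example scores are made up; the 2% figure comes from the text.
def select_for_volunteers(scored_images, keep_fraction=0.02):
    """scored_images: (image_id, crater_score) pairs, higher = more
    crater-like. Returns the ids volunteers should examine."""
    ranked = sorted(scored_images, key=lambda pair: pair[1], reverse=True)
    keep = max(1, round(len(ranked) * keep_fraction))
    return [image_id for image_id, _ in ranked[:keep]]

scores = [("foil_0001", 0.001), ("foil_0002", 0.87), ("foil_0003", 0.004)]
print(select_for_volunteers(scores))   # -> ['foil_0002']
```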
We are also hoping to improve the computer algorithm in the future. When the computer selects an image as having a crater but several people say it has none, the computer can adjust itself to do a better job the next time around. In this way, we expect to keep the computer humming behind the scenes, continuously improving its output throughout Phase 2.
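Conceptually, that feedback step could look something like the following sketch. The vote counting, the threshold, and the data structures here are our own illustrative assumptions, not the project’s actual bookkeeping:

```python
# A sketch of the feedback loop: when enough volunteers disagree with
# the network about an image, record the corrected label so it can be
# fed into the next training round. The thresholds are assumptions.
def collect_corrections(predictions, votes, min_votes=5):
    """predictions: {image_id: network's label, 1 = crater, 0 = not};
    votes: {image_id: list of volunteer labels (1 or 0)}."""
    corrections = {}
    for image_id, predicted in predictions.items():
        ballots = votes.get(image_id, [])
        if len(ballots) >= min_votes:
            majority = 1 if sum(ballots) > len(ballots) / 2 else 0
            if majority != predicted:
                corrections[image_id] = majority  # retrain with this label
    return corrections

preds = {"foil_0002": 1}
votes = {"foil_0002": [0, 0, 0, 1, 0]}    # five volunteers, mostly "no"
print(collect_corrections(preds, votes))  # -> {'foil_0002': 0}
```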
For now, though, please note that we have told the computer to include any image that it even remotely suspects could contain a crater. Therefore, as you progress through the images, you will still see many without craters. We chose to err on the side of including more images rather than fewer, so that we are more likely to catch any real craters flying under the radar.
We look forward to having you help us finish off our current batch of foil images and hopefully find some more interstellar dust. Thank you so much for participating, and if you’re ready to begin, click here!