
Exploring the Galaxy in 1 Month

Paul B. Hartzog

In his book Is There Life on Other Worlds?, Poul Anderson makes a point about how long it would take to explore the galaxy. This "isolation" argument is often used to explain why we have not been visited by extraterrestrial life: exploration would take so long that it is no surprise we have not (to our knowledge) encountered any aliens so far. The argument contains a flaw, however: it assumes that exploration, and the growth of knowledge about explored systems, proceeds linearly.

But the galaxy can be thought of as basically a big network, and search in this network could proceed in parallel. A swarm of von Neumann self-replicating machines could do the trick (a self-replicating machine uses locally available materials to produce an exact copy of itself). In fact, in his 2006 Hugo-award-winning novel Spin, Robert Charles Wilson suggests exploring the galaxy with a swarm of self-replicating nanotechnology robots.

So I got to thinking about just how long this would take. Turns out it would take at most two months and probably closer to one month. Here's why.

By measuring the total amount of light in the galaxy and dividing the mass it implies by the mass of an average star, we can estimate the number of stars in the galaxy. So, even though we cannot actually count them, a reasonable figure is roughly 100 billion (100,000,000,000) stars. Some estimates run as high as a trillion.

You would start with one robot; it would replicate, giving two, and those two would spread to the next two stars and replicate in turn, giving four, then eight, and so on. So your exploration would increase not linearly (n), or even polynomially (n^2), but geometrically (2^n).

So, how long would it take to explore 100 billion stars? If each generation (replicate, then hop to the next star) takes about a day, then 37 to 40 days:

2^37 = 137,438,953,472
2^40 = 1,099,511,627,776
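As a quick sanity check, the number of doublings needed is just the base-2 logarithm of the star count, rounded up. A minimal sketch (the star counts are the rough figures from above, not measurements):

```python
import math

def generations_to_cover(num_stars: int) -> int:
    """Doublings needed before a single self-replicating probe
    has enough copies to have reached num_stars stars."""
    return math.ceil(math.log2(num_stars))

print(generations_to_cover(100_000_000_000))    # 100 billion stars -> 37
print(generations_to_cover(1_000_000_000_000))  # one trillion stars -> 40
```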

Even if you allowed for significantly higher estimates, you would still only need a few more days, because the territory explored doubles at every step. Furthermore, there is no reason to start with just one robot: a small swarm would start us further along the powers-of-2 list. The difficulty would be getting the robots to start in different places (presumably they would all launch from Earth).

Now, of course, this is science fiction, and isn't meant to be taken at face value. In actuality, the galaxy is about 100,000 light-years in diameter and about 1,000 light-years thick, so even if our nanobots traveled at the speed of light, it would take them at least 100,000 years to reach the far side. Nevertheless, my point here is that estimates of galactic exploration time need to be revised from linear calculations to geometric ones.

There is a lot more interesting speculation around this idea on the Wikipedia entry on Self-replicating spacecraft.

kelson.philo

That's some fine

That's some fine extrapolation there, Paul. I'd like to throw in another hundred-billion number we're all familiar with, because we use it all the time (well, we're supposed to, at any rate): the ol' brain. At roughly 100 billion cells (and what a nice coincidence, no?), it's the fastest parallel-processing entity known.

Now, to be sure, it didn't grow at a geometrical rate, at least not in terms of days, but nine months isn't bad, and, as we are seeing with 20/20 hindsight, even that rate of replication can have serious consequences for the environment our particular brand of replicator currently occupies.

But our replicator didn't have to travel light-years, a sizable step up in scale. So the main impediment is going to be getting stuff to those stars in the first place, and then getting a signal back from them. Now, what to do about that?

We've got all sorts of MacGuffins in SF, of course. I personally like quantum entanglement enabling things like Le Guin's ansible and Stross's quantum dots. But what if you wanted something different? What process would you use to get the probes to the stars?

paulbhartzog

Jump limits

Excellent question.

I was thinking in terms of hyperspatial jumps of limited range. So, for example, say you could only jump 10 light-years or less, but within 10 light-years you could effectively ignore all travel time.

Then you would have a scenario where you cannot leap across the entire galaxy all at once, but you CAN have a network-based spread effect. If you wanted that as a plot device or setting element, that's one way to do it.

kelson.philo

Hmmm... that could make things

Hmmm... that could make things most interesting. If your initial bot has a range of ten Ly, then its effective volume is a sphere of ten-Ly radius. To hop across the galaxy would take, what, ten thousand hops? That's some nice perspective. You'd also run into interesting optimization problems in trying to fill the volume of the galaxy most effectively, given that your total 'hoppage' is going to be a rather high number, and given how long it takes each probe to make a copy of itself.
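Putting rough numbers on that, a back-of-the-envelope sketch (the 10 Ly jump range is the hypothetical limit from the comments above, and the galaxy figures are the rough ones from the post):

```python
import math

GALAXY_DIAMETER_LY = 100_000   # rough diameter from the post above
JUMP_RANGE_LY = 10             # hypothetical per-jump limit
NUM_STARS = 100_000_000_000    # rough star count from the post

# Hops for the expanding front of probes to cross the whole disk.
hops_to_cross = math.ceil(GALAXY_DIAMETER_LY / JUMP_RANGE_LY)

# Doublings before the swarm has enough probes for every star.
doublings_to_cover = math.ceil(math.log2(NUM_STARS))

print(hops_to_cross)       # -> 10000
print(doublings_to_cover)  # -> 37
```

So in this model the swarm's population saturates after a few dozen generations, and nearly all of the ten thousand hops are spent simply crossing the distance: travel, not replication, becomes the bottleneck.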