Self-replicating nanobots


This is the second of two articles looking at nanotechnology as a future technological risk.

One of the scariest issues I can think of with respect to nanotechnology is self-replicating nanobots. The prospect of loosing little machines that can copy themselves without specific external control raises the specter of "The Sorcerer's Apprentice," the role Mickey Mouse played in Disney's 1940 animated film, Fantasia.

For science fiction fans, there are many examples of self-replicating machines to titillate or terrify; the replicators in the Stargate universe come to mind.

In the information assurance field, it is pretty well established that creating self-replicating code, even for the best of intentions, is a bad idea; the fundamental problem is that no matter how carefully one applies quality assurance and testing to such code, external conditions are inevitably more variable than anything that can be tested in a finite time. Just think about all the combinations of operating system versions, update levels, application software, versions of that software, configuration combinations for all of the above, and run-time variations in when and how code segments are executed. For a classic and thorough review of the arguments, see Vesselin Bontchev's 1994 paper, "Are 'Good' Computer Viruses Still a Bad Idea?" which actually concludes that they could be a good idea (I still disagree, but it's a good paper).
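The combinatorial explosion described above is easy to make concrete. A short Python sketch (the counts are purely illustrative assumptions; real-world numbers are far larger and partly unknowable):

```python
# Hypothetical counts of environmental variables a tester would need
# to cover; each is an illustrative assumption, not a measured figure.
os_versions = 10
update_levels = 20
applications = 50
app_versions = 5
configurations = 100

# Exhaustive testing must cover the Cartesian product of all of them.
test_cases = (os_versions * update_levels * applications
              * app_versions * configurations)
print(test_cases)  # 5,000,000 combinations -- before even considering
                   # run-time variation in when and how code executes
```

Even with these deliberately modest numbers, exhaustive testing is already out of reach, which is the core of the argument against releasing self-replicating code.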

From a biological perspective, I'll just mention that the replication of the nanobots will depend on the stability of the replication instructions. Replication may be based on something similar to the RNA/DNA/ribosome/protein model – instructional code interpreted by productive machinery equivalent to ribosomes. It could also involve something similar to crystallization and protein folding, where the structure of the nanobots themselves leads to replication. In either case, random variations could yield both non-functional copies and, possibly, unexpectedly functional new variants of the original models.

From a statistical perspective, if there are lots of the little buggers replicating, then the statistical law of decreasing reliability comes into play. If p = probability of replicating one nanobot with an error, then (1 – p) is the probability of replicating one nanobot without an error. If there are n nanobots replicating independently, the likelihood that all of them will replicate without error is thus (1 – p)^n. But then the likelihood that at least one of them will replicate with error is 1 – (1 – p)^n. And that function rises rapidly as a function of n.
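The rapid growth of that function is easy to verify numerically. A minimal Python sketch (the per-replication error probability p = 10^-9 is an illustrative assumption):

```python
def p_at_least_one_error(p: float, n: int) -> float:
    """Probability that at least one of n independently replicating
    nanobots copies itself with an error, given per-replication
    error probability p: 1 - (1 - p)**n."""
    return 1 - (1 - p) ** n

# Even a one-in-a-billion error rate per copy approaches certainty
# once the population of replicators is large enough.
p = 1e-9
for n in (1_000, 1_000_000, 1_000_000_000):
    print(n, p_at_least_one_error(p, n))
```

At n = one billion replicators the probability of at least one erroneous copy is already about 63% (1 - e^-1), and it keeps climbing toward 1 as n grows.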

And we haven't even begun to consider deliberate attacks on the nanobots. Can you imagine how much fun criminal hackers are going to have interfering with the latest nanobots from some big corporation? Never mind vandalizing Web sites (see slides 40-52 in my PowerPoint lecture file on the history of computer crime): imagine integrating obscenities and slogans into the very fabric of, say, a new car or an office building being constructed by nanobots. For that matter, if we are dealing with futures, imagine hacking the code for nanotechnology-based sentient beings.

Before we loose self-replicating nanobots on the planet, I sure hope we pay attention to the fundamentals of security.

SPECIAL REQUEST: If you like my columns, please support the Semper Fi fund to help wounded US Marines. Give online to support Norwich University student Zach Wetzel in his fund-raising marathon run.

Learn more about this topic

Nanotech will be Focus for Future Criminal Hackers

Top universities for micro and nanotechnology

Groups call for environmental regulation of nanotechnology
