Sam's blog

Day 095 - Colonizing the Galaxy

Submitted by Sam on 24 August, 2011 - 02:49

If we do ever encounter extraterrestrial intelligence first hand, then it seems most likely to be machine intelligence rather than a biological intelligence like us. Biological beings are simply not robust enough (being too huge and too squishy) in comparison to purpose-built engineered solutions for crossing interstellar distances. It seems likely that any space-faring civilization, including our own, will eventually reach a level of sophistication where it becomes more effective and more economical to send intelligent machines to colonize and explore the universe than it would be to send spacecraft built to sustain bulky, vulnerable biological life.

A civilization in possession of advanced nanotechnology and equipped with nano-scale universal constructors could in theory create self-replicating robot spacecraft that would be highly efficient at galactic exploration, moving from planetary system to planetary system to source the materials to build copies of themselves. If a planetary system were anything like our own solar system, then the replicators would have a wealth of raw material to work with, harvesting asteroids, comets, dust and planets as appropriate. This type of machine is known as a Von Neumann probe, named after the mathematician John von Neumann, who was one of the first to develop a mathematical theory of how machines could make replicas of themselves.

Equipped with a basic propulsion system and carrying a payload as small as a nanoscale self-replicating constructor, these probes could colonize the entire Galaxy in only 4 million years. Directed by on-board programming, perhaps even an artificial intelligence powered by nanocomputers, the probes would replicate in each new system, sending some copies further into the Galaxy and instructing others to explore the current system and make scientific observations to be transmitted back to the home planet. Equipped with a universal constructor, some probes could be programmed to terraform suitable planets, creating life-sustaining environments to be 'seeded' with a biological payload (which could either be synthesized from the molecular level up using genomic instructions stored on-board, or alternatively derived from stored frozen embryos) to create a living colony of biological pioneers tended by artificial sentinels. This process would repeat with exponential rapidity to create swarms of trillions of self-replicating probes, allowing a race to explore and colonize a vast area of space in an astonishingly short amount of time.
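The 4-million-year figure is easy to sanity-check with a toy model. The parameters below (galaxy diameter, cruise speed, hop length between host systems, time to build each new generation of probes) are my own illustrative assumptions, not figures from any particular study:

```python
# Toy estimate of how long a wave of self-replicating probes would take
# to cross the Galaxy. All parameters are illustrative assumptions.

def colonization_time(diameter_ly=100_000,  # rough diameter of the Milky Way disc
                      speed_c=0.05,         # assumed cruise speed, fraction of c
                      hop_ly=10,            # assumed distance between host systems
                      build_years=100):     # assumed replication time at each stop
    travel = diameter_ly / speed_c      # years in transit (1 ly at c = 1 year)
    stops = diameter_ly / hop_ly        # replication pauses along the way
    return travel + stops * build_years # total years for the wavefront

print(f"{colonization_time():,.0f} years")
```

Even with pauses to replicate at every hop, the wavefront crosses the Galaxy in a few million years, which is consistent with the figure above.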

As with the risk of a replicator-induced earth-bound grey-goo scenario, Von Neumann probes carry the threat of dramatic misfirings which could consume entire planetary systems in obedience to their directive to reproduce. Whilst artificially intelligent probes would be resistant to error, there is always the risk of mutation in the replication process, just as there is mutation in biological reproduction. A cosmic ray could cause a misalignment of the atomic architecture of the probe during its construction, creating a mutation that would eventually evolve a new “species” of probe, potentially with a different interpretation of its programming. All it would take is one of the trillions of probes to malfunction for the Galaxy to be threatened by a technological cancer.

Day 094 - Rare Earth

Submitted by Sam on 23 August, 2011 - 00:11

Most arguments for the existence of extraterrestrial life rest on the 'principle of mediocrity', which states that the properties and evolution of our solar system, including the processes that led to life on Earth, are not unusual in any important way and could be common throughout the Universe. The counter-argument to the principle of mediocrity is the 'rare earth hypothesis', which concludes that all of the conditions which conspire to permit complex life on Earth are exceptionally rare. The rare earth hypothesis has been presented as a plausible solution to the Fermi paradox, and an application of it concludes Stephen Webb's book Where is Everybody?.

In order to support life as we know it, a star system needs to be located in a very specific segment of the galaxy. It must be close enough to the galactic centre that its star contains a certain proportion of heavy elements, which are crucial for the formation of rocky planets containing the molecular components for life's building blocks. Additionally, the planetary system needs to be located far enough away from the dangerous centre of the galaxy, away from its high levels of radiation and its supermassive black hole, so that carbon-based life can develop. This is known as the galactic habitable zone, and it may only encompass around 20% of the stars in our galaxy; the other 80% are exposed to conditions that make the evolution of complex lifeforms unviable.

Of the stars in the galactic habitable zone, perhaps only about 5% are like our sun – stable, not too bright and not too dim – and so we need only consider planets orbiting sun-like stars in our search for complex life in the form that we already know. And then of these planets, we need only consider those with circular orbits which allow them to remain in their star's continuously habitable zone for billions of years at a time – that is, those planets without erratic orbits which are close enough to their star to maintain liquid water on their surface for long, continuous periods of time. Because a star grows hotter and brighter as it ages, its habitable zone moves outward with time, and a planet therefore needs to be in a very particular orbit to avoid either permanently icing over or burning up over the course of the billions of years required for complex life to evolve. Perhaps, as Webb suggests, as few as 0.1% of the remaining planets orbit within this continuously habitable zone.

Now, with the conditions on these remaining planets set to support the kind of carbon-based life that we know here on Earth, we have to contend with the probability that life will evolve. In itself, this could be an exceptionally rare event, or it could be a probable occurrence whenever conditions permit. We don't know, but if we say that the chance of life evolving now that everything is prepared is 0.05, then there are half a million planets in our galaxy supporting life.

The chance that these planets will evolve intelligent life is whittled down further by the disasters and mass extinctions that life cannot recover from. Asteroid impacts, global glaciation and supervolcanoes are potential culprits, and may account for up to 20% of all life-bearing planets permanently losing their inhabitants, or at least failing to form complex multi-cellular life. In fact, the evolution of eukaryotic cells (those containing complex structures like mitochondria or chloroplasts) from prokaryotic cells (those without a nucleus or other complex structures) took many millions of years on Earth, and is by no means inevitable. It may not happen on many of these worlds at all, and they may never see multicellular life. Webb estimates that one in forty life-viable planets might have the conditions that conspire to permit multicellular life to evolve from single-celled ancestors, but he cautions that this (like all of the other variables) is merely a guess.

Finally, in order for these planets to bear the kind of advanced, abstractly intelligent civilizations that could have developed the technologies to contact or visit us, Webb believes that the remaining planets need to have the conditions necessary for complex language to develop – the crucial enabling step which allowed our species to truly master technology. This final criterion is perhaps so unlikely to be met that it has only ever happened once, here on Earth. In this scheme, there are still planets in our galaxy where life is common, but it is frequently only unicellular, rarely multicellular, and uniquely rarely intelligent. This model of the universe leaves us with many fabulously diverse extraterrestrial ecosystems to explore, but ultimately alone as an advanced intelligence. It neatly explains why we have never heard from nor seen an alien.
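Webb's whittling-down is just a chain of multiplications. The starting star count and each fraction below are guesses of my own, loosely following the percentages quoted above, so only the orders of magnitude matter:

```python
# Rare-earth back-of-envelope, multiplying Webb-like fractions.
# Every number here is an assumed guess; only the order of magnitude matters.

stars = 200e9                 # assumed number of stars in the galaxy
ghz = stars * 0.20            # in the galactic habitable zone
sunlike = ghz * 0.05          # orbiting stable, sun-like stars
chz = sunlike * 0.001         # in a continuously habitable orbit
life = chz * 0.05             # assumed chance that life actually arises
survive = life * 0.80         # not permanently sterilized by catastrophes
multicellular = survive / 40  # Webb's one-in-forty guess for complex life

print(f"life-bearing planets: {life:,.0f}")
print(f"multicellular worlds: {multicellular:,.0f}")
```

With these inputs the chain gives roughly 10^5 life-bearing planets – the same order as the half-million quoted above – and only a couple of thousand multicellular worlds, before the final, perhaps-unique step to language-using intelligence.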

Day 093 - Nested realities and the afterlife

Submitted by Sam on 22 August, 2011 - 00:07

What are the chances that we're not in a simulation? We would have to assume that no civilization in the history of the universe – no individual or group from any civilization at any time – would ever have the resources or the inclination to run a simulation of our own universe. As it breaks no fundamental laws of physics to grant that a civilization with a steady rate of technological progress will one day attain the computational power necessary to run such a simulation (in fact, barring a mass-extinction and given sufficient time, it seems something of an inevitability), we would have to assume that either no civilization in the history of the universe ever attains this level of computational resource, or no civilization ever achieves the ability to programme such a simulation, or that all sufficiently endowed civilizations choose not to.

Speculation about what a civilization trillions of times more advanced than ours would want to do is unavoidably moot, but at a minimum we can safely say that it would once more break no physical laws for a simulation to be programmed; it is perfectly feasible in theory that given astronomical intelligence and a sufficient amount of time a perfect simulation of our perception of our universe could be programmed and run. And so we are left in the position that either no civilizations run human-simulations because they can't, or because they have all converged to reliably prevent their computational resources from being used to do so. Perhaps advanced civilizations throughout the universe all happen to enforce laws or codes of ethics or other prohibitions which successfully forbid the running of human-universe simulations. Or perhaps they all lose interest in doing so, having moved on to loftier goals with more scientific or aesthetic value.

If these conditions don't hold, then it is likely that we're in a simulated universe, and thus it is likely that we will develop the technology to simulate a universe ourselves. Perhaps before we succeed our simulation will be terminated, in which case long-term planning is futile. But perhaps we will be permitted to run our own nested-simulation, much as a virtual computer can run inside a real computer today (and then a virtual computer inside that...). If this possibility became a reality we would have to conclude that the probability that we were living within a simulation was high, and furthermore we would be encouraged to suspect that our simulator's universe was itself a simulation as well, and so on.

The theological overtones of the simulation argument have not escaped notice. Each simulator would in many ways occupy a Godlike relationship to their simulated humans: they would be omnipotent in that they could pause, modify and re-run the simulation; they would be omniscient in that they could monitor everything in the simulation; and they would necessarily be the sole creator and destroyer of the whole universe.

In fact, in a world of nested simulators, the simulants can reasonably infer the possibility of an afterlife, and that their simulators have the power (but perhaps not the inclination) to judge, reward and punish actions by some ethical standard. Perhaps each person's life is recorded by the computer running the universe, and can be restored whole into a new simulation at will after death. Further, simulators could theoretically upload your consciousness into an artificial body to allow you to interact in their own universe (whether simulated or otherwise). A reasonable strategy for increasing the chance of being resurrected, then, would be to strive to be interesting enough to catch the eye of the simulators. Recurrently catching the eye of ever higher tiers of simulators could perhaps result in being born into the “basement level”: the real universe.

Day 092 - Is it real?

Submitted by Sam on 21 August, 2011 - 02:01

If an advanced alien civilization has enough computing power to create a virtual reality sufficiently diverting to negate their need or desire for galactic colonisation, it follows that such a civilization would have the computational capacity (and coding skill) to create a simulation for and of us. Oxford University professor Nick Bostrom has estimated that a computer with the mass of a planet, operating with the efficiency of what we know to be theoretically possible with nanotechnology today (which probably does not even approach the physical limits of optimal computation), would be able to perform 10^42 operations per second, and would thus be able to simulate the entire 'mental history' (every thought, feeling and memory) of humankind in less than one millionth of a second. And a sufficiently transcendently advanced civilization may have a huge number of computers of this scale.

In fact, as Bostrom has argued, we don't need to postulate superintelligent extraterrestrials in order to suspect that we might be living in a computer simulation – all we need to concede is that at some point in the future our own descendants may reach a level of technological sophistication that permits such a complex virtual reality simulation to be programmed and run. And our descendants may have a clear motive for simulating us, unlike our hypothetical extra-terrestrials – it may be that later generations of humans wish to turn their phenomenal supercomputers into detailed simulations of their own past, to understand more about their forebears. With this goal, they could set about creating fine-grained simulations of the physical world and historic human brains to create a believable virtual-universe, peopled by conscious simulants (us?) unaware of their artificiality.

But how highly resolved does this virtual reality have to be? Certainly, our experience of the world seems unfathomably rich, covering a spectrum where our experiments into quantum phenomena permit us to look at the very small, and our space programmes permit us to look at the very distant and the very large. The beauty of a simulation, though, is that it can cheat. It doesn't need to persistently model the microscopic structure of the entire observable universe, but merely those parts necessary to ensure that the simulated consciousnesses don't perceive any irregularities. Whenever a human looks into microscopic phenomena with an electron microscope, or peers into the vastness of space with a radio telescope, the observed details can be filled in by the simulation on an ad-hoc basis, rendering only what is necessary for each human to remain unsuspicious, whilst leaving the unobserved universe unresolved.

In a completely unrelated (and incidentally unverified) demonstration of this principle of on-the-fly rendering to conserve computing requirements, an Australian software company called Euclideon Pty Ltd has claimed that it has developed a new kind of computer graphics technology that allows a graphic environment to be created from an essentially infinite number of virtual 'atoms', rather than a (very) finite number of traditional polygons, by using a search algorithm to work out which of these atoms need to be rendered to generate any given scene from the viewer's perspective at any given time. The rest of the millions of atoms can go unrendered in the background, and are only called into view when they are needed. This new kind of technology, if it is verified, would allow graphics of effectively unlimited detail, limited only by the granularity the artist and modeller work to.
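The 'render only what is observed' trick is the familiar memoization pattern from software engineering. The sketch below is my own minimal illustration of the principle, not Euclideon's algorithm: detail is generated deterministically the first time a point is observed, cached so repeat observations agree, and never computed for unobserved points:

```python
# Minimal sketch of on-demand (lazy) detail generation with caching.
# Illustrates the principle only; this is not Euclideon's algorithm.
import hashlib

class LazyWorld:
    def __init__(self):
        self._cache = {}  # only observed points are ever stored

    def observe(self, x, y, z):
        """Return the detail at a point, generating it on first observation."""
        key = (x, y, z)
        if key not in self._cache:
            # Deterministic 'detail': the same point always yields the same
            # value, so the world stays consistent between observations.
            digest = hashlib.sha256(repr(key).encode()).hexdigest()
            self._cache[key] = int(digest[:8], 16)
        return self._cache[key]

world = LazyWorld()
first = world.observe(1, 2, 3)
again = world.observe(1, 2, 3)
assert first == again     # repeated observations agree
print(len(world._cache))  # only the observed point was ever 'rendered'
```

The unobserved universe costs nothing; consistency is maintained only where someone is looking.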

And so extending this 'show only what is needed' principle, a hugely advanced human or alien civilization could cut a lot of computational corners and still create an utterly compelling (and indistinguishably realistic) simulation of all of human experience, requiring 'only' around 10^36 operations to simulate the brains of 100 billion humans, with the environmental rendering as an additional cost. With an excess of computational power and programming ability, a simulation could keep track of the brain-states of every conscious human, and modify them and/or the simulated environment whenever needed to maintain the integrity of the illusion – for instance, filling in worldly detail whenever a human were about to make a microscopic observation. Should any errors ever occur, the software of the simulation could be paused, modified and re-run, and no-one need be any the wiser.
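The 'one millionth of a second' figure quoted earlier in this post is just these two estimates divided:

```python
# Bostrom's figures: ~1e36 operations to simulate the brain-states of
# humanity's mental history, on a planet-mass computer doing ~1e42 ops/sec.
ops_needed = 10**36      # total operations for the human 'mental history'
ops_per_second = 10**42  # planetary computer throughput

seconds = ops_needed / ops_per_second
print(seconds)  # 1e-06: the whole of human experience in a microsecond
```

Environmental rendering adds to the bill, but with planetary-scale hardware it remains a rounding error.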

If a civilization possessed planetary-scale computers, they would have enough computational power to run such simulations many, many times over, while only using a fraction of their total computational resources. Bostrom realized that if there is a substantial chance of a civilization ever developing such a capability, be it our children making an ancestor-simulation or an alien civilization making a virtual human zoo, then the number of simulated humans ever to have existed is likely to outweigh the number of biological humans ever to have existed, making it more probable that we are living in a simulation than living as the 'original' biological template for one.

Day 091 - Goodbye universe, hello virtual reality

Submitted by Sam on 20 August, 2011 - 00:10

One of the most interesting partial solutions to the Fermi paradox hypothesizes that we haven't been visited by alien civilizations – and nor should we expect to hear from them – because they do not travel, colonize or have very much engagement with the physical universe at all, having engineered Matrix-esque virtual realities for themselves that are far more compelling and fulfilling than the real thing.

This scenario returns to an earlier topic of the blog, and imagines that the universe is indeed described by a small set of laws, and that sufficiently advanced civilizations will tend to discover them, eventually decoding rules for all phenomena in the universe (including themselves). At such a peak of understanding, such a society would essentially find that its science was complete; its physicists would have found the theory of everything, and there would be nothing left that could not be explained. To be clear: they would have discovered the origins of life, understanding the exact chemical conditions that gave birth to their ancestors, and the full range of alternative conditions that could give birth to all other lifeforms throughout the universe. All of their astronomers' observations about the universe at large would have been arranged into an infallible theory of knowledge, and they would have models explaining the exact origins and ends of time and space, and everything in between.

Having unlocked the secrets of this universe, perhaps without even leaving their own solar system, such a race could readily determine that further space exploration would be unnecessary, and turn instead to a self-serving and insular virtual reality. With the immense computing power available to such a highly developed civilization, such a simulation would be utterly compelling. Their artificial realities could create universes for them more rich, more sensorily stimulating and more complex than our own.

This speculation, again, cannot solve the Fermi paradox unless it is a sociological condition that applies to every advanced civilization, but it does seem to me to have a rather compelling logic. Even without a retreat into virtual reality, an extraterrestrial civilization in possession of a workable theory of everything could very easily be imagined to conclude that interstellar travel is too costly, too difficult or ultimately too pointless to merit the effort, which would explain why we have never heard from anyone else out there. Perhaps it is every civilization's destiny to ultimately slake its own curiosity so fully that its continued exploration and participation in the physical universe itself becomes at best unnecessary, and at worst too predictable to contemplate.

Day 090 - Aliens and dinosaurs

Submitted by Sam on 19 August, 2011 - 00:44

Extinction events like nuclear war or nanotechnological catastrophe could undoubtedly shorten the communicating phase of an advanced extraterrestrial civilization, but these kinds of global mass extinctions can't always be the end of the story, as the Cretaceous-Tertiary mass extinction has already shown here on Earth.

Our own mass-extinction event, which occurred roughly 65 million years ago, wiped out most of the life on Earth, but crucially, not all of it. It is marked by a thin geological boundary separating dinosaur fossils from all later fossils, and is widely theorized to have been caused either by an asteroid impact or by increased volcanic activity which disrupted Earth's ecology so massively that most living things were wiped out forever.

Some species survived the catastrophe, however, allowing life on Earth to continue evolving, finally producing human civilization. That some of these species were reptiles as large as crocodiles suggests that the KT extinction event was perhaps not as all-powerful as a man-made (or alien-made) mass extinction would be, but it nevertheless raises an important point: life is hugely adaptable, and even if the most 'highly evolved' species on the planet is driven to extinction, there is more than likely another to take its place.

Indeed, in the case of nuclear holocaust, mankind or an alien race could quite easily obliterate itself entirely, and yet leave a thriving new ecosystem of lesser lifeforms behind. In this instance, lifeforms adapted to resist heat and radiation would continue to live on long after the extinction event, able to continue evolving over many generations to produce higher forms of life once more. Insects are good candidates as survivors of a man-made extinction event that we could rely on to perpetuate life on a ravaged earth, perhaps one day evolving once more into a society capable of dropping atomic bombs again.

If not insects, then there are extremophilic bacteria which thrive in conditions that would be hazardous to all other life. In fact, some bacteria are well-suited to a whole range of hazardous environments, and could survive and flourish even in the harshest of post-apocalyptic worlds. The bacterium D. radiodurans is one such polyextremophile: it can live through huge doses of radiation (having a remarkable ability to repair its own damaged DNA), and can survive vacuum, acid, extreme cold and dehydration. T. gammatolerans is more radiation-resistant still.

So, even if every intelligent civilization always destroys itself, there will almost always be organisms left after the extinction event to continue evolution. And if intelligence is a logical development of evolution, then after a few hundred million years (which is not very long at all on the galactic timescale) a new civilization will emerge capable of broadcasting its existence to the universe, and, incidentally, capable of destroying itself once more. And yet despite the possibility for cyclical extinctions and rebirths of extraterrestrial civilizations, we still have not detected even the faintest hint of another intelligence in our universe. If self-extinction cannot provide a universal answer to the Fermi Paradox, then where is everybody?

Day 089 - Aliens and auto-extinction

Submitted by Sam on 18 August, 2011 - 01:12

Recognition of our new-found ability to annihilate our planet through our own technology has led to counter-technology protests, mass demonstrations and international movements to limit our ability to destroy ourselves, manifest principally through voluntary nuclear disarmament. The threat of accidental mass extinction posed by nuclear, biological and chemical technologies has motivated terrorist attacks targeting groups perceived to perpetuate development of these areas. Terrorists like the Unabomber, Ted Kaczynski, have called for revolution against industrialized civilization and modern technology, advocating a return to a non-civilized state, partly in an attempt to defuse the potential timebomb of technological global threats. Most recently, a terrorist group in Mexico attacked two robotics researchers with specialities in nanotechnology with mail bombs. The group, whose name can be translated as the 'Individuals Tending to Savagery', has published a manifesto expressing fears that nanotechnology will result in nano-bacteriological war or an explosion of nano-pollution that will destroy life as we know it.

If the threat of mass-extinction can come from so many technologies in so many diverse ways, whether by accident or by design, then it follows that the more developed a civilization becomes the greater its risk of wiping itself out grows. This trend has been suggested as a possible solution to the Fermi Paradox, which describes the contradiction between the current predictions for vast numbers of potentially suitable environments for life in the Universe and the lack of any evidence we have for their existence. Given the age of the Universe (more than thirteen billion years by current measurements), and given the hundreds of billions of stars in our galaxy – not to mention the billions of galaxies in the universe – it seems statistically highly probable that there are vast numbers of life-bearing planets beyond our own. Of these planets, it also seems highly probable that a fraction will develop into intelligent civilizations, and that of these a further fraction will prosper and develop technologies that will leave some kind of detectable signature, be it through alterations to star systems through space colonization or merely through the development of technologies with effects which are observable from a distance, like radio emissions. But to date, no such artifacts of alien life have been identified, and hence the Fermi Paradox.

For some, part of the reason we have been unable to find any signs of extraterrestrial life is that their 'window' of communication, of signalling their technologically developed state to the universe at large, would inevitably be closed prematurely due to some kind of unavoidable technological holocaust, like the nuclear annihilation humanity has teetered on in the past, or like the nanotech apocalypse that some fear today.

Day 088 - Dangers of nanotechnology

Submitted by Sam on 17 August, 2011 - 01:11

Alongside the catalogue of beneficial uses of nanotechnology, there are very clear dangers associated with its unregulated proliferation. Like the powerful technologies which have realized weapons of mass destruction, nanotechnology has the potential for unimaginable destructive abuse and catastrophic accidents, but with a crucial difference. To successfully weaponize nuclear, biological and chemical technologies access to highly protected information, rare raw materials and expensive equipment is typically required, effectively limiting their use to states and large groups. When nanotechnology proliferates, which it will do with exponential rapidity if it ever does, its immense destructive power will be well within the reach of individuals and small groups. With nanofactories, appropriate design specifications and the commonest of raw materials, individuals in their homes would be empowered to make mistakes so dangerous and weapons so potent that the extinction of all life on earth would become a real possibility.

Like nuclear technology, nanotechnology has very clear utility for militaries and terrorists, and it seems likely that in the long term, its destructive uses may well be far easier to realize on an irreversible scale than its constructive uses. Nano-scale self-replicating machines have been a widespread concern in this respect, as they could conceivably rapidly and exponentially reproduce, being too tough and numerous to stop. This has been popularized as the nanotech 'grey goo' doomsday scenario, whereby the out-of-control nanorobots – whether accidentally evolved or purposefully designed – replicate to consume all resources on earth. However, nanoscale fabrication has no explicit need for self-replicators, as they would be needlessly complex and inefficient for molecular manufacturing when compared with highly optimized nanofactories, which are essentially self-contained and present no threat in themselves.

Despite this market pressure against the creation of self-replicators, given a long enough timescale, a large enough population and easy access to nanofabrication, the threat of a weaponized or mutant variant cannot be discounted; all it takes is one to be made or evolved, and without effective and well-prepared safeguards the entire planet could be consumed within days. The only barrier to this potential destruction will be the distribution of knowledge of how to build such a thing. With the model of open-source software as a perverse example, it seems that such a barrier will always eventually be porous, as motivated individuals will always find a way to emulate those things which are otherwise in restricted domains. Again, given a concerted collaborative effort (consider a covert nanoweapon wiki) over a long enough period of time the potential for disaster is grave.
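The 'consumed within days' claim follows from nothing more than doubling arithmetic. The seed mass, target mass and doubling time below are my own illustrative guesses:

```python
# Doubling arithmetic behind the grey-goo timescale.
# Seed mass, target mass and doubling time are illustrative assumptions.
import math

seed_kg = 1e-18          # assumed mass of a single nanoscale replicator
biomass_kg = 1e15        # rough order of Earth's total biomass
doubling_seconds = 1000  # assumed time for each replicator generation

doublings = math.log2(biomass_kg / seed_kg)  # generations needed
days = doublings * doubling_seconds / 86_400
print(f"{doublings:.0f} doublings, ~{days:.1f} days")
```

Around 110 doublings, or a little over a day: exponential replication turns even planetary masses into a matter of days, which is why a single escaped replicator is the whole threat.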

Day 087 - Nanotech promises

Submitted by Sam on 16 August, 2011 - 00:09

There are immense consequences to molecular nanotechnology advancing to the level of functional programmable molecular assemblers and productive nanofactories, the implications of which have been studied, debated and systematized since Eric Drexler published his seminal book on nanotechnology, Engines of Creation, in 1986. Indeed, the Foresight Institute, which Drexler co-founded in the same year as its publication, has been working steadily to promote awareness of transformative technologies like nanotech, and to enhance knowledge and promote the critical discussion which will ensure that these technologies are put to safe and beneficial use.

And the beneficial uses of nanotechnology are manifold. In its mature form, molecular assembly will slash the cost of manufacturing every type of product, reducing the cost of producing even today's most expensive commodities, like computer chips, aeroplanes, missiles, surgical tools etc., towards a bottom limit which will be set by the cost of the raw materials and energy, amounting to only a few pennies per kilogram. The only significant cost will be an initial expenditure in the design of the products, because as soon as a blueprint exists molecular assemblers will be able to churn out enough copies to satisfy any demand very, very quickly. This development has the potential to utterly upset our current market structure, rewrite all intellectual property laws and regulations, and disrupt the world economy like nothing before.

Not only will nanotechnology greatly reduce the cost of manufacturing today's products, it will allow us to improve and upgrade them with new nano-materials, many times stronger and lighter than anything we have in existence today. These new materials will open up possibilities for further new technologies that we cannot possibly predict, and enhance those that already exist beyond recognition. Inexpensive, and radically strong and lightweight materials will very literally expand the frontiers of human endeavour, enabling widespread space exploration as cost- and energy-efficient spacecraft become a commonplace.

Nanotechnology promises to fuel such spacecraft cleanly. It will make renewable energy viable, allowing molecular solar cells to be manufactured on such a widespread scale that they could be used to coat roads and roofs everywhere, providing enough clean energy to satisfy the entire world's demands. Indeed, the nanofactory manufacturing revolution promises the greenest of futures, not only solving the global energy crisis but also removing the root cause of much of the world's pollution itself, eradicating all traditional industrial processes and their hazardous by-products and chemical pollution. Nanofactories will work with a molecular efficiency designed to produce no pollutants at all.

On top of these industrial and environmental potentials, nanotechnology also carries the promise of a completely new type of medicine – one that could potentially cure all diseases and stop and reverse ageing entirely. Medical nanorobots could repair and defend bodies on the cellular level, performing molecular surgery to repair our own biological nano-machines and destroy cancer cells.

Day 086 - Molecular assemblers and nanofactories

Submitted by Sam on 15 August, 2011 - 02:30

The same principles which apply to everyday mechanical engineering can be implemented on the nanometer scale, allowing hypothetical systems to combine atoms and molecules to create atomically precise components that can in turn be joined together to make macroscopic wholes, just as factory production lines today assemble (much less precise) pieces into large, complex objects.

Unlike in factories today, machine movements at the nanometer scale are millions of times faster than our current 'human-scale' processes, and will allow hugely complex products – like billion-core laptops – to be assembled in hours or days. These molecular machines will be able to bond virtually any combination of molecules together in any stable pattern, adding a few atoms at a time in a structured three-dimensional layering approach to produce machines and products that are stronger, lighter, more efficient and smaller than anything our traditional manufacturing processes could ever achieve.

The basic components of the nanofactories which will be capable of this universal engineering are molecular assemblers, or fabricators. They will be able to tolerate freezing or boiling, acid or vacuum, and will let us build anything that the physical laws of the universe allow to exist, limited only by our own ability to design. Nanofactories would contain trillions of these molecular assemblers, arranged into an orderly array, each working on a minute fraction of the finished workpiece, a nanoblock, passing the finished pieces along a nanoscale assembly line – made from molecular conveyor belts, molecular pulleys and grabbers – to be joined together by increasingly large assembly stations.

The nanofactories will be built from a highly modular, repetitive and scalable architecture, and will be able to fit onto a tabletop. Once the first nanofactory has been built, it could be programmed to build another nanofactory in a matter of days, and then nanofactories could proliferate exponentially until the world's demand was satisfied. With the correct designs, enough power, and with enough chemical feedstock (the raw materials that the nanofactory uses to synthesize its products), these factories will be able to build anything we can imagine.
