Lifeboat Foundation: Mr. Freitas, what is the latest thing you’ve been working on?
 
Robert A. Freitas Jr.: My professional goal for the last two decades has been, and continues to be, to help make life-extending medical nanorobotics technologies happen as fast as humanly possible. Over the last several years, I’ve been spending most of my time in two areas.
 
First, I’ve continued to develop concepts, designs and analysis for the last two books in my four-volume Nanomedicine series. This includes creating new designs and missions for medical nanorobots, analyses of nanorobot control theory, and collaborations on various other nanomedicine-related projects, ranging from nanorobot animations with talented artists to nanorobot technical studies with several eager young PhD students.
 
Second, I’ve been trying to figure out how to build diamondoid nanorobots, starting from current manufacturing technologies. This necessarily involves researching methods of positionally-controlled atomically-precise fabrication, particularly diamond mechanosynthesis (DMS), using ab initio quantum chemistry simulations, and trying to push forward the development of diamondoid nanofactories as fast as possible. The effort has included the creation (with Ralph Merkle) of the Nanofactory Collaboration, which is establishing working collaborations with computational theorists and scanning probe experimentalists around the world as the foundation for a practical nanofactory development project.
 
So, with this background, what’s new?
 
In the first (nanomedicine) area, culminating five years of intermittent effort, I’ve finally finished my latest theoretical scaling study of a new diamondoid medical nanorobot called the “chromallocyte”. This is the first full technical description of a cell repair nanorobot ever published. The nanorobot design addressed in the paper is a very important one: it is perhaps the key nanorobotic system for anti-aging and life extension applications.
 
Quoting part of the abstract:
“The ultimate goal of nanomedicine is to perform nanorobotic therapeutic procedures on specified individual cells comprising the human body. This paper reports the first theoretical scaling analysis and mission design for a cell repair nanorobot. One conceptually simple form of basic cell repair is chromosome replacement therapy (CRT), in which the entire chromatin content of the nucleus in a living cell is extracted and promptly replaced with a new set of prefabricated chromosomes which have been artificially manufactured as defect-free copies of the originals. The chromallocyte is a hypothetical mobile cell-repair nanorobot capable of limited vascular surface travel into the capillary bed of the targeted tissue or organ, followed by extravasation, histonatation, cytopenetration, and complete chromatin replacement in the nucleus of one target cell, and ending with a return to the bloodstream and subsequent extraction of the device from the body, completing the CRT mission….”
The title of the paper is “The Ideal Gene Delivery Vector: Chromallocytes, Cell Repair Nanorobots for Chromosome Replacement Therapy” and it is currently in press at the peer-reviewed Journal of Evolution and Technology (and is soon to be available online).
 
In the second (nanofactory) area, in February I completed the core of a major three-year project (with Ralph Merkle) to computationally analyze a comprehensive set of DMS reactions and tooltips that could be used to build diamond, graphene (e.g., carbon nanotubes), and all of the tools themselves including all necessary tool recharging reactions.
 
So far we’ve defined a total of 53 reaction sequences incorporating 252 reaction steps, with 1,192 individual DFT-based reaction energies reported. The reaction sequences range in length from 1 to 13 reaction steps (typically 4), with 0 to 10 possible pathological side reactions or rearrangements (typically 3) reported per reaction. The reactions have been laid out in tables and systematized.
 
The cleanup work on this material should be finished in a month or two, after which we can prepare the graphics, write the paper for publication in the peer-reviewed Journal of Computational and Theoretical Nanoscience, and ready our patent filing. We’re very excited by this work because it will be the first published paper to lay out a complete set of positionally-controlled diamondoid-building reactions, with all plausible unwanted side reactions analyzed, validated using good quality ab initio (DFT) quantum chemistry calculations. These reactions will form the core of our roadmap to develop diamond mechanosynthesis along a direct path that leads, ultimately, to the design and construction of the first diamondoid nanofactory.
 
LF: You are one of the few scientists working on molecular assemblers, and cofounded the Nanofactory Collaboration project. If you had $1 million/year, how long do you think it would take your team to develop a working molecular assembler?
 
RF: We’ve been trying to put some numbers to this over the last year or so, working from the (perhaps unrealistic) assumption that the funds would be spent in a completely focused manner toward the goal of a primitive diamondoid nanofactory that could assemble rigid diamondoid structures involving carbon, hydrogen, and perhaps a few other elements. Very roughly, our latest estimates suggest that an ideal research effort paced to make optimum use of available computational, experimental, and human resources would probably run at a $1–5M/yr level for the first 5 years of the program, ramp up to $20–50M/yr for the next 6 years, then finish off at a ~$100M/yr rate culminating in a simple working desktop nanofactory appliance in year 16 of a ~$900M effort.
 
Of course the bulk of this work, after the initial 5 year period, would be performed by people, companies, and university groups recruited from outside the Nanofactory Collaboration. And it would be easy for the project to take twice as long and cost ten times more (or worse) if efforts are not properly focused.
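 
As a quick back-of-the-envelope illustration of how these figures add up, the sketch below simply multiplies the quoted per-year funding ranges by the quoted phase durations. The phase labels are my own shorthand, not official Nanofactory Collaboration budget categories.

```python
# Rough totals for the nominal 16-year nanofactory funding schedule described above.
# Phase labels are illustrative shorthand; dollar figures come from the quoted ranges.
phases = [
    # (label, duration in years, low $M/yr, high $M/yr)
    ("Years 1-5:   initial theory and DMS experiments", 5,   1,   5),
    ("Years 6-11:  ramp-up",                            6,  20,  50),
    ("Years 12-16: full-scale nanofactory development", 5, 100, 100),
]

low_total = sum(years * lo for _, years, lo, hi in phases)
high_total = sum(years * hi for _, years, lo, hi in phases)

for label, years, lo, hi in phases:
    print(f"{label}: {years} yr at ${lo}-{hi}M/yr = ${years*lo}-{years*hi}M")
print(f"Cumulative outlay: ${low_total}M-${high_total}M over 16 years")
# -> $625M-$825M, in the same ballpark as the ~$900M total quoted above
```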
 
The key early milestone is to demonstrate positionally-controlled carbon placement on a diamond surface by the end of the initial 5 year period. We believe that successful completion of this key experimental milestone would make it easier to recruit significant additional financial and human resources to undertake the more costly later phases of the nanofactory development work.
 
If there are no major technical hitches, we estimate that an outlay of about $5M over a 5 year period could complete Phase IA: the ability to perform primitive diamond mechanosynthesis and to build very simple diamond structures composed of carbon and hydrogen using a vacuum-based (UHV) scanning probe-type experimental apparatus — though probably not terribly reliably, at first.
 
We can provide a few more details to any long-horizon entrepreneurs who are seriously considering an investment in such an effort. As noted earlier, we expect to be putting together a licensable patent portfolio to protect future economic value for potential investors and to guarantee our own unfettered access to the technology. A funding level of $1M/yr on a 5 year commitment would allow us to launch a program that would have a good chance of completing the Phase IA goal.
 
LF: Numerous writers have said it is likely that the first commercial nanofactories will use carbon-containing molecules as feedstock. On the Nanofactory Collaboration website, you state “the principal input to a diamondoid nanofactory is simple hydrocarbon feedstock molecules such as natural gas, propane, or acetylene.” What makes hydrocarbons preferable to other carbon-containing molecules, such as carbon dioxide from the atmosphere, or carbonate rocks?
 
RF: That’s a very good, fundamental technical question to ask. It’s true that any feedstock molecule containing carbon atoms can, in principle, be used as a source of carbon atoms for construction of diamondoid objects. But diamond is essentially a large hydrocarbon molecule, so it should not be surprising that chemically similar hydrocarbons are the most efficient precursor material.
 
Oxygen-rich carbon feedstock generally requires much more energy to convert to diamond than hydrogen-rich feedstock, and can also lead to significant amounts of waste products if there are lots of unused extra atoms in the feedstock. The Merkle-Freitas hydrocarbon assembler, the first zero-emissions (non-polluting) bottom-up replicator ever proposed, uses acetylene feedstock as its sole carbon and hydrogen source. Dealing with noncovalent feedstocks (e.g., ionic-bonded minerals such as calcium carbonate) presents additional complications.
 
The table below shows the net energy required to complete several mechanosynthetic reactions. Each reaction yields a single molecule of adamantane, the smallest possible chunk of diamond, among the products, starting from a variety of feedstock (reactant) molecules. A negative energy indicates that the net reaction is “exoergic” and readily moves downhill across the potential energy landscape, releasing surplus energy overall. A reaction with positive energy is “endoergic” and must be forced uphill by adding energy from outside. (For technical readers, reaction energies are calculated using Gaussian98/DFT at the B3LYP/6-311+G(2d,p) // B3LYP/3-21G* level of theory with uncorrected zero-point corrections (ZPCs), on fully converged structures with no imaginary frequencies except for CaCO3.)
 
Net Mechanosynthetic Reaction to Produce          Energy         % C       % C
an Adamantane Molecule (C10H16)                   (kcal/mole)    by wt     by #
5 C2H2 + 3 H2    →  C10H16                           –261.9      92.3%     50.0%
5 C2H4           →  C10H16 + 2 H2                     –58.8      85.7%     33.3%
(10/3) C3H8      →  C10H16 + (16/3) H2                +66.9      81.8%     27.3%
5 C2H6           →  C10H16 + 7 H2                     +88.7      80.0%     25.0%
10 CH4           →  C10H16 + 12 H2                   +172.5      75.0%     20.0%
10 CO2 + 8 H2    →  C10H16 + 10 O2                 +1,360.7      27.3%     33.3%
10 CaCO3 + 8 H2  →  C10H16 + 10 CaO + 10 O2        +1,654.9      12.0%     20.0%

From the table, we can see that unsaturated hydrocarbon feedstocks have the highest carbon content per molecule, the best energetics, and leave behind the fewest post-reaction discard atoms as waste products when used to build diamond. These are the highest-quality feedstocks for diamond mechanosynthesis.
 
Employing saturated hydrocarbons of increasing chain length (CH4, C2H6, C3H8, …) as feedstock also somewhat improves net reaction energy. Note that using CO2 as the carbon source costs 8 times more input energy than for natural gas (CH4) feedstock, or 20 times more input energy than for propane (C3H8). Taking apart calcium carbonate minerals such as limestone, marble, calcite, or aragonite to extract their carbon content is even less energy efficient. But if you’re willing to spend the extra energy and create lots of waste products in the process, it could probably be done.
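 
To make the comparison concrete, here is a minimal sketch (purely illustrative; the reaction energies are copied from the table above, and the helper function simply recomputes carbon mass fractions from standard atomic masses) showing where the “8 times” and “20 times” figures come from.

```python
# Feedstock comparison for adamantane (C10H16) production, using the table above.
# Net reaction energy in kcal per mole of adamantane produced.
reaction_energy = {
    "C2H2 (acetylene)":  -261.9,
    "C2H4 (ethylene)":    -58.8,
    "C3H8 (propane)":     +66.9,
    "C2H6 (ethane)":      +88.7,
    "CH4 (methane)":     +172.5,
    "CO2":              +1360.7,
    "CaCO3":            +1654.9,
}

# How much more input energy does CO2 require than the hydrocarbon feedstocks?
co2 = reaction_energy["CO2"]
print(f"CO2 vs CH4:  {co2 / reaction_energy['CH4 (methane)']:.1f}x")   # ~7.9x  ("8 times")
print(f"CO2 vs C3H8: {co2 / reaction_energy['C3H8 (propane)']:.1f}x")  # ~20.3x ("20 times")

# Carbon content by weight for a feedstock containing C, H, O, and Ca atoms.
def carbon_weight_fraction(n_C, n_H=0, n_O=0, n_Ca=0):
    """Mass fraction of carbon, using approximate standard atomic masses."""
    mass_C, mass_H, mass_O, mass_Ca = 12.011, 1.008, 15.999, 40.078
    total = n_C * mass_C + n_H * mass_H + n_O * mass_O + n_Ca * mass_Ca
    return n_C * mass_C / total

print(f"C2H2 carbon fraction:  {carbon_weight_fraction(2, 2):.1%}")        # ~92.3%
print(f"CaCO3 carbon fraction: {carbon_weight_fraction(1, 0, 3, 1):.1%}")  # ~12.0%
```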
 
LF: What do you think are the first products that nanofactories will build?
 
RF: The first products will almost certainly be more nanofactories, nanofactory components and manufacturing tools, in order to ramp up total productive capacity as quickly as possible.
 
Once sufficient productive capacity exists, the nature of the next products to be made will be dictated by a multitude of factors such as: (1) how quickly the nanofactory can fabricate products, (2) the range of elements from which the nanofactory can fabricate products (hydrocarbons only, or other atoms?), (3) the size range of products that can be made, (4) the cost per kilogram of assembled products (early products using the first primitive nanofactories may still be extraordinarily expensive), (5) the utility of the products, (6) who’s paying for the R&D and holds the patent/licensing rights (e.g., private company, NIH, university, military?), (7) how much funding is available, and so forth.
 
But I think a good case can be made for medical nanorobots being among the early consumer products. That’s because:
 
(1) even relatively small (milligram/gram) quantities of medical nanorobots could be incredibly useful;
 
(2) nanorobots can save lives and extend the human healthspan, thus will be in high demand once available;
 
(3) manufacturers of such high value products (or of the nanofactories, depending on the economic model) can command a high price from healthcare providers, which means nanorobots should be worth building early, even though early-arriving nanomedical products are likely to be more expensive (in $/kg) than later-arriving products; and
 
(4) the ability to extract, re-use and recycle nanorobots may allow the cost per treatment to the individual patient to be held lower than might be expected, with treatment costs also declining rapidly over time.
 
LF: If nanofactories were invented in 2015, what regulations do you think would be put into effect on the products people can build? How would those regulations be enforced?
 
RF: This also depends on many factors which are presently unknown or are hard to precisely specify. Today’s legislation and private restrictions on licensing/use of, e.g., software and music, are certainly starting to explore the space of possibilities in the commercial realm. Other regulations will likely also be put in place to ensure public safety, as harbingered by the recently circulated “Draft Guidelines to Secure the Safe Performance of Next Generation Robots” in Japan. There will also be large numbers of taxes, fees, and surcharges imposed by governmental entities on nanofactories, both to mitigate public impacts and to increase tax revenues.
 
In a recent analysis of some of the basic economics of personal nanofactories, I listed many such government-imposed regulatory costs — and these costs, when coupled with insurance premiums and licensing fees imposed by private sector owners of the relevant intellectual property rights, will likely provide an irreducible regulatory cost floor of perhaps $0.50-$1.00 per kilogram on the price to end consumers of nanofactory-built products. That’s about as cheap as potatoes, but certainly not “free”.
 
LF: In Design of a Primitive Nanofactory, Chris Phoenix writes that a full-fledged nanofactory could probably be scaled up from reprogrammable self-replicating assemblers in mere months or even weeks. Do you agree with this assessment?
 
RF: This is an interesting speculation that should be examined further, but I’m very skeptical. It’s certainly true that the technologies needed to perform bottom-level diamond mechanosynthesis and the various as yet poorly-defined parts manipulation and assembly tasks required aboard a free-standing self-replicating assembler are probably subsets of the set of all technologies that will be necessary in a diamondoid nanofactory. But there are numerous additional technologies that will probably be needed for the successful construction and operation of a nanofactory that a simple self-replicating assembler would not require.
 
Just off the top of my head, a few of these additional technologies might include: (1) design and control of intricate trillion unit systems, where each unit has complex behaviors and physical interactions with other units that must be studied, prototyped, tested, and reworked at various levels of aggregation; (2) design and control of complex flowthrough pathways for feedstock, energy, information, waste intermediates, materials recycling, and so forth; and (3) analysis and design for system reliability including (a) redundancy analysis and design, (b) analysis and design of switching/handoff schemes among multiple alternate production lines, (c) analysis of the accumulation rate of dead production lines and the effect of such dead lines on system architecture and operation, (d) provision for parts testing and handling/reworking of reject parts and their buffer storage and rerouting through the system, and so forth. All of these things require designing, prototyping, testing, and reworking additional physical structures and component hierarchical organizations that are not needed in a single standalone assembler nanorobot.
 
It has been my experience that when you sweat the technical details, you start discovering all sorts of hidden roadblocks, detours, and needed workarounds/redesigns that were not recognized or anticipated from the outset. You’d be surprised at how many seemingly plausible diamond mechanosynthesis reactions turn out not to work so well upon closer inspection. I expect the universe to remain equally recalcitrant at all stages of nanofactory development.
 
Furthermore, just because you’ve gotten a laboratory prototype nanofactory working doesn’t mean the system is reliable enough yet for commercial (let alone household!) sale. The legal issues alone (e.g., profit-sharing among numerous IP owners, product liability issues, regulatory issues, etc.) could take some years to resolve.
 
LF: You coauthored the first comprehensive work analyzing physically self-replicating automatons. Do you feel that such machines could be a threat to the human species over the next 50 years?
 
RF: A possible threat? Certainly. But early nanofactories necessarily will be extremely primitive. They will be very limited in the composition and complexity of products they can build and in the types of chemical elements and feedstocks they can handle. They will be fairly unreliable and will require significant supervision and maintenance. They will be relatively expensive to own and operate. Over a period of perhaps one or two decades, nanofactory costs and capabilities will slowly improve and product costs will gradually drift downward toward the likely $1/kg regulatory floor, giving society some time to adjust to new threats as nanofactories become increasingly ubiquitous in our environment and economy.
 
Along the way, we should get a lot of practice dealing with emergencies and threats that are spawned by the personal nanofactory revolution. These will include novel but probably rare threats such as the first generations of rogue replicators that nanofactories could, if not adequately regulated, be programmed to build. Perhaps the least problematic danger of replicative technology is the risk of accident or malfunction. Engineers generally try to design products that work reliably and companies generally seek to sell reliable products to maintain customer goodwill and to avoid expensive product liability lawsuits.
 
But accidents do occasionally happen, and people can be counted on to figure out clever ways to abuse new technologies. Here again, our social system has established a set of progressive responses to deal efficiently with this sort of problem. The classic example is fire departments which handle both accidental fires and cases of deliberate arson. In similar manner, we will put in place the equivalent of fire departments to deal with undesirable events involving both replicative and nonreplicative nanomachinery in a fast and effective manner. These defensive capabilities will be made possible, and made necessary, by the existence of molecular manufacturing (MM), and will preserve human life and property thus allowing us to enjoy the innumerable benefits of this new technology.
 
Several additional points should probably be made.
 
First, replicators can be made “inherently safe”. Personal nanofactories will fall into the replicator category, since their general-purpose manufacturing functions give them at least the theoretical ability to replicate, even if that ability is partially disabled (by hardware or software locks) or never actually exercised by consumers. The products nanofactories build could also be replicators. An “inherently safe” replicator is a replicating system which, by its very design, is inherently incapable of surviving mutation or of undergoing evolution (and thus of evolving out of our control or developing an independent agenda), and which, equally importantly, does not compete with biology for resources (or worse, use biology as a raw materials resource).
 
One primary route for ensuring inherent safety is to employ the broadcast architecture for control and the vitamin architecture for materials, which together eliminate the likelihood that the system can replicate outside of a very controlled and highly artificial setting; numerous other routes and guidelines can achieve the same end. Many dozens of additional safeguards may be incorporated into replicator designs to provide redundant embedded controls and thus an arbitrarily low probability of replicator malfunctions of various kinds, simply by selecting the appropriate design parameters as described in a comprehensive map of the replicator design space published in Kinematic Self-Replicating Machines, a book I coauthored with Ralph Merkle in 2004.
 
Of course, it must be conceded that while nanotechnology-based manufacturing systems and their products can be made safe, they could also be made dangerous. Just because free-range self-replicators may be undesirable, inefficient and unnecessary in normal commerce does not imply that they cannot be built, or that nobody will build them. Someone is bound to try it.
 
So, my second point is that unsafe replicators should be highly regulated or made illegal to build, own, or operate, with severe criminal sanctions for violations. Artificial kinematic self-replicating systems which are not inherently safe should not be designed or constructed, and indeed should be legally prohibited by appropriate juridical and economic sanctions, with these sanctions to be enforced in both national and international regimes. I repeat my call, first made in 2000, that there should be a carefully targeted moratorium or outright legal ban on the most dangerous kinds of molecular manufacturing systems, while still allowing the safe kinds of molecular manufacturing systems to be built — subject to appropriate monitoring and regulation commensurate with the lesser risk that they pose.
 
As a more general point, virtually every known technology comes in “safe” and “dangerous” flavors which necessarily must receive different legal treatment. The existence of a “safe” version of a technology does not preclude the existence of a “dangerous” version, and vice versa. The laws of physics permit both versions to exist. The most rational societal response has been to classify the various applications according to the risk of accident or abuse that each one poses, and then to regulate each application accordingly. The societal response to the tools and products of molecular manufacturing will be no different. Some MM-based tools and products will be deemed safe, and will be lightly regulated. Other MM-based tools and products will be deemed dangerous, and will be heavily regulated, or even legally banned in some cases.
 
Of course, the mere existence of legal restrictions or outright bans does not preclude the acquisition and abuse of a particular technology by a small criminal fraction of the population. The most constructive response to this class of threat is to increase monitoring efforts to improve early detection and to pre-position defensive instrumentalities capable of responding rapidly to these abuses, as I first recommended in 2000 in the context of molecular manufacturing.
 
Accordingly, my third point is that the relatively small number of unsafe replicators that get made illegally and released into the environment despite the severe sanctions against doing so can be contained and destroyed using a nanoshield defense which may be deployed locally, regionally, or even globally, well in advance of an outbreak. In the case of individual lawbreakers or rogue states that might build and deploy unsafe artificial mechanical replicators, the defenses already developed (or evolved in nature) against harmful biological replicators all have analogs in the nanomechanical world that should provide equally effective, and likely superior, defenses. Molecular nanotechnology will make possible ever more sophisticated methods of environmental monitoring and prophylaxis. However, advance planning and strategic foresight will be essential in maintaining this advantage.
 
LF: Many scientists consider self-replicating machines to be impossible. Can you summarize the main arguments or insights you think they are missing?
 
RF: First of all, replication has seemingly been found only in biological objects so there is the natural tendency to commit a basic logical error and conclude that, therefore, only biological objects can exhibit replication. But replication is actually a fairly simple function (i.e., pattern copying) that can be defined along a multidimensional spectrum of possibilities and may be embedded in a vast number of classes of systems, including nonbiological systems.
 
These possibilities range from extremely simple forms of replication (e.g., a spreading fire, falling dominoes) through somewhat more sophisticated forms (e.g., Penrose blocks, chemical autocatalysis, organic nanotube self-assembly, self-assembly of mechanical parts) to more complex forms (e.g., companies, cultural entities, and other memetic substrates) to the most complex forms of all (e.g., biology). Replication is a function that may operate on virtually any substrate, and there is a vast literature on this subject.
 
We’re acutely aware of replication in biological systems because we confront the phenomenon almost every day in our lives. And replication in biological systems is fairly complex. But that’s not because replication is an inherently complex function. Rather, it is because biological systems must be very complex. Having to survive in the wild means that biology must be able to forage for and metabolize a broad range of nutrients, and exhibit numerous behaviors and functions wholly unrelated to replication — most importantly, the ability to evolve.
 
None of these capabilities are fundamentally required for replication. Replicators need not forage; they can be restricted to just a single edible “food”. The replicating entity can have a relatively simple suite of nonreplicative behaviors. The replicator has no inherent need to evolve (and indeed should be prevented from doing so). Giving a machine the ability to replicate is no more difficult than giving it any other kind of moderately sophisticated behavior. There’s nothing magic about replication.
 
The second important thing the critics are missing is that self-replicating machines have, in fact, already been built and operated, directly falsifying the hypothesis that they are “impossible”. For example, a number of simple mechanical devices capable of primitive replication from simple substrates have been known since the 1950s, and self-replicating computer programs have been known at least since the 1970s.
 
The Japanese manufacturing company Fujitsu Fanuc Ltd. briefly operated the first “unmanned” robot factory in the early 1980s, then reopened an improved automated robot-building factory in April 1998 that uses larger two-armed robots to manufacture smaller robots, with a minimum of human intervention, starting from inputs of robot parts, at the rate of 1000 daughter copies (of individual robots) per year. Apparently a different part of the factory uses a distributive warehouse system for automatically assembling the larger robots.
 
Other robotic manufacturers such as Yaskawa Electric also use robots to make robot parts. The manufacturing base of most industrialized countries, of many states or provinces, and even of some individual large municipalities can produce most of the material artifacts of which the base itself is composed, constituting yet another existence proof for artificial or technological self-replication.
 
Finally, the world’s first macroscale autonomous machine replicator, made of LEGO® blocks, was built and operated in 2002. A video clip, available online (21 MB AVI), shows the machine crawling around a track, grabbing compound parts with a two-fingered gripper and assembling a second copy of itself near the center of the track, during a single run lasting several minutes. The arguments that have been advanced against the feasibility of artificial self-replicating systems in general and assemblers in particular are of uniformly poor technical quality and display an astonishing ignorance of the relevant literature.
 
It may be recalled that in 1959, biologist Garrett Hardin reported that some geneticists had called genetic engineering “impossible” as well. Similar criticisms of machine replication survive today only among ill-informed authors who are obviously unfamiliar with the voluminous technical literature on the subject.
 
LF: How difficult do you think it would be to design an ecophage?
 
RF: Not nearly difficult enough. In fact, the design should be rather obvious to anyone who is “skilled in the art”.
 
The only remaining major showstopper-type technical uncertainty in ecophage design is the question of the reliability of the required mechanosynthetic reactions during room temperature operation. If room temperature DMS cannot be made sufficiently reliable, this could impose what I call “thermal censorship” on nanomechanical ecophagy in which the ambient-temperature self-replication of diamond-based ecophages that acquire feedstock from natural organic matter might be prevented by the unreliability of the required foundational mechanosynthetic reactions. This is a purely technical issue that urgently needs further study. I’ve put together a grant proposal for possible Lifeboat Foundation sponsorship to examine this critical issue and would urge readers to fund such research. This would give us a much more complete picture of the existential threat we may face from nanorobotic ecophagy and related nanoweaponry.
 
LF: You coauthored the Lifeboat Foundation’s NanoShield report. In a few sentences, can you summarize the main prescriptions you present for ensuring nanosafety in the 21st century?
 
RF: The NanoShield proposal is essentially an update and extension of the analysis in my original paper on the threat of global ecophagy. In the new proposal, we reiterate that the first step is to continuously monitor the environment for the characteristic observational signatures of emergent ecophagic threats or deployed nanoweapons. We recommend the establishment of a national government agency specifically tasked to undertake such monitoring and to coordinate all defensive responses, perhaps in collaboration with similarly tasked governmental entities in other countries around the world.
 
Once a threat is detected, three classes of response may be employed by the authorized agency:
 
(1) “Nonspecific immunity defenses”, which are first-line defensive nanorobots having generic abilities to disable ecophages, drawn from prepositioned stores manufactured by a national or global network of defensive nanofactory stations put in place well in advance of any outbreak of the threat.
 
(2) “Specific immunity defenses” that would not be launched until monitoring authorities had positively identified the ecophage and determined its known weaknesses, allowing a specific targeted response designed to attack only the particular ecophage in question.
 
(3) “Emergency defenses” that are effective against a wide range of ecophagy-types and constitute broad-brush emergency responses to a larger ecophagic threat — for example, to the discovery of ecophagic replicators too numerous for conventional cleanup or the observation of an uncharacterized ecophage or one having no known specific countermeasures that is replicating unexpectedly rapidly.
 
Readers interested in the details of this strategy should consult the NanoShield proposal.
 
LF: What are some of the most dangerous non-self-replicating nanoweapons you can imagine? How long do you think it will take for them to be developed and deployed?
 
RF: I personally most fear those threats whose operation will rob human beings of their free will and their inherent ability to make informed choices, as I’ve written about elsewhere. Our minds are what make us unique, both as individuals in comparison to each other and as a species in comparison to the rest of the known universe. Destroying a free human mind is therefore the deepest possible violation of our essence. Engineering such violations will probably take a fairly mature level of medical nanorobotics technology, so we may not face a serious threat from this source until perhaps the 2040s.
 
LF: What would your ideal nanofactory deployment scenario look like? What’s the best that could happen?
 
RF: The ideal deployment scenario would include personal nanofactories (PNs) in the possession of as many individual households as possible. Retail cost to consumers for a high-end model may be about $4400, similar in cost to a very nice modern appliance in the developed world. Low-end PNs could be available for substantially lower cost, possibly on a subsidized basis for those in relative poverty or for those living in third-world countries.
 
PNs would produce all manner of consumer goods including durables (such as shoes, wristwatches and toys) and nondurables (such as food and beverages) at an average cost of about $1/kg. Premium designs for more feature-laden or stylish products would be available for home manufacture for an additional fee, with both open-source and premium product designs easily downloadable from the internet. Multiple layers of regulation and embedded controls would prevent access to, or the unauthorized manufacture of, products known to pose a serious public threat such as ecophages or specific classes of nanoweapons. Thus could the greatest mass of humanity finally be liberated from the tyranny of material want, while maintaining the greatest possible public safety and reducing the environmental footprint of humanity to the barest minimum.