
Since ancient times people have been searching for the secret of immortality. Their quest has always been, without exception, about a physical item: a fountain, an elixir, an alchemist’s remedy, a chalice, a pill, an injection of stem cells or a vial containing gene-repairing material. It has never been about an abstract concept.

Our inability to find a physical cure for ageing is explained by a simple fact: We cannot find it because it does not exist. It will never exist.

Those who believe that someday some guy is going to discover a pill or a remedy and give it to people so that we will all live forever are, regrettably, deluded.

I should highlight here that I refer to a cure for the ageing process in general, and not a cure for a specific medical disease. Biotechnology and other physical therapies are useful in alleviating many diseases and ailments, but these therapies will not be the answer to the basic biological process of ageing.

In a paper I published in the journal Rejuvenation Research I outline some of the reasons why I think biotechnology will not solve the ageing problem. I criticise projects such as SENS (which are based upon physical repair of our ageing tissues) as essentially useless against ageing. The editor’s rebuttal, being weak and mostly irrelevant, only strengthened my point. There are insurmountable psychological, anatomical, biological and evolutionary reasons why physical therapies against ageing will not work and will be unusable by the general public. These reasons include pleiotropy, non-compliance, the topological properties of cellular networks, non-linearity, strategic logistics, and polypharmacy and tolerance, among others.

So, am I claiming that we are doomed to live a life of age-related pathology and degeneration, and never be able to shake off the ageing curse? No, far from it. I am claiming that it is quite possible, even inevitable, that ageing will be eliminated, but not through a physical intervention based on biomedicine or biotechnology. Ageing will be eliminated through fundamental evolutionary and adaptation mechanisms, and this process will take place whether we want it or not.

It works like this: We now age and die because we become unable to repair random background damage to our tissues. The resources necessary for such repair have been allocated by the evolutionary process to our germ cell DNA (in order to assure the survival of the species) and taken away from our bodily cells. Until now, our environment was so full of dangers that it was more thermodynamically advantageous for nature to maintain us only up to a certain age, long enough to have progeny, and then let us die, allowing our progeny to continue life.

However, this is now changing. Our environment is becoming increasingly secure and protective. Our technology protects us against dangers such as infections, famine and accidents. We are becoming increasingly embedded in the network of a global techno-cultural society, which depends upon our intelligence in order to survive. There will come a time when the biological resources spent to bring up children would be better spent protecting us instead, because it would be more economical for nature to maintain an existing, well-embedded human than to let that person die and create a new one, who would then need more resources in order to re-engage with the techno-cultural network. Disturbing the network by removing its constituents and trying to engage new, inexperienced ones is not an ideal action, and therefore it will not be selected by evolution.

The message is clear: you stand a better chance of defying ageing if, instead of waiting for someone to discover a pill to make you live longer, you become a useful part of a wider network and engage with a technological society. The evolutionary process will then ensure that you live longer, for as long as you are useful to the whole.

Further reading
http://ieet.org/index.php/IEET/more/kyriazis20121031

The Seven Fallacies of Aging

The Life Extension Hubris: Why biotechnology is unlikely to be the answer to ageing

http://www.ncbi.nlm.nih.gov/pubmed/25072550
http://arxiv.org/abs/1402.6910

Would you have your brain preserved? Do you believe your brain is the essence of you?

To noted American neuroscientist and futurist Ken Hayworth, PhD, the answer is an emphatic “Yes.” He is currently developing machines and techniques to map brain tissue at the nanometer scale — the key to encoding our individual identities.

A self-described transhumanist and President of the Brain Preservation Foundation, Hayworth aims to perfect existing preservation techniques, such as cryonics, and to explore and push emerging alternatives that could change the status quo. Currently, no brain preservation option offers systematic, scientific evidence of how much human brain tissue is actually preserved under today’s experimental methods. Such methods include vitrification, the procedure used in cryonics to try to prevent tissue from being destroyed by ice formation as it is cooled for cryopreservation.

Hayworth believes we can achieve his vision of preserving an entire human brain at an accepted and proven standard within the next decade. If Hayworth is right, is there a countdown to immortality?

To find out more, please take a look at the Galactic Public Archives’ newest video. We’d love to hear your thoughts.

Cheers!

What follows is my position piece for London’s FutureFest 2013, the website for which no longer exists.

Medicine is a very ancient practice. In fact, it is so ancient that it may have become obsolete. Medicine aims to restore the mind and body to their natural state relative to an individual’s stage in the life cycle. The idea has been to live as well as possible but also die well when the time came. The sense of what is ‘natural’ was tied to statistically normal ways of living in particular cultures. Past conceptions of health dictated future medical practice. In this respect, medical practitioners may have been wise but they certainly were not progressive.

However, this began to change in the mid-19th century, when the great medical experimenter Claude Bernard began to champion the idea that medicine should be about the indefinite delaying, if not outright overcoming, of death. Bernard saw organisms as perpetual motion machines in an endless struggle to bring order to an environment that always threatens to consume them. That ‘order’ consists in sustaining the conditions needed to maintain an organism’s indefinite existence. Toward this end, Bernard enthusiastically used animals as living laboratories for testing his various hypotheses.

Historians identify Bernard’s sensibility with the advent of ‘modern medicine’, an increasingly high-tech and aspirational enterprise, dedicated to extending the full panoply of human capacities indefinitely. On this view, scientific training trumps practitioner experience, radically invasive and reconstructive procedures become the norm, and death on a physician’s watch is taken to be the ultimate failure. Humanity 2.0 takes this way of thinking to the next level, which involves the abolition of medicine itself. But what exactly would that mean – and what would replace it?

The short answer is bioengineering, the leading edge of which is ‘synthetic biology’. The molecular revolution in the life sciences, which began in earnest with the discovery of DNA’s function in 1953, came about when scientists trained in physics and chemistry entered biology. What is sometimes called ‘genomic medicine’ now promises to bring an engineer’s eye to improving the human condition without presuming any limits to what might count as optimal performance. In that case, ‘standards’ do not refer to some natural norm of health, but to features of an organism’s design that enable its parts to be ‘interoperable’ in service of its life processes.

In this brave new ‘post-medical’ world, there is always room for improvement and, in that sense, everyone may be seen as ‘underperforming’, if not outright disabled. The prospect suggests a series of questions for both the individual and society: (1) Which dimensions of the human condition are worth extending – and how far should we go? (2) Can we afford to allow everyone a free choice in the matter, given the likely skew of the risky decisions that people might take? (3) How shall these improvements be implemented? While bioengineering is popularly associated with nano-interventions inside the body, similarly targeted interventions can of course be made outside the body – or indeed across many bodies – to produce ‘smart habitats’ that channel and reinforce desirable emergent traits and behaviours, and that may even leave long-term genetic traces.

However these questions are answered, it is clear that people will be encouraged, if not legally required, to learn more about how their minds and bodies work. At the same time, there will no longer be any pressure to place one’s fate in the hands of a physician, who instead will function as a paid consultant on a need-to-know and take-it-or-leave-it basis. People will take greater responsibility for the regular maintenance and upgrading of their minds and bodies – and society will learn to tolerate the diversity of human conditions that will result from this newfound sense of autonomy.

Julian Assange’s 2014 book When Google Met WikiLeaks consists of essays authored by Assange and, more significantly, the transcript of a discussion between Assange and Google’s Eric Schmidt and Jared Cohen.
As should be of greatest interest to technology enthusiasts, we revisit some of the uplifting ideas from Assange’s philosophy that I picked out from among the otherwise dystopian high-tech future predicted in Cypherpunks (2012). Assange sees the Internet as “transitioning from an apathetic communications medium into a demos – a people” defined by shared culture, values and aspirations (p. 10). This idea, in particular, I can identify with.
Assange’s description of how digital communication is “non-linear” and compromises traditional power relations is excellent. He notes, however, that relations defined by physical resources and technology (unlike information) continue to be static (p. 67). I highlight this as important for the following reason: it profoundly strengthens the hypothesis that state power will eventually recede and collapse in the physical world too, with the spread of personal factories and personal enhancement technologies (analogous to personal computers) such as 3D printers and synthetic life-forms, as explained in my own techno-liberation thesis and in the work of theorists like Yannick Rumpala.
When Google Met WikiLeaks tells, better than any other text, the story of the clash of philosophies between Google and WikiLeaks – despite Google’s Eric Schmidt assuring Assange that he is “sympathetic to you, obviously”. Specifically, Assange draws our attention to the worryingly close relationship between Google and the militarized US police state in the post-9/11 era. Fittingly, large portions of the book (p. 10–16, 205–220) are devoted to Assange’s account of the now exposed, world-molesting US regime’s war on WikiLeaks and its cowardly attempts to stifle transparency and accountability.
The publication of When Google Met WikiLeaks is really a reaction to The New Digital Age (2013), by Google chairman Eric Schmidt and Google Ideas director Jared Cohen. Unfortunately, I have not studied that book, although I intend to pen a fitting review of it in due course to follow on from this one. It is safe to say that Assange’s own review in the New York Times in 2013 was quite crushing enough. However, nothing could be more devastating to its pro-US thesis than the revelations of widespread illegal domestic spying exposed by Edward Snowden, which shook the US and the entire world shortly after The New Digital Age was released.
Assange’s review of The New Digital Age is reprinted in his book (p. 53–60). In it, he describes how Schmidt and Cohen are in fact little better than State Department cronies (p. 22–25, 32, 37–42), who first met in Iraq and were “excited that consumer technology was transforming a society flattened by United States military occupation”. In turn, Assange’s review flattens both of these apologists and their feeble pretense to be liberating the world, tearing their book apart as a “love song” to a regime, which deliberately ignores the regime’s own disgraceful record of human rights abuses and tries to conflate US aggression with free market forces (p. 201–203).
Cohen and Schmidt, Assange tells us, are hypocrites, feigning concerns about authoritarian abuses that they secretly knew to be happening in their own country with Google’s full knowledge and collaboration, yet did nothing about (p. 58, 203). Assange describes the book, authored by Google’s best, as a shoddily researched, sycophantic dance of affection for US foreign policy, mocking the parade of praise it received from some of the greatest villains and war criminals still at large today, from Madeleine Albright to Tony Blair. The authors, Assange claims, are hardly sympathetic to the democratic internet, as they “insinuate that politically motivated direct action on the internet lies on the terrorist spectrum” (p. 200).
As with Cypherpunks, most of Assange’s book consists of a transcript based on a recording that can be found at WikiLeaks, and in drafting this review I listened to the recording rather than reading the transcript in the book. The conversation moves in what I thought to be three stages, the first addressing how WikiLeaks operates and the kind of politically beneficial journalism promoted by WikiLeaks. The second stage of the conversation addresses the good that WikiLeaks believes it has achieved politically, with Assange claiming credit for a series of events that led to the Arab Spring and key government resignations.
When we get to the third stage of the conversation, something of a clash becomes evident between the Google chairman and the WikiLeaks editor-in-chief, as Schmidt and Cohen begin to posit hypothetical scenarios in which WikiLeaks could potentially cause harm. The disagreement evident in this part of the discussion apparently carried over into Schmidt and Cohen’s book, where they allege that “Assange, specifically” (or any other editor) lacks sufficient moral authority to decide what to publish. Instead, we find special pleading from Schmidt and Cohen for the state: while regime control over information in other countries is bad, US regime control over information is good (p. 196).
According to the special pleading of Google’s top executives, only one regime – the US government and its secret military courts – has sufficient moral authority to make decisions about whether a disclosure is harmful or not. Assange points out that Google’s brightest seem eager to avoid explaining why this one regime should have such privilege, and others should not. He writes that Schmidt and Cohen “will tell you that open-mindedness is a virtue, but all perspectives that challenge the exceptionalist drive at the heart of American foreign policy will remain invisible to them” (p. 35).
Assange makes a compelling argument that Google is not immune to the coercive power of the state in which it operates. We need to stop mindlessly chanting “Google is different. Google is visionary. Google is the future. Google is more than just a company. Google gives back to the community. Google is a force for good” (p. 36). It’s time to tell it how it is, and Assange knows just how to say it.
Google is becoming a force for bad, and is little different from any other massive corporation led by ageing cronies of the narrow-minded state that has perpetrated the worst outrages against the open and democratic internet. Google “Ideas” are myopic, close-minded, and nationalist (p. 26), and the corporate-state cronies who think them up have no intention to reduce the number of murdered journalists, torture chambers and rape rooms in the world or criticize the regime under which they live. Google’s politics are about keeping things exactly as they are, and there is nothing progressive about that vision.
To conclude with what was perhaps the strongest point in the book, Assange quotes NYT columnist Tom Friedman, who warned as early as 1999 that Silicon Valley is led less by the mercurial “hidden hand” of the market than by the “hidden fist” of the US state. Assange argues, further, that the close relations between Silicon Valley and the regime in Washington indicate that Silicon Valley now functions as a “velvet glove” on the “hidden fist” of the regime (p. 43). Similarly, Assange warns those of us of a libertarian persuasion that the danger posed by the state has two horns – one government, the other corporate – and that limiting our attacks to one of them means getting gored on the other. Despite its positive public image, Google’s (and possibly also Facebook’s) ties with the US state for the purpose of monitoring the US public deserve a strong public backlash.

Among transhumanists, Nick Bostrom is well known for promoting the idea of ‘existential risks’, potential harms which, were they to come to pass, would annihilate the human condition altogether. Their probability may be relatively small, but the expected magnitude of their effects is so great, so Bostrom claims, that it is rational to devote some significant resources to safeguarding against them. (Indeed, there are now institutes for the study of existential risks on both sides of the Atlantic.) Moreover, because existential risks are intimately tied to the advancement of science and technology, their probability is likely to grow in the coming years.
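Bostrom’s argument is at bottom a piece of expected-value arithmetic. A minimal sketch, with illustrative numbers of my own rather than Bostrom’s: suppose an existential catastrophe has probability $p = 10^{-4}$ and would forfeit $L = 10^{16}$ potential future lives, while a familiar catastrophe has probability $q = 10^{-1}$ and costs $\ell = 10^{9}$ lives. Then

\[
pL = 10^{-4} \times 10^{16} = 10^{12} \;\gg\; q\ell = 10^{-1} \times 10^{9} = 10^{8},
\]

so the expected loss from the ‘unlikely’ risk is ten thousand times greater, which is why Bostrom holds that even low-probability existential risks warrant significant resources.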

Contrary to expectations, Bostrom is much less concerned with ecological suicide from humanity’s excessive carbon emissions than with the emergence of a superior brand of artificial intelligence – a ‘superintelligence’. This creature would be a human artefact, or at least descended from one. However, its self-programming capacity would have run amok in positive feedback, resulting in a maniacal, even self-destructive mission to rearrange the world in the image of its objectives. Such a superintelligence may appear to be quite ruthless in its dealings with humans, but that would only reflect the obstacles that we place, perhaps unwittingly, in the way of the realization of its objectives. Thus, this being would not conform to the science fiction stereotype of robots deliberately revolting against creators who are now seen as their inferiors.

I must confess that I find this conceptualisation of ‘existential risk’ rather un-transhumanist in spirit. Bostrom treats risk as a threat rather than as an opportunity. His risk horizon is precautionary rather than proactionary: He focuses on preventing the worst consequences rather than considering the prospects that are opened up by whatever radical changes might be inflicted by the superintelligence. This may be because in Bostrom’s key thought experiment, the superintelligence turns out to be the ultimate paper-clip collecting machine that ends up subsuming the entire planet to its task, destroying humanity along the way, almost as an afterthought.

But is this really a good starting point for thinking about existential risk? Much more likely than total human annihilation is that a substantial portion of humanity – but not everyone – is eliminated. (Certainly this captures the worst-case scenarios surrounding climate change.) The Cold War remains the gold standard for this line of thought. In the US, the RAND Corporation’s chief analyst, Herman Kahn – the model for Stanley Kubrick’s Dr Strangelove – routinely, if not casually, tossed off scenarios of how, say, a US–USSR nuclear confrontation would serve to increase the tolerance for human biological diversity, due to the resulting proliferation of genetic mutations. Put in more general terms, a severe social disruption provides a unique opportunity for pursuing ideals that might otherwise be thwarted by a ‘business as usual’ policy orientation.

Here it is worth recalling that the Cold War succeeded on its own terms: none of the worst-case scenarios were ever realized, even though many people were mentally prepared to make the most of the projected adversities. This is one way to think about how the internet itself arose, courtesy of the US Defense Department’s interest in maintaining scientific communications in the face of attack. In other words, rather than trying to prevent every possible catastrophe, the way to deal with ‘unknown unknowns’ is to imagine that some of them have already come to pass and to redesign the world accordingly, so that you can carry on regardless. Thus, Herman Kahn’s projection of a thermonuclear future provided grounds in the 1960s for the promotion of, say, racially mixed marriages, disability-friendly environments, and the ‘do more with less’ mentality that came to characterize the ecology movement.

Kahn was a true proactionary thinker. For him, the threat of global nuclear war raised Joseph Schumpeter’s idea of ‘creative destruction’ to a higher plane, inspiring social innovations that would be otherwise difficult to achieve by conventional politics. Historians have long noted that modern warfare has promoted spikes in innovation that in times of peace are then subject to diffusion, as the relevant industries redeploy for civilian purposes. We might think of this tendency, in mechanical terms, as system ‘overdesign’ (i.e. preparing for the worst but benefitting even if the worst doesn’t happen) or, more organically, as a vaccine that converts a potential liability into an actual benefit.

In either case, existential risk is regarded in broadly positive terms, specifically as an unprecedented opportunity to extend the range of human capability, even under radically changed circumstances. This sense of ‘antifragility’, as the great ‘black swan’ detector Nassim Nicholas Taleb would put it, is the hallmark of our ‘risk intelligence’, the phrase that the British philosopher Dylan Evans has coined for the demonstrated capacity of people to make step-change improvements in their lives in the face of radical uncertainty. From this standpoint, Bostrom’s superintelligence concept severely underestimates the adaptive capacity of human intelligence.

Perhaps the best way to see just how much Bostrom shortchanges humanity is to note that his crucial thought experiment requires a strong ontological distinction between humans and superintelligent artefacts. Where are the cyborgs in this doomsday scenario? Reading Bostrom reminds me that science fiction did indeed make progress in the twentieth century, from the world of Karel Čapek’s Rossum’s Universal Robots in 1920 to the much subtler blending of human and computer futures in the works of William Gibson and others in more recent times.

Bostrom’s superintelligence scenario began to be handled in more sophisticated fashion after the end of the First World War, popularly under the guise of ‘runaway technology’, a topic that received its canonical formulation in Langdon Winner’s 1977 Autonomous Technology: Technics out of Control, a classic in the field of science and technology studies. Back then the main problem with superintelligent machines was that they would ‘dehumanize’ us, less because they might dominate us than because we might become like them – perhaps because we feel that we have invested our best qualities in them, very much like Ludwig Feuerbach’s aetiology of the Judaeo-Christian God. Marxists gave the term ‘alienation’ a popular spin to capture this sentiment in the 1960s.

Nowadays, of course, matters have been complicated by the prospect of human and machine identities merging together. This goes beyond simply implanting silicon chips in one’s brain. Rather, it involves the complex migration and enhancement of human selves in cyberspace. (Sherry Turkle has been the premier ethnographer of this process in children.) That such developments are even possible points to a prospect that Bostrom refuses to consider, namely, that to be ‘human’ is to be only contingently located in the body of Homo sapiens. The name of our species – Homo sapiens – already gives away the game, because our distinguishing feature (so claimed Linnaeus) had nothing to do with our physical morphology but with the character of our minds. And might not such a ‘sapient’ mind better exist somewhere other than in the upright ape from which we have descended?

The prospects for transhumanism hang on the answer to this question. Aubrey de Grey’s indefinite life extension project is about Homo sapiens in its normal biological form. In contrast, Ray Kurzweil’s ‘singularity’ talk of uploading our consciousness into indefinitely powerful computers suggests a complete abandonment of the ordinary human body. The lesson taught by Langdon Winner’s historical account is that our primary existential risk does not come from alien annihilation but from what social psychologists call ‘adaptive preference formation’. In other words, we come to want the sort of world that we think is most likely, simply because that offers us the greatest sense of security. Thus, the history of technology is full of cases in which humans have radically changed their lives to adjust to an innovation whose benefits they reckon outweigh the costs, even when both remain fundamentally incalculable. Success in the face of such ‘existential risk’ is then largely a matter of whether people – perhaps of the following generation – have made the value shifts necessary to see the changes as positive overall. But of course, it does not follow that those who fail to survive the transition, or who acquired their values before it, would draw a similar conclusion.

Written By: — Singularity Hub


Brain implants, here we come.

DARPA just announced the ElectRX program, a $78.9 million attempt to develop minuscule electronic devices that interface directly with the nervous system in the hopes of curing a bunch of chronic conditions, ranging from the psychological (depression, PTSD) to the physical (Crohn’s disease, arthritis). Of course, the big goal here is to usher in a revolution in neuromodulation—that is, the science of modulating the nervous system to fix an underlying problem.

Read more

In Virtually Human, you’ll have the privilege of meeting Bina48, the world’s most sentient robot, commissioned by Martine Rothblatt and created by Hanson Robotics. Bina48 is a nascent Mindclone of Martine’s wife that can engage in conversation, answer questions, and even have spontaneous thoughts that are derived from multimedia data in a Mindfile created by the real Bina (be sure to check her out on Twitter too – @iBina48!).

If you’re personally active on Twitter or Facebook, share photos through Instagram, or blog regularly, you’re also already on your way to creating a Mindfile – a digital database of your thoughts, memories, feelings, and opinions. And soon, this Mindfile can be made conscious with special software—Mindware—that mimics the way human brains organize information, create emotions and achieve self-awareness. Virtually Human is the only book to examine the ethical issues relating to cyberconsciousness, and Rothblatt, with a Ph.D. in medical ethics, is uniquely qualified to lead the dialogue. The book goes on sale Sept 9th, and I wanted to be sure everyone at Lifeboat knew about it; you can pre-order your copy today: http://smarturl.it/vhaz and http://smarturl.it/bnVh.

If the controversy over genetically modified organisms (GMOs) tells us something indisputable, it is this: GMO food products from corporations like Monsanto are widely suspected of endangering health. On the other hand, an individual’s right to genetically modify and even synthesize entire organisms as part of their dietary or medical regimen could someday be a human right.
The suspicion that agri-giant companies do harm by designing crops is legitimate, even if evidence of harmful GMOs is scant to absent. Based on their own priorities and actions, we should have no doubt that self-interested corporations disregard the rights and wellbeing of local producers and consumers. This makes agri-giants producing GMOs harmful and untrustworthy, regardless of whether individual GMO products are actually harmful.
Corporate interference in government of the sort opposed by the Occupy movement is also connected with the GMO controversy, as the US government is accused of going to great lengths to protect “stakeholders” like Monsanto via the law. This makes the GMO controversy more of a business and political issue than a scientific one, as I argued in an essay published at the Institute for Ethics and Emerging Technologies (IEET). Attacks on science and on scientists themselves over the GMO controversy are not justified, as the problem lies solely with a tiny handful of businessmen and corrupt politicians.
An emerging area that threatens to become as controversial as GMOs, if the American corporate stranglehold on innovation is allowed to shape its future, is synthetic biology. In his 2013 book, Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life, top synthetic biologist J. Craig Venter offers powerful words supporting a future shaped by ubiquitous synthetic biology in our lives:

“I can imagine designing simple animal forms that provide novel sources of nutrients and pharmaceuticals, customizing human stem cells to regenerate a damaged, old, or sick body. There will also be new ways to enhance the human body as well, such as boosting intelligence, adapting it to new environments such as radiation levels encountered in space, rejuvenating worn-out muscles, and so on.”

In his own words, Venter’s vision is no less than “a new phase of evolution” for humanity. It offers what Venter calls the “real prize”: a family of designer bacteria “tailored to deal with pollution or to absorb excess carbon dioxide or even meet future fuel needs”. Greater than this, the existing tools of synthetic biology are transhumanist in nature because they create limitless means for humans to enhance themselves to deal with harsher environments and extend their lifespans.
While there should be little public harm in the eventual ubiquity of the technologies and information required to construct synthetic life, the problems of corporate oligopoly and political lobbying threaten synthetic biology’s future as much as they threaten other facets of human progress. The best chance of an outcome that is maximally beneficial for the world relies on synthetic biology taking a radically different direction from GM agriculture. That alternative direction, of course, is an open-source future for synthetic biology, as called for by Canadian futurist Andrew Hessel and others.
Calling himself a “catalyst for open-source synthetic biology”, Hessel is one of the growing number of experts who reject biotechnology’s excessive use of patents. Nature notes that his Pink Army Cooperative venture relies instead on “freely available software and biological parts that could be combined in innovative ways to create individualized cancer treatments — without the need for massive upfront investments or a thicket of protective patents”.
While offering some support for the necessity of patents, J. Craig Venter more importantly praises the annual International Genetically Engineered Machine (iGEM) competition in his book as a means of encouraging innovation. He specifically names the Registry of Standard Biological Parts, an open-source library from which to obtain BioBricks, and describes it as instrumental for synthetic biology innovation. Likened to Lego bricks that the builder can snap together with ease, BioBricks are prepared standard pieces of genetic code with which living cells can be newly equipped and operated as microscopic chemical factories. This has enabled students and small companies to reprogram life itself, taking part in discoveries and innovations that would otherwise have required the direct supervision of the world’s best-trained teams of biologists.
There is a similar movement towards popular synthetic biology by the name of biohacking, promoted by such experts as Ellen Jorgensen. This compellingly matches the calls for greater autonomy for individuals and small companies in medicine and human enhancement. Unfortunately, despite their potential to greatly empower consumers and farmers, such developments have not yet found resonance with anti-GMO campaigners, whose outright rejection of biotechnology has been described as anti-science and “bio-luddite” by techno-progressives. It is for this reason that emphasizing the excellent potential of biotechnology for feeding and fuelling a world plagued by dwindling resources is important, and a focus on the ills of big business rather than imagined spectres emerging from science itself is vital.
The concerns of anti-GMO activists would be better addressed by offering support to an alternative in the form of “do-it-yourself” biotechnology, rather than by rejecting sciences and industries that are already destined to be a fundamental part of humanity’s future. What needs to be made is a case for popular technology, in the hope that we can reject the portrayal of all advanced technology as an ally of powerful states and corporations, and instead unlock its future as a means of liberation from global exploitation and scarcity.
While there are strong arguments that current leading biotechnology companies feel more secure and perform better when they retain rigidly enforced intellectual property rights, Andrew Hessel rightly points out that the open source future is less about economic facts and figures than about culture. The truth is that there is a massive cultural transition taking place. We can see a growing hostility to patents, and an increasing popular enthusiasm for open source innovation, most promisingly among today’s internet-borne youth.
In describing a cultural transition, Hessel is acknowledging the importance of the emerging body of transnational youth whose only ideology is the claim that information wants to be free, and we find the same culture reflected in the values of organizations like WikiLeaks. Affecting every facet of science and technology, the elite of today’s youth are crying out for a more open, democratic, transparent and consumer-led future at every level.

By Harry J. Bentham - More articles by Harry J. Bentham

Originally published at h+ Magazine on 21 August 2014


There is often imagined to be a struggle between humans and nature. How does this struggle originate, and what is its resolution? Such a question is central to some religious traditions, and has much room to be explored in literature.
“Nature” is used to describe everything that lies outside of human agency. Disasters and disease often fall under this description, although there is usually some element of human blame in such problems. Some people try to live or eat according to preferences that they call “natural”. In my view, this is a fallacy. Under the word’s only workable definition – that which is distinct from human agency – anything resulting from human agency cannot be natural, even if it imitates nature. When applied to human choices, “natural” is only an arbitrary label people use for whatever they approve of.
Why would humans battle against nature? Perhaps suffering can be described as the most imposing and constantly surfacing part of nature in our lives, because it is ultimately caused by the laws of biology rather than by human wills. We humans have vulnerable bodies, and we rely on vulnerable, easily destroyed brains to exist, although it is very apparent that we would prefer not to be exposed in this way. Because this is so, the struggle to overcome humanity’s physical and medical vulnerabilities can be depicted as a battle against nature – our own nature.
The assertion that seeking invulnerability against suffering is an escape from the cruel inevitabilities of biology is certainly reflected in some philosophers, such as Friedrich Nietzsche. Despite seeing the transformation of humanity into a higher creature as a noble task, Nietzsche saw this as necessarily involving suffering. As for the desire to end suffering, he deplored it as a product of weakness and of the inability to accept the forces outside human control.
Nietzsche addressed the way in which religious traditions give moral assurances against suffering. Religions offer promises of justice that run contrary to the natural order, in which the strong are favored over the weak. The Christian doctrines of the fall of man and eternal Heaven are alike in their view that the world we know is flawed and polluted, and that humans are instead meant to endure in paradise. Such myths have been easy for people to buy into, because it is often easier to tolerate suffering in the world and move on if one believes in a supernatural alternative – a cosmic safety net for the weak and the dead – after it.
The other manifestation of our weak human refusal to accept suffering – but one which actually works – is the desire to use science and technology to thwart suffering. Once we remove the supernatural, the only remaining assurances against suffering can come from technology. In this sense, the idea of a technological singularity, after which the very best technology permitted by the laws of physics will come within reach, represents the only “true” paradise that could ever be inherited.
But what if a paradise, an all-encompassing solution to suffering, is impossible? A universe with great suffering is inherently more likely than a universe without it, because the “anthropic principle” contains no guarantees against mortality and suffering. The anthropic principle says that human life exists only because such life is a prerequisite for us to notice our own existence. The anthropic principle therefore implies a universe that merely tolerates conscious life for a limited time, rather than one that enriches or sustains it. Contrary to religious claims, the universe in which we reside is not “designed” for us to inhabit, and we know this because it is mostly uninhabitable. The vacuum of space cannot be inhabited, and most locations in the universe have the wrong temperature or lack the elements needed for life to exist. What is conspicuous is that the universal constants allow us to exist – not in any kind of ideal state, but just enough.
One can relate “extropy” (in Kevin Kelly’s usage of the term) to the anthropic principle. Where the anthropic principle explains the human-friendly properties of the universe as existing simply because a human observer exists, extropy is the guarantee of something even more complex and intelligent in the future. More than simply tolerating human life, then, a universe where humans exist includes the inevitability that human intelligence will evolve into, or produce, something far more enduring and glorious. After all, we are no pinnacle, and we are still witnessing an ongoing explosion of intelligence through such creations as the internet and the race to develop powerful AI.
Look at history and current cosmology, and extropy appears very plausible. Humans have undeniably been improving their existence, and this is arguably due to the universe being filled with resources that are very friendly to our needs. There are seemingly infinite resources and tools in the universe for humans to exploit to improve their civilization, and the anthropic principle alone did not necessarily contain any guarantee that such useful “equipment” would exist. Conceivably, there could be worlds where intelligent life exists but there can be no fire. There might also have been no sufficient quantities of ores, or no effective tools, with which to build an advanced civilization. Certainly, humans have a lot more at their fingertips than the minimal equipment promised to them by the anthropic principle. Although there is not necessarily a God to thank for it, there is a lot to be thankful for.
What if there were a world where conditions were less favorable? Perhaps, if humans were too vulnerable, there would be less potential to develop civilization, and instead all thought would be dedicated to staying alive. A work of fiction I have dedicated to exploring this theme, The Traveller and Pandemonium, takes place in a more hostile universe than ours (as permitted in the “many-worlds hypothesis”), where a traveler is not convinced that humanity could have arisen in such unfavorable conditions. Determining that humanity belongs in another world, he searches in vain for the solution.
The traveler keeps his quest secret, aware that most people will condemn him as a religious nut searching for Heaven if he talks about it, but there is actually a rational basis for his view that humans belong elsewhere. The world in which he resides is genuinely toxic and inhospitable to humanity, humans are vulnerable to every creature in the world around them, and they are rapidly going extinct. It looks like a human colonization gone awry on a hostile alien world, although no one knows how it got that way.
The two strategies against suffering in the world can be described as surgical and spiritual. Those who advocate “spiritual” solutions are only offering window-dressing to humanity while they greedily seek power. Those who advocate “surgical” solutions might not seem beautiful or perfect in what they promise, but they are the only ones promising something real, offering something tangible that could really fight away the uglier characteristics of the universe and save what can be saved.

By Harry J. Bentham - More articles by Harry J. Bentham

Originally published at the Institute for Ethics and Emerging Technologies on 17 July 2014

Written By: — Singularity Hub
In his latest video, the host of National Geographic’s Brain Games and techno-poet Jason Silva explores the universe’s tendency to self-organize. Biology, he says, seems to have agency and directionality toward greater complexity, and humans are the peak.

“It’s like human beings seem to be the cutting edge,” Silva says. “The evolutionary pinnacle of self-awareness becoming aware of its becoming.”

Read more