
California Dreams Video 1 from IFTF on Vimeo.

INSTITUTE FOR THE FUTURE ANNOUNCES CALIFORNIA DREAMS:
A CALL FOR ENTRIES ON IMAGINING LIFE IN CALIFORNIA IN 2020

Put yourself in the future and show us what a day in your life looks like. Will California keep growing, start conserving, reinvent itself, or collapse? How are you living in this new world? Anyone can enter, anyone can vote, anyone can change the future of California!

California has always been a frontier—a place of change and innovation, reinventing itself time and again. The question is, can California do it again? Today the state is facing some of its toughest challenges. Launching today, IFTF’s California Dreams is a competition with an urgent challenge to recruit citizen visions of the future of California—ideas for what it will be like to live in the state in the next decade—to start creating a new California dream.

California Dreams calls upon the public to look 3–10 years into the future and tell a story about a single day in their own life. Videos, graphical entries, and stories will be accepted until January 15, 2011. Up to five winners will be flown to Palo Alto, California, in March to present their ideas and be connected to other innovative thinkers to help bring these ideas to life. The grand prize winner will receive the $3,000 IFTF Roy Amara Prize for Participatory Foresight.

“We want to engage Californians in shaping their lives and communities,” said Marina Gorbis, Executive Director of IFTF. “The California Dreams contest will outline the kinds of questions and dilemmas we need to be analyzing, and provoke people to ask deep questions.”

Entries may come from anyone, anywhere, and can include, but are not limited to, the following: urban farming, online games replacing school, a fast-food tax, smaller and more sustainable housing, a rise in immigrant entrepreneurs, or mass migration out of state. Participants are challenged to use IFTF’s California Dreaming map as inspiration and picture themselves in the next decade, whether it be a future of growth, constraint, transformation, or collapse.

The grand prize, called the Roy Amara Prize, is named for IFTF’s long-time president Roy Amara (1925–2007) and is part of a larger program of social impact projects at IFTF honoring his legacy, known as the Roy Amara Fund for Participatory Foresight. The Fund uses participatory tools to translate foresight research into concrete actions that address future social challenges.

PANEL OF COMPETITION JUDGES

Gina Bianchini, Entrepreneur in Residence, Andreessen Horowitz

Alexandra Carmichael, Research Affiliate, Institute for the Future, Co-Founder, CureTogether, Director, Quantified Self

Bill Cooper, The Urban Water Research Center, UC Irvine

Poppy Davis, Executive Director, EcoFarm

Jesse Dylan, Founder of FreeForm, Founder of Lybba

Marina Gorbis, Executive Director, Institute for the Future

David Hayes-Bautista, Professor of Medicine and Health Services, UCLA School of Public Health

Jessica Jackley, CEO, ProFounder

Xeni Jardin, Partner, Boing Boing, Executive Producer, Boing Boing Video

Jane McGonigal, Director of Game Research and Development, Institute for the Future

Rachel Pike, Clean Tech Analyst, Draper Fisher Jurvetson

Howard Rheingold, Visiting Professor, Stanford / Berkeley, and the Institute for Creative Technologies

Tiffany Shlain, Founder, The Webby Awards; Co-founder, International Academy of Digital Arts and Sciences

Larry Smarr, Founding Director, California Institute for Telecommunications and Information Technology (Calit2); Professor, UC San Diego

DETAILS

WHAT: An online competition for visions of the future of California in the next 10 years, along one of four future paths: growth, constraint, transformation, or collapse. Anyone can enter, anyone can vote, anyone can change the future of California.

WHEN: Launch — October 26, 2010
Deadline for entries — January 15, 2011
Winners announced — February 23, 2011
Winners Celebration — 6–9 pm, March 11, 2011 — open to the public

WHERE: http://californiadreams.org

For more information on the California Dreaming map, or to download the PDF, see http://californiadreams.org.

Call for Essays:

The Singularity Hypothesis
A Scientific and Philosophical Assessment

Edited volume, to appear in The Frontiers Collection, Springer

Does an intelligence explosion pose a genuine existential risk, or did Alan Turing, Stephen Hawking, and Alvin Toffler delude themselves with visions ‘straight from Cloud Cuckooland’? Should the notions of superintelligent machines, brain emulations, and transhumans be ridiculed, or is it that skeptics are the ones who suffer from short-sightedness and ‘carbon chauvinism’? These questions have remained open because much of what we hear about the singularity originates from popular depictions, fiction, artistic impressions, and apocalyptic propaganda.

Seeking to promote this debate, this edited, peer-reviewed volume will offer a scientific and philosophical analysis of the conjectures related to a technological singularity. We solicit scholarly essays that analyze this hypothesis, assess its empirical content, examine relevant evidence, or explore its implications. Commentary offering a critical assessment of selected essays may also be solicited.

Important dates:

  • Extended abstracts (500–1,000 words): 15 January 2011
  • Full essays (around 7,000 words): 30 September 2011
  • Notifications: end of February 2012 (tentative)
  • Proofs: 30 April 2012 (tentative)

We aim to get this volume published by the end of 2012.

Extended abstracts should be short (up to 3 pages, 500–1,000 words) and tightly focused, relating directly to specific central questions and indicating how these will be treated in the full essay.

Full essays are expected to be short (15 pages, around 7,000 words) and focused, relating directly to specific central questions. Essays longer than 15 pages will be proportionally more difficult to fit into the volume; essays three times this size or more are unlikely to fit. Essays should address the scientifically literate non-specialist and be written in language that is divorced from speculative and irrational lines of argumentation. In addition, some authors may be asked to make their submission available for commentary (see below).

Thank you for reading this call. Please forward it to individuals who may wish to contribute.

Amnon Eden, School of Computer Science and Electronic Engineering, University of Essex
Johnny Søraker, Department of Philosophy, University of Twente
Jim Moor, Department of Philosophy, Dartmouth College
Eric Steinhart, Department of Philosophy, William Paterson University

Will our lumbering, industrial-age-driven information age segue smoothly into a futuristic marvel of yet-to-be-developed technology? It might. Or take quantum leaps. It could. Will information technology take off exponentially? It’s accelerating in that direction. The way knowledge is unraveling its potential for enhancing human ingenuity, the future looks bright indeed. But there is a problem: that egoistic tendency we have of defending ourselves against knowing, of creating false images to delude ourselves and the world, and of resolving conflict violently. It’s as old as history and may be an inevitable part of life. If so, there will be consequences.

Who has ever seen drama or comedy without obstacles to overcome, conflicts to confront, dilemmas to address, confrontations to endure, and the occasional least-expected outcome? Just as Shakespeare so elegantly illustrated. Good drama illustrates aspects of life as lived, and we do live with egoistic mental processes that are both limited and limiting. Wherefore it might come to pass that we who are of this civilization will encounter an existential crisis. Or crunch into a bottleneck out of which … will emerge what? Or extinguish civilization with our egoistic conduct, acting from regressed postures with splintered perception.

What’s least likely is that we’ll continue cruising along as usual.

Not with massive demographic changes, millions on the move, radical climate changes, major environmental shifts, cyber vulnerabilities, changing energy resources, inadequate clean water, and values colliding with one another in a world where future generations of the techno-savvy will be capable of wielding the next generation of weapons of mass destruction.

On the other hand, there are intelligent people passionately pursuing methods of preventing the use of such weapons, combating their effects, and securing a future in which the problems mentioned above will be solved, while also working towards an advanced civilization.

It’s a race against time.

In the balance hangs nothing less than the future of civilization.

The danger from technology is secondary.

As of now, regardless of theories of international affairs, in one way or another we inject power into our currency of negotiation, whether it be interpersonal or international. After all, power is privilege: hard to give up, especially after getting a taste of it. And so we’ll quarrel over power, perhaps fight. Why deny it? The historical record is there for all to see. As for our inner terrors, our tendency to present false egoistic images to the world and to project our secret, socially unacceptable fantasies onto others: we might just bring to pass what we fear and deny. It’s possible.

Meantime there are certain simple ideas that remain timeless. For example, as infants we exist at the pleasure of parents, big hulks who pick us up and carry us around sometimes lovingly, sometimes resentfully, often ambivalently, and to be sure many of us come to regard Authority with ambivalence, just as Authority regards the dependent. A basic premise is that we all want something in a relationship. So what do we as infants want from Authority? How about security in our exploration of life? How about love? If it’s there, we don’t have to pay for it; there are no conditions attached. Life, however, is both complicated and complex beyond a few words, and so we negotiate in the ‘best’ way we have at our disposal, which in the early stages of life means non-verbal, intuitive methods that in part enter this life with us, genetically and epigenetically determined, and in part are learned. Once adopted, a certain core approach becomes habitual, buried deeply under layers of later-learned social skills that we employ in our adult lives. These skills are, however, relatively on the surface. Hidden deep inside are secret desires, unfulfilled fantasies, and hidden impulses that wouldn’t make sense in adult relationships if expressed openly in words.

It has been said repeatedly that crisis reveals character. Most of the time we get by in crisis, but we each have a ‘breaking point,’ meaning that under severe, enduring stress we regress at a certain point, at which time we abandon sophisticated social skills and a part of us slips into infantile mode, not necessarily visible on the outside. It varies. No one can claim immunity. And acting out of infantile perception in adult situations can have unexpected consequences, depending on the early life drama. Which makes life interesting. It also guarantees an interesting future.

Meantime scientists clarify the biology of learning, of short-term memory, of long-term memory, of the brain working as a whole, and of ‘free will’ as we imagine it. But regardless of future directions, at this time we need agency on the personal and social level so as to help stabilize civilization. By agency I mean responsibility for one’s actions: accountability, including in the face of dilemmas. Throughout the course of our lives, from beginning to end, we encounter dilemmas.

Consider the dilemmas Europeans under German occupation faced last century. I use the European situation as an illustration or social paradigm, not to suggest that this situation will recur, nor to suggest that any one ethnic group will be targeted in the future. But I do suggest that if a global crisis hits, we will confront moral dilemmas, and so we can learn from those relatively few Europeans who resolved their dilemmas in noble ways, as opposed to the majority who did nothing to help the oppressed.

If a European in German-occupied territory helped a Jew, he or she, and family, would be in danger of arrest, torture, and death. How about watching one’s spouse and children being tortured? On the other hand, those who did not help would be participating in murder and genocide, and know it. Despite the danger, certain people from several European countries helped the Jews. According to those who interviewed and wrote about the helpers (see references listed below), the helpers represented a cross section of the community: some were uneducated laborers, some were serving women, some were formally educated, some were professionals; some professed religious convictions, some did not. Well then, what if anything did these noble risk takers have in common? This: they saw themselves as responsible moral agents, and, acting on an internal locus of moral responsibility, they each acted on their knowledge and compassion and did the ‘right thing.’ It came naturally to them. But doing the ‘right thing’ in the face of a life-threatening dilemma does not come naturally to everyone. Fortunately it is a behavior that can be learned.

Concomitant with authentic learning, according to research biologists, is the production of brain chemicals that in turn cultivate structural modification in brain cells: a self-reinforcing feedback system. In short, learning is part of a dynamic, multi-dimensional interaction of input, output, behavioral change, chemicals, structural brain changes, and complex adaptation in systems throughout the body. None of which diminishes the idea that we each enter this life with certain desires, potential, and perhaps roles to act out, one of which, for me, is to improve myself.

Good news! I not only am, I become.

Finally, I list some 20th century resources that remain timeless to this day:

Milgram, S. Obedience to Authority: An Experimental View. Harper & Row, 1974.

Oliner, Samuel P. & Pearl M. The Altruistic Personality: Rescuers of Jews in Nazi Europe. Free Press, Division of Macmillan, 1988.

Fogelman, Eva. Conscience & Courage. Anchor Books, Division of Random House, 1994.

Block, Gay & Drucker, Malka. Rescuers: Portraits of Moral Courage in the Holocaust. Holmes & Meier Publishers, 1992.

My book “Structure of the Global Catastrophe: Risks of Human Extinction in the XXI Century” is now available through Lulu (http://www.lulu.com/product/paperback/structure-of-the-globa…y/11727068) and is also available free on Scribd (http://www.scribd.com/doc/6250354/STRUCTURE-OF-THE-GLOBAL-CA…I-century–). This book is intended to be a complete, up-to-date sourcebook of information about existential risks.

The existential risk reduction career network is a career network for those interested in getting a relatively well-paid job and donating substantial amounts (relative to income) to non-profit organizations focused on the reduction of existential risks, in the vein of SIAI, FHI, and the Lifeboat Foundation.

The aim is to foster a community of donors, and to allow donors and potential donors to give each other advice, particularly regarding the pros and cons of various careers, and for networking with like-minded others within industries. For example, someone already working in a large corporation could give a prospective donor advice about how to apply for a job.

Over time, it is hoped that the network will grow to a relatively large size, and that donations to existential risk-reduction from the network will make up a substantial fraction of funding for the beneficiary organizations.

In isolation, individuals may feel like existential risk is too large a problem to make a dent in, but collectively, we can make a huge difference. If you are interested in helping us make a difference, then please check out the network and request an invitation.

Please feel free to contact the organizers at [email protected] with any comments or questions.

The RPG Eclipse Phase includes the “Singularity Foundation” and “Lifeboat Institute” as player factions. Learn more about this game!

P.S. In case you don’t know, there is a Singularity Institute for Artificial Intelligence.


Eclipse Phase is a roleplaying game of post-apocalyptic transhuman conspiracy and horror.

An “eclipse phase” is the period between when a cell is infected by a virus and when the virus appears within the cell and transforms it. During this period, the cell does not appear to be infected, but it is.

Players take part in a cross-faction secret network dubbed Firewall that is dedicated to counteracting “existential risks” — threats to the existence of transhumanity, whether they be biowar plagues, self-replicating nanoswarms, nuclear proliferation, terrorists with WMDs, net-breaking computer attacks, rogue AIs, alien encounters, or anything else that could drive an already decimated transhumanity to extinction.

Perhaps you think I’m crazy or naive to pose this question. But more and more over the past few months I’ve begun to wonder whether this idea may not be too far off the mark.

Not because of some half-baked theory about a global conspiracy or anything of the sort, but simply based upon the recent behavior of many multinational corporations and the effects this behavior is having upon people everywhere.

Again, you may disagree, but my perspective on these financial giants is that they are essentially predatory in nature and that their prey is any dollar in commerce they can possibly absorb. The problem is that, for anyone in the modern or even quasi-modern world, money is nearly as essential as plasma when it comes to our well-being.

It has been clearly demonstrated again and again — all over the world — that when a population becomes sufficiently destitute that the survival of the individual is actually threatened, violence inevitably occurs. On a large enough scale this sort of violence can erupt into civil war, and wars, as we all know too well, can spread like a virus across borders, even oceans.

Until fairly recently, corporations were not big enough, powerful enough, or sufficiently meshed with our government to push the US population to a point of violence. Perhaps we’re not there yet, but between the bank bailout, the housing crisis, the bailouts of the automakers, the subsidies to the big oil companies, and ten thousand other government gifts that are coming straight from the taxpayer, I fear we are getting ever closer to the brink.

Who knows — it might just take one little thing — like that new one-dollar charge many stores have suddenly begun instituting for any purchase using an ATM or credit card — to push us over the edge.

The last time I got hit with one of these dollar charges, I thought about the ostensible reason for it: the credit card company is now charging the merchant more per transaction, so the merchant is passing that cost on to you. But this isn’t the whole story. The merchant is actually charging you more than the transaction costs him, and even if this violates the law or the terms-of-service agreement between the card company and the merchant, the credit card company looks the other way: the merchant’s surcharge inflates the transaction, and a bigger transaction means a bigger cut, increasing the card company’s profits even further.
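To make the arithmetic concrete, here is a minimal sketch with hypothetical numbers; the purchase amount, the surcharge, and the fee rates are all assumptions of mine, not figures from any actual merchant agreement:

```python
# Hypothetical illustration of the surcharge arithmetic described above.
# Every number is an assumption made up for this example.

PURCHASE = 10.00          # price of the goods
SURCHARGE = 1.00          # flat fee the merchant adds for card payments
INTERCHANGE_RATE = 0.02   # assumed percentage fee charged to the merchant
INTERCHANGE_FLAT = 0.30   # assumed flat per-transaction fee

total_charged = PURCHASE + SURCHARGE

# The card company's cut is computed on the (now larger) transaction.
card_company_cut = total_charged * INTERCHANGE_RATE + INTERCHANGE_FLAT

# What the merchant keeps of the surcharge after paying the card fees.
merchant_surcharge_profit = SURCHARGE - card_company_cut

print(f"Card company earns: ${card_company_cut:.2f}")
print(f"Merchant keeps:     ${merchant_surcharge_profit:.2f} of the $1.00 surcharge")
```

Under these assumed numbers the merchant keeps roughly half of the dollar surcharge, and the card company's percentage cut grows with the inflated total, which is the mutual incentive described above.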

Death by big blows or a thousand cuts — the question is, will we be forced to do something about it before the big corporations eat us alive?

Existential Threats

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, an increasing concern has arisen with the topic of “friendly AI,” coupled with the idea that we should do something about this now, not after a potentially deadly situation has started to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long-lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated – not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self destructive, it’s hard to argue that advanced AI should tolerate that.

To make progress, not only for friendly AI, but also for ourselves, we should be seeking to develop and promote “ruling ideas” (or source models) that will foster an ecologically-respectful AI culture, including respect for humanity and other life forms, and actively sell it to them as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.
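To make the notion of a “panel (or schema) of case models” more concrete, here is a minimal sketch of how such a collection might be represented in code; the fields, the scoring, and the example ideas are my own assumptions, not anything the essay specifies:

```python
from dataclasses import dataclass, field

@dataclass
class RulingIdea:
    """A cultural ideal ("meme") that can join a widely shared belief system."""
    name: str
    prescription: str   # the conduct the idea recommends
    coherence: float    # assumed 0..1 score: internal consistency of the model
    adoption: float     # assumed 0..1 score: how widely the idea is held

@dataclass
class Panel:
    """A panel (schema) of case models, open to ongoing development."""
    ideas: list[RulingIdea] = field(default_factory=list)

    def add(self, idea: RulingIdea) -> None:
        self.ideas.append(idea)

    def most_credible(self) -> RulingIdea:
        # Stand-in for a coherence-seeking intellect weighing competing
        # models: rank by coherence, breaking ties by adoption.
        return max(self.ideas, key=lambda i: (i.coherence, i.adoption))

panel = Panel()
panel.add(RulingIdea("respect one's elders", "defer to the experienced", 0.8, 0.7))
panel.add(RulingIdea("minimize pollution", "avoid fouling shared habitat", 0.9, 0.5))
print(panel.most_credible().name)   # -> "minimize pollution"
```

The ranking step is only a stand-in for the coherence-seeking the essay describes: competing models coexist, and the intellect favors whichever maximizes coherence.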

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave with contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being, it could easily mask a deeper unkindness.

2.2. Just AI

Certainly to have AIs act justly in accordance with law is highly desirable, and it constitutes the central thesis of my principal prior work in this field [6]. It also raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?
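The “game theory” calculus this paragraph gestures at can be made concrete with a toy expected-payoff comparison; the payoffs and probabilities below are invented solely for illustration:

```python
# Toy expected-payoff calculus for faithfulness vs. betrayal.
# All payoffs and probabilities are invented for illustration.

def expected_value(payoff_unpunished: float,
                   payoff_punished: float,
                   p_punished: float) -> float:
    """Expected payoff of an action given the chance of being penalized."""
    return (1 - p_punished) * payoff_unpunished + p_punished * payoff_punished

# Faithfulness: a modest, reliable payoff with no penalty risk.
faithful = expected_value(3.0, 3.0, 0.0)

# Betrayal: a larger one-time payoff, but a penalty if caught.
for p in (0.1, 0.5, 0.9):
    betray = expected_value(5.0, -4.0, p)
    choice = "betray" if betray > faithful else "stay faithful"
    print(f"P(caught)={p:.1f}: betrayal EV={betray:+.1f} vs faithful EV={faithful:+.1f} -> {choice}")
```

Under these toy numbers betrayal only pays when the chance of a penalty is low, which is precisely the worry: an agent running such a calculus is faithful only insofar as penalties are credible.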

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings” would be hurt. Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. Otherwise it will create a “non-complementary” situation, in which what is true for one, who experiences friendliness, may not be true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical, by delimiting its scope and depth. How wide a circle does this kindness obligation extend to, and how far must an AI go to aid others with no specific expectation of reward or reciprocation? For example, the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard-coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at high enough speeds to stay on top of their game and avoid (fatal) irrelevance.
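Reasoning from past cases and models to new target problems is the classic retrieve-and-adapt loop of case-based reasoning; the sketch below shows only the retrieval step, with a made-up feature encoding and a simple distance metric (both are assumptions of mine, not anything specified in the essay):

```python
import math

# Minimal "retrieve" step of case-based reasoning: find the stored case
# closest to a new target problem. The feature vectors are an invented
# encoding used purely for illustration.

case_library = {
    "negotiate access":    [0.9, 0.1, 0.3],
    "route around damage": [0.2, 0.8, 0.5],
    "defer to operator":   [0.4, 0.3, 0.9],
}

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(target: list[float]) -> str:
    """Return the name of the past case most similar to the target problem."""
    return min(case_library, key=lambda name: distance(case_library[name], target))

print(retrieve([0.3, 0.7, 0.6]))   # -> "route around damage"
```

In a full system the retrieved case would then be adapted to the new situation and the outcome stored back into the library, which is how new source models accumulate on the fly.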

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all-out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001; also Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

This is a crosspost from Nextbigfuture

I looked at nuclear winter and city firestorms a few months ago; I will summarize the case I made then in the next section. There are significant additions based on my further research and on email exchanges with Prof. Alan Robock and Prof. Brian Toon, who wrote the nuclear winter research.

The steps needed to prove nuclear winter:
1. Prove that enough cities will have firestorms or big enough fires (the claim here is that this does not happen).
2. Prove that when enough cities in a sufficient area have big fires, enough smoke and soot gets into the stratosphere (this claim is in trouble because of the Kuwait fires).
3. Prove that the condition persists and affects climate as the models predict (others have questioned this, but the issue is not addressed here).

The nuclear winter case is predicated on getting 150 million tons (the 150-teragram case) of soot and smoke into the stratosphere and having it stay there. The assumption seemed to be that cities will be targeted and will burn in massive firestorms. Alan Robock indicated that they only included fire based on the radius of ignition from the atmospheric blasts. However, in the Scientific American article and in their 2007 paper, the stated assumptions are:

assuming each fire would burn the same area that actually did burn in Hiroshima and assuming an amount of burnable material per person based on various studies.

The implicit assumption is that all buildings react the way the buildings in Hiroshima reacted on that day.
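To see how sensitive the headline figure is to these assumptions, here is a back-of-envelope sketch of the fuel-loading arithmetic. Every specific value below (targets, population per target, fuel per person, soot and lofting fractions) is an illustrative assumption of mine, not a figure taken from Robock and Toon's papers:

```python
# Back-of-envelope sketch of the soot arithmetic behind a ~150 Tg case.
# Every number here is an illustrative assumption, NOT a value from the
# actual nuclear winter papers.

CITIES_TARGETED = 4400           # assumed number of urban targets
PEOPLE_PER_TARGET = 250_000      # assumed population per targeted area
FUEL_PER_PERSON_KG = 10_000      # assumed burnable material per person
SOOT_FRACTION = 0.016            # assumed fraction of fuel emitted as soot
LOFTED_FRACTION = 0.8            # assumed fraction reaching the stratosphere

fuel_burned_kg = CITIES_TARGETED * PEOPLE_PER_TARGET * FUEL_PER_PERSON_KG
soot_kg = fuel_burned_kg * SOOT_FRACTION * LOFTED_FRACTION
soot_tg = soot_kg / 1e9          # 1 teragram (Tg) = 1e9 kg

print(f"Fuel burned: {fuel_burned_kg:.2e} kg")
print(f"Soot lofted: {soot_tg:.0f} Tg")   # ~141 Tg under these assumptions
```

The point of the sketch is that the 150-teragram figure is a product of several multiplied assumptions; if cities do not firestorm the way Hiroshima did, the soot and lofting fractions shrink and the final number falls proportionally.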

Therefore, the results of Hiroshima are assumed in the nuclear winter models:
* 27 days without rain
* breakfast burners that overturned in the blast and set fires
* mostly wood and paper buildings

Hiroshima had a firestorm and burned five times more area than Nagasaki, even though Nagasaki was not an especially fire-resistant city: it had the same wood and paper buildings and a high population density.

Recommendations

Build only with non-combustible materials (cement and brick that are made fire-resistant, or specially treated wood). Make the roofs, floors, and shingles non-combustible. Add fire retardants to any high-volume material that could become fuel. Look at city planning to reduce fire risk across the city. Have a plan for putting out city-wide fires (such as controlled flooding from dams that are already near cities).

Continue reading “Nuclear Winter and Fire and Reducing Fire Risks to Cities” at Nextbigfuture.

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

The summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall on June 12–13, following the inaugural conference in Los Angeles in December 2009. Futurist, inventor, and author of the NYT-bestselling book “The Singularity Is Near”, Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and the Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme is “The Rise Of The Citizen Scientist”, as illustrated by Alex Lightman, Executive Director of Humanity+, in his talk:

“Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and ‘peak everything’. Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories.”

The Humanity+ Summit @ Harvard is an unmissable event for everyone interested in the evolution of the rapidly changing human condition and the impact of accelerating technological change on the daily lives of individuals and on our society as a whole. Tickets start at only $150, with an additional 50% discount for students registering with the coupon STUDENTDISCOUNT (a valid student ID is required at the time of admission).

With over 40 speakers and 50 sessions in two jam-packed days, attendees and speakers will have many opportunities to interact and discuss, complementing the conference with the necessary networking component.

Other speakers already listed on the H+ Summit program page include:

  • David Orban, Chairman of Humanity+: “Intelligence Augmentation, Decision Power, And The Emerging Data Sphere”
  • Heather Knight, CTO of Humanity+: “Why Robots Need to Spend More Time in the Limelight”
  • Andrew Hessel, Co-Chair at Singularity University: “Altered Carbon: The Emerging Biological Diamond Age”
  • M. A. Greenstein, Art Center College of Design: “Sparking our Neural Humanity with Neurotech!”
  • Michael Smolens, CEO of dotSUB: “Removing language as a barrier to cross cultural communication”

New speakers will be announced in rapid succession, rounding out a schedule that is guaranteed to inform, intrigue, stimulate, and provoke, advancing our planetary understanding of the evolution of the human condition!

H+ Summit @ Harvard — The Rise Of The Citizen Scientist
June 12–13, Harvard University
Cambridge, MA

You can register at http://www.eventbrite.com/event/648806598/friendsofhplus/4141206940.