Archive for the ‘risks’ tag: Page 3

May 25, 2012

Beyond the Heliosheath: ISM Traverse & The Local Fluff

Posted by in categories: engineering, futurism, space

It’s been a while since anyone contributed a post on space exploration here on the Lifeboat blog, so I thought I’d offer a few thoughts on potential hazards to future interstellar travel, if indeed humanity ever attempts to explore that far in space.

It is only recently that the Voyager probes have given us some idea of the nature of the boundary between our solar system and what is commonly referred to as the local fluff, the Local Interstellar Cloud, through which we have been travelling for the past 100,000 years or so and through which we will continue to travel for another 10,000 or 20,000 years. The cloud has a temperature of about 6,000°C, albeit at a very tenuous density.

We are protected from the effects of the local fluff by the solar wind and the Sun’s magnetic field; the front between the two lies just beyond the termination shock, where the solar wind slows to subsonic velocities. Here, in the heliosheath, the solar wind becomes turbulent through its interaction with the interstellar medium, keeping that medium at bay from the inner solar system; this is the region currently under study by the Voyager 1 and Voyager 2 space probes. It has been hypothesised that there may also be a hydrogen wall further out, between the bow shock and the heliopause, composed of ISM interacting with the edge of the heliosphere: yet another obstacle to consider for interstellar travel.
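
As a rough illustration of where this boundary sits, one can balance the ram pressure of the solar wind, which falls off with the square of the distance from the Sun, against the pressure of the local interstellar cloud. The sketch below is strictly a back-of-the-envelope estimate; the solar wind and cloud figures in it are typical textbook values that I have assumed, not Voyager measurements.

```python
# Back-of-the-envelope estimate of the heliopause distance from pressure
# balance: the solar wind's ram pressure falls off as 1/r^2, and the
# boundary sits roughly where it drops to the interstellar pressure.
# All input values are assumed typical figures, not mission data.
import math

M_P = 1.67e-27   # proton mass, kg
K_B = 1.38e-23   # Boltzmann constant, J/K

# Solar wind at 1 AU (assumed typical values)
n_sw = 7e6       # proton number density, m^-3 (about 7 per cm^3)
v_sw = 450e3     # bulk speed, m/s
p_ram_1au = n_sw * M_P * v_sw**2       # ram pressure at 1 AU, Pa

# Local Interstellar Cloud: warm but very tenuous (assumed values)
n_ism = 0.3e6    # number density, m^-3 (about 0.3 per cm^3)
T_ism = 6500.0   # temperature, K
p_ism = n_ism * K_B * T_ism            # thermal pressure only, Pa

# Solve p_ram_1au / r^2 = p_ism for r (r expressed in AU):
r_boundary = math.sqrt(p_ram_1au / p_ism)
print(f"Thermal-only pressure balance: ~{r_boundary:.0f} AU")
```

With these assumed numbers the thermal-only balance lands near 300 AU; adding the interstellar magnetic field and cosmic-ray pressure, which the sketch ignores, pulls estimates down toward the roughly 120 AU region the Voyager probes are now exploring.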

The short of it is that what many consider ‘open space’ beyond the Kuiper belt may in fact hold many more mission-threatening obstacles on the way out of our solar system. Opinions welcome; I am not an expert on this.

Apr 14, 2012

Earth’s Titanic Challenges

Posted by in categories: asteroid/comet impacts, complex systems, economics, ethics, existential risks, finance, fun, geopolitics, homo sapiens, human trajectories, lifeboat, media & arts, rants
RMS Titanic Sails

What’s to worry? RMS Titanic departs Southampton.

This year marks the 100th anniversary of the Titanic disaster in 1912. What better time to think about lifeboats?

One way to start a discussion is with some vintage entertainment. On the centenary weekend of the wreck of the mega-liner, our local movie palace near the Hudson River waterfront ran a triple bill of classic films about maritime disasters: A Night to Remember, Lifeboat, and The Poseidon Adventure. Each one highlights an aspect of the lifeboat problem. They’re useful analogies for thinking about the existential risks of booking a passage on spaceship Earth.

Can’t happen…

Continue reading “Earth's Titanic Challenges” »

Apr 7, 2012

GadgetBridge — Taming dangerous technologies by pushing them into consumer gadgets

Posted by in categories: biotech/medical, ethics, futurism, geopolitics, human trajectories, neuroscience

GadgetBridge is currently just a concept. It might start its life as a discussion forum, later turn into a network or an organisation, and hopefully inspire a range of similar activities.

We will soon be able to use technology to make ourselves more intelligent, feel happier or change what motivates us. When the use of such technologies is banned, the nations or individuals who manage to cheat will soon lord it over their more obedient but unfortunately much dimmer fellows. When these technologies are made freely available, a few terrorists and psychopaths will use them to cause major disasters. Societies will have to find ways to spread these mind enhancement treatments quickly among the majority of their citizens, while keeping them from the few who are likely to cause harm. After a few enhancement cycles, the most capable members of such societies will all be “trustworthy” and use their skills to stabilise the system (see “All In The Mind”).

But how can we manage the transition period, the time in which these technologies are powerful enough to be abused but no social structures are yet in place to handle them? It might help to use these technologies for entertainment purposes, so that many people learn about their risks and societies can adapt (see “Should we build a trustworthiness tester for fun”). But ideally, a large, critical and well-connected group of technology users should be part of the development from the start and remain involved in every step.

To do that, these users would have to spend large amounts of money and dedicate considerable manpower. Fortunately, the basic spending and working patterns are already in place: people use a considerable part of their income to buy consumer devices such as mobile phones, tablet computers and PCs, and increasingly also accessories such as blood glucose meters, EEG recorders and many others; they also spend a considerable part of their time getting familiar with these devices. Manufacturers and software developers are keen to turn any promising technology into a product, and over time this will surely include most mind-measuring and mind-enhancement technologies. But for some critical technologies that time might be too long. GadgetBridge is there to shorten it, as follows:

Continue reading “GadgetBridge — Taming dangerous technologies by pushing them into consumer gadgets” »

Feb 12, 2012

Badly designed to understand the Universe — CERN’s LHC in critical Reflection by great Philosopher H. Maturana and Astrophysicist R. Malina

Posted by in categories: complex systems, cosmology, education, engineering, ethics, existential risks, futurism, media & arts, particle physics, philosophy, physics, scientific freedom, sustainability

The famous Chilean philosopher Humberto Maturana describes “certainty” in science as subjective emotional opinion, to the astonishment of prominent physicists. The French astronomer and “Leonardo” publisher Roger Malina hopes that the LHC safety issue will be discussed in a broader social context, not only within the closer scientific framework of CERN.

(Article published in “oekonews”: http://oekonews.at/index.php?mdoc_id=1067777 )

The latest renowned “Ars Electronica Festival” in Linz (Austria) was dedicated in part to an uncritical worship of the gigantic particle accelerator LHC (Large Hadron Collider) at CERN, the European Organization for Nuclear Research on the Franco-Swiss border. CERN in turn promoted an art prize with the idea of “cooperating closely” with the arts. This time the objections were of a philosophical nature, and they carried weight.

In a thought-provoking presentation Maturana addressed the limits of our knowledge and the intersubjective foundations of what we call “objective” and “reality.” His talk was peppered with excellent remarks and witty asides that did much to make these fundamental philosophical problems accessible: “Be realistic, be objective!”, Maturana pointed out, simply means that we want others to adopt our point of view. The great constructivist and founder of the concept of autopoiesis clearly distinguished his approach from a solipsistic position.

Continue reading “Badly designed to understand the Universe — CERN's LHC in critical Reflection by great Philosopher H. Maturana and Astrophysicist R. Malina” »

Apr 3, 2010

Natural selection of universes and risks for the parent civilization

Posted by in category: existential risks

Lee Smolin is said to believe (according to a personal communication from Danila Medvedev, who was told about it by John Smart; I tried to reach Smolin for comment, but failed) that global catastrophe is impossible, based on the following reasoning: the multiverse is dominated by those universes that are able to replicate. This self-replication occurs in black holes, and especially in those black holes that are created by civilizations. Thus, the parameters of the universe are selected so that civilizations cannot self-destruct before they create black holes. As a result, all physical processes in which a civilization might self-destruct are closed off or highly unlikely. An early version of Smolin’s argument is here: http://en.wikipedia.org/wiki/Lee_Smolin, but this early version was refuted in 2004, and so he (probably) added the existence of civilizations as another condition for cosmological natural selection. In any case, even if this is not Smolin’s actual line of thought, it is a quite possible line of thought.

I do not find this argument persuasive, since selection can operate both in the direction of universes with more viable civilizations and in the direction of universes with a larger number of civilizations, just as biological evolution works toward more robust offspring in some species (mammals) and toward a larger number of offspring with lower viability in others (plants, for example, the dandelion). Since some parameters relevant to the development of civilizations are extremely difficult to adjust through the basic laws of nature (for example, the chances of nuclear war or of a hostile AI), while the number of emerging civilizations is easy to adjust, it seems to me that universes, if they replicate with the help of civilizations, will use the strategy of the dandelion rather than that of the mammal. Such universes will create many unstable civilizations, and we are most likely one of them (the self-indication assumption also supports this; see Katja Grace’s recent post: http://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/)
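
To make the dandelion-versus-mammal comparison concrete, here is a toy expected-value calculation; every number in it is invented purely for illustration, and none of it comes from Smolin.

```python
# Toy model of cosmological natural selection via civilizations. A universe
# "reproduces" through the black holes its civilizations make, so selection
# favours whichever physics maximises the expected number of civilizations
# that survive long enough to make black holes:
#     E[replicators] = (number of civilizations) x (survival probability)
# All numbers below are invented purely for illustration.

def expected_replicators(num_civilizations: int, p_survive: float) -> float:
    """Expected count of civilizations that live to create black holes."""
    return num_civilizations * p_survive

# "Mammal" strategy: few civilizations, each robust because the physics
# makes self-destruction hard (hypothetical numbers).
mammal = expected_replicators(num_civilizations=10, p_survive=0.9)

# "Dandelion" strategy: physics that spawns civilizations cheaply, even
# though most of them self-destruct (hypothetical numbers).
dandelion = expected_replicators(num_civilizations=10_000, p_survive=0.01)

print(f"mammal strategy:    {mammal:.0f} expected survivors")     # 9
print(f"dandelion strategy: {dandelion:.0f} expected survivors")  # 100
```

With these made-up numbers the dandelion universe out-replicates the mammal one, which is exactly why, if the argument holds, a randomly chosen civilization, such as ours, is probably one of the many unstable ones.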

Still, some selective pressure may exist for the preservation of civilization. Namely, if an atomic bomb were as easy to create as dynamite (much easier than on Earth, where the difficulty depends on the quantity of uranium and on its chemical and nuclear properties, i.e., is determined by the basic laws of the universe), then the chances of the average civilization surviving would be lower. If Smolin’s hypothesis is correct, then we should encounter insurmountable difficulties in creating nano-robots, the microelectronics needed for strong AI, and harmful accelerator experiments involving strangelets (except those that lead to the creation of black holes and new universes), and in several other potentially dangerous technological trends whose success depends on the basic properties of the universe, which may manifest themselves in the peculiarities of its chemistry.

In addition, Smolin’s evolution of universes implies that a civilization should create black holes as early as possible in its history, since the later this happens, the greater the chance that the civilization will self-destruct before it can create them. Moreover, a civilization is not required to survive beyond the moment of “replication” (though survival may be useful for replication, if a civilization creates many black holes over a long existence). From these two points it follows that we may be underestimating the risk that the Large Hadron Collider will create black holes.

Continue reading “Natural selection of universes and risks for the parent civilization” »

Mar 23, 2010

Risk intelligence

Posted by in categories: education, events, futurism, geopolitics, policy, polls

A few months ago, my friend Benjamin Jakobus and I created an online “risk intelligence” test at http://www.projectionpoint.com/. It consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. We calculate your risk intelligence quotient (RQ) on the basis of your estimates. So far, over 30,000 people have taken our test, and we’re currently writing up the results for some peer-reviewed journals.
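
For readers curious how such a score might work mechanically, here is a minimal sketch of one possible calibration-based scoring rule; the function and the formula are my illustrative assumptions, not the actual RQ computation used at projectionpoint.com.

```python
# One possible calibration-style score: group answers by the stated
# probability category, compare each category's stated probability with
# the observed frequency of true statements, and penalise the gap.
# This is an illustrative stand-in, not projectionpoint.com's formula.
from collections import defaultdict

def calibration_score(answers: list[tuple[float, bool]]) -> float:
    """answers: (stated probability, whether the statement was true).
    Returns 0-100, where 100 means perfectly calibrated."""
    bins = defaultdict(list)
    for prob, truth in answers:
        bins[round(prob, 1)].append(truth)    # group into 10% categories

    gaps = []
    for prob, truths in bins.items():
        observed = sum(truths) / len(truths)  # fraction actually true
        gaps.append(abs(prob - observed))     # calibration error per bin
    return 100 * (1 - sum(gaps) / len(gaps))

# Someone who says "70%" five times and is right four times is close to
# calibrated in that category; saying "10%" but being right a third of
# the time costs more.
sample = [(0.7, True), (0.7, True), (0.7, True), (0.7, True), (0.7, False),
          (0.1, False), (0.1, False), (0.1, True)]
print(f"calibration-style score: {calibration_score(sample):.0f}")  # ~83
```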

Now we want to take things a step further, and see whether our measure correlates with the ability to make accurate estimates of future events. To this end we’ve created a “prediction game” at http://www.projectionpoint.com/prediction_game.php. The basic idea is the same; we provide you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts, but to future events. Unlike the first test, nobody knows whether these statements are true or false yet. For most of them, we won’t know until the end of the year 2010.

For example, how likely do you think it is that this year will be the hottest on record? If you think this is very unlikely you might select the 10% category. If you think it is quite likely, but not very likely, you might put the chances at 60% or 70%. Selecting the 50% category would mean that you had no idea how likely it is.

This is ongoing research, so please feel free to comment, criticise or make suggestions.

Mar 12, 2010

Reduction of human intelligence as global risk

Posted by in categories: existential risks, neuroscience

Another risk is the loss of human rationality while human life is preserved. In any society there are many people with limited cognitive abilities, and most achievements are made by a small number of talented people. Genetic and social degradation, a falling level of education, and the loss of skills in logic could lead to a temporary decrease in the intelligence of particular groups of people. But as long as humanity’s population remains very large, this is not so bad, because there will always be enough intelligent people. A significant drop in population after a non-global disaster could exacerbate this problem, and the low intelligence of the remaining people would then reduce their chances of survival. One can, of course, imagine the absurd scenario in which people degrade so far that a new species without full-fledged intelligence evolves from us, and that this species later evolves a new intelligence of its own.

More dangerous is a decline of intelligence caused by the spread of technological contaminants (or by the use of a certain kind of weapon). For example, I should mention the constantly growing global contamination by arsenic, which is used in various technological processes. Sergio Dani wrote about this in his article “Gold, coal and oil”: http://sosarsenic.blogspot.com/2009/11/gold-coal-and-oil-reg…is-of.html, http://www.medical-hypotheses.com/article/S0306-9877(09)00666-5/abstract

Arsenic released during the mining of gold remains in the biosphere for millennia. Dani links arsenic to Alzheimer’s disease. In another paper he demonstrates that increasing concentrations of arsenic lead to an exponential increase in the incidence of Alzheimer’s disease. He believes that people are particularly vulnerable to arsenic poisoning because they have large brains and long lifespans. If, however, as Dani suggests, people adapt to high levels of arsenic in the course of evolution, this will lead to smaller brains and shorter life expectancy, with the result that human intellect will be lost.

Besides arsenic, contamination involves many other neurotoxic substances: CO, CO2, methane, benzene, dioxin, mercury, lead, and so on. Although the level of pollution from each of them taken separately is below health standards, the sum of their impacts may be larger. One proposed cause of the fall of the Roman Empire was the mass poisoning of its citizens (though not of the barbarians) by lead from water pipes. The Romans, of course, could have no knowledge of such remote and unforeseen consequences, but we too may be unaware of many consequences of our own activities.
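
Toxicologists make this “individually safe, jointly harmful” worry concrete with a simple hazard index: the sum of each exposure divided by its safe limit. The substances below are the ones just mentioned, but every concentration and limit is an invented placeholder, not a real measurement or regulatory value.

```python
# Hazard index for combined exposures: the sum of (measured level / safe
# limit) across substances. Each ratio below 1.0 is individually "safe",
# but a combined index above 1.0 flags possible concern. All numbers are
# invented placeholders, not real measurements or regulatory limits.

exposures = {
    # substance: (measured level, safe limit), in the same arbitrary units
    "lead":    (0.6, 1.0),
    "mercury": (0.4, 1.0),
    "arsenic": (0.5, 1.0),
    "benzene": (0.3, 1.0),
}

hazard_index = sum(level / limit for level, limit in exposures.values())

for name, (level, limit) in exposures.items():
    print(f"{name:8s} ratio: {level / limit:.2f} (individually below 1.0)")
print(f"combined hazard index: {hazard_index:.2f} (above 1.0: possible concern)")
```
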
In addition, alcohol and many drugs also lead toward dementia (for example, the package leaflets of some heartburn medications list dementia as a side effect), as do rigid ideological systems, or memes.

A number of infections, particularly prion infections, also lead to dementia.

Despite all this, the average IQ of people is growing, as is life expectancy.

Mar 10, 2010

Why AI could fail?

Posted by in category: robotics/AI

AI is our best hope for long-term survival. If we fail to create it, that failure will have some cause. Here I suggest a complete list of the possible causes of failure, though I do not believe in them. (I was inspired by V. Vinge’s article “What if the Singularity does not happen?”)

I think most of these points are wrong, and that AI will eventually be created.

Technical reasons:
1) Moore’s Law will stop for physical reasons before hardware sufficiently powerful and inexpensive for artificial intelligence can be built.
2) Silicon processors are less efficient than neurons for creating artificial intelligence.
3) The AI problem cannot be algorithmically parallelized, and as a result AI will be extremely slow (see the Amdahl’s-law sketch below).
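
Point 3 is essentially Amdahl’s law: if some fraction of a computation is inherently serial, adding processors stops helping very quickly. A minimal sketch, with arbitrary example fractions:

```python
# Amdahl's law: the speedup from n processors when a fraction s of the
# work is inherently serial is  speedup(n) = 1 / (s + (1 - s) / n).
# If AI algorithms had a large serial fraction (point 3 above), extra
# hardware would buy almost nothing. The fractions are arbitrary examples.

def speedup(serial_fraction: float, n_processors: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

for s in (0.01, 0.5):
    print(f"serial fraction {s:.0%}: a million processors give only "
          f"{speedup(s, 10**6):.1f}x speedup")
# serial fraction 1%:  ~100x (the cap is 1/s = 100)
# serial fraction 50%: ~2x   (the cap is 1/s = 2)
```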

Philosophy:
4) Human beings use some method of processing information that is essentially inaccessible to algorithmic computers. So Penrose believes. (But we could harness this method using bioengineering techniques.) More generally, a final proof of the impossibility of creating artificial intelligence would be tantamount to recognizing the existence of the soul.
5) A system cannot create a system more complex than itself, so people cannot create artificial intelligence, since all the proposed solutions are too simple. That is, AI is possible in principle, but people are too stupid to build it. In fact, one reason for past failures in the creation of artificial intelligence is that people underestimate the complexity of the problem.
6) AI is impossible, because any sufficiently complex system reveals the meaninglessness of existence and stops.
7) All possible ways to optimize are exhausted. AI has no fundamental advantage over the human-machine interface and has a limited scope of use.
8) A human in a body possesses the maximum attainable level of common sense, and any disembodied AI is either ineffective or merely a model of a person.
9) AI is created, but has no problems that it could and should address. All problems have either been solved by conventional methods or proven uncomputable.
10) AI is created, but is not capable of recursive self-optimization, since that would require some radically new ideas, and there are none. As a result, AI exists either as a curiosity or in limited specific applications, such as automated drivers.
11) The idea of artificial intelligence is flawed, because it has no precise definition, or is even an oxymoron, like “artificial natural.” As a result, development produces specific tools or models of man, but not universal artificial intelligence.
12) There is an upper limit to the complexity of systems, beyond which they become chaotic and unstable, and it only slightly exceeds the intellect of the most intelligent people. AI slowly approaches this threshold of complexity.
13) The bearer of intelligence is qualia. At our level of intelligence there must be many events that are indescribable and unknowable, yet a superintellect should understand them by definition; otherwise it is not a superintellect, but simply a fast intellect.

Continue reading “Why AI could fail?” »
