
Mar 14, 2011

“CERN Ignores Scientific Proof That Its Current Experiment Puts Earth in Jeopardy”

Posted in categories: existential risks, particle physics

I deeply sympathize with the Japanese victims of a lack of human caution regarding nuclear reactors. Is it compatible with this atonement if I desperately ask the victims to speak up with me against the next consciously incurred catastrophe, made in Switzerland? As long as the proof of danger remains undisproved, CERN is currently about to melt the earth’s mantle along with its core down to a 2-cm black hole in perhaps 5 years’ time, at a probability of 8 percent. A million nuclear power plants pale before the “European Centre for Nuclear Research.” CERN must not be allowed to go on shunning the scientific safety conference sternly advised by a Cologne court only six weeks ago.

I thank Lifeboat for distributing this message worldwide.

Mar 12, 2011

Five Results on Mini-Black Holes Left Undiscussed by CERN for 3 Years

Posted in categories: existential risks, particle physics

1) Mini black holes are both non-evaporating and uncharged.

2) Their newly established unchargedness makes them much more likely to arise in the LHC (since electrons are no longer point-shaped, in confirmation of string theory).

3) When stuck inside matter, mini black holes grow exponentially as “miniquasars,” shrinking earth to 2 cm in perhaps 5 years’ time.

4) They go undetected by CERN’s detectors.


Mar 10, 2011

“Too Late for the Singularity?”

Posted in categories: existential risks, lifeboat, particle physics

Ray Kurzweil is unique in having seen the unstoppable exponential growth of the computer revolution and in extrapolating it correctly toward a point he calls the “singularity,” which he projects about 50 years into the future. At that point, the digital revolution will surpass the combined brain power of all human beings.

The theory of the singularity has two flaws: a repairable one and a hopefully not irreparable one. The repairable one has to do with the different use humans make of their brains compared with that of all animals on earth and, presumably, in the universe. This special use can, however, be clearly defined and, because of its preciousness, be exported. This idea of “galactic export” makes Kurzweil’s program even more attractive.

The second drawback is nothing Ray Kurzweil has anything to do with; it is entirely the fault of the rest of humankind: the half century that the singularity still needs in order to be reached may no longer be available.

The reason for that is CERN. Even though it was presented in time with published proofs that its proton-colliding experiment will, with a probability of 8 percent, produce a resident, exponentially growing mini black hole that eats earth inside out in perhaps 5 years’ time, CERN prefers not to quote those results, or to try to dismantle them, before acting. Even the call by an administrative court (Cologne) to convene the overdue scientific safety conference before continuing was ignored when CERN re-ignited the machine a week ago.


Feb 25, 2011

Security and Complexity Issues Implicated in Strong Artificial Intelligence, an Introduction

Posted in categories: complex systems, existential risks, information science, robotics/AI

Strong AI, or Artificial General Intelligence (AGI), stands for self-improving intelligent systems possessing the capacity to interact with theoretical and real-world problems with a flexibility similar to that of an intelligent living being, but with the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastic and cognitive science as well as traditional artificial intelligence. My aim in this post is to give a general readership a very basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily utilized in today’s real-world problems: mature machine learning algorithms profitably exploit patterns in customer behaviour, find correlations in scientific data or even predict negotiation strategies, for example [1] [2], and genetic algorithms see similar use. With the upcoming technology for organizing knowledge on the net, the semantic web, which deals with machine-interpretable understanding of words in the context of natural language, we may start inventing early parts of the technology that will play a role in the future development of AGI. Semantic approaches come from computer science, sociology and current AI research, and promise to describe and ‘understand’ real-world concepts, enabling our computers to build interfaces to real-world concepts and coherences more autonomously. Actually getting from expert systems to AGI will require approaches to bootstrap self-improving systems and more research on cognition, but it must also involve crucial security aspects. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].

In the recent past, we have seen new kinds of security challenges: DoS attacks, email and PDF worms and a plethora of other malware, which sometimes even made it into military and other sensitive networks and stole credit cards and private data en masse. These were, and are, among the first serious incidents related to the Internet. Still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications and, of course, human flaws (e.g. the emotional response exploited by the “ILOVEYOU” virus). Understanding the security implications of strong AI first means realizing that, once an AGI takes off hard enough, there will probably no longer be any human-predictable hardware, software or interfaces around for long.

To grasp the new security implications, it is important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is riddled with biases rooted in its biological evolution. The application of the simplest mathematical rules, for example, can produce complex results that are hard to understand and predict by common sense. Cellular automata are simple rules for generating new cells based on which cells, generated by the same rule, were observed in the previous step. Each such elementary rule can be encoded in as little as a single byte (8 bits), yet many of them generate astounding complexity.
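As an illustration, a minimal elementary cellular automaton fits in a few lines of Python. Rule 30 is used here as an arbitrary example (the post names no particular rule); the point is that the entire update table is one 8-bit number.

```python
# Elementary cellular automaton: the whole update rule fits in 8 bits.
# Rule 30 is an illustrative choice, not one named in the post.

def step(cells, rule=30):
    """Apply an elementary CA rule to a row of 0/1 cells (edges fixed at 0)."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right  # value 0..7
        out.append((rule >> neighborhood) & 1)  # look up that bit of the rule
    return out

row = [0] * 15 + [1] + [0] * 15  # one live cell in the middle
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The 8 bits of the rule number choose the successor cell for each of the 2³ = 8 possible neighborhoods; running the loop shows the characteristic irregular triangle growing from a single cell.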


Feb 17, 2011

The Global Brain and its role in Human Immortality

Posted in categories: biological, biotech/medical, complex systems, futurism, life extension, neuroscience

It would be helpful to discuss these theoretical concepts because there could be significant practical and existential implications.

The Global Brain (GB) is an emergent world-wide entity of distributed intelligence, facilitated by communication and the meaningful interconnections between millions of humans via technology (such as the internet).

For my purposes I take it to mean the expressive integration of all (or the majority) of human brains through technology and communication, a Metasystem Transition from the human brain to a global (Earth) brain. The GB is truly global not only in geographical terms but also in function.

It has been suggested that the GB has clear analogies with the human brain. For example, the basic unit of the human brain (HB) is the neuron, whereas the basic unit of the GB is the human brain. Whilst the HB is space-restricted within our cranium, the GB is constrained within this planet. The HB contains several regions that have specific functions themselves, but are also connected to the whole (e.g. occipital cortex for vision, temporal cortex for auditory function, thalamus etc.). The GB contains several regions that have specific functions themselves, but are connected to the whole (e.g. search engines, governments, etc.).


Feb 10, 2011

New Implication of Einstein’s Happiest Thought Is Last Hope for Planet

Posted in categories: existential risks, particle physics

Einstein saw that clocks located “more downstairs” in an accelerating rocket predictably tick more slowly. This, as he often said, was his “happiest thought.”

However, as everything looks normal on the lower floor, the normal-appearing photons generated there do actually have less mass-energy. By general covariance, so do all local masses there, and hence also all associated charges down there.
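The size of the clock effect follows from the standard weak-field formula Δf/f ≈ gh/c². That formula is textbook general relativity, not something specific to this post; the sketch below merely evaluates it.

```python
# Fractional rate difference between two clocks separated by height h
# in a uniform field g (Einstein's accelerating-rocket picture).
# Standard weak-field approximation: delta_f / f ≈ g * h / c**2.

g = 9.81          # m/s^2, surface gravity
c = 299_792_458   # m/s, speed of light

def fractional_slowdown(h):
    """Fraction by which the lower clock ticks slower than one h metres above."""
    return g * h / c ** 2

# One metre of height difference on Earth:
print(fractional_slowdown(1.0))  # on the order of 1e-16
```

Modern optical clocks can resolve a shift of this size over a single metre, which is why the effect matters to metrologists.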

The last two implications were overlooked for a century. “This cannot be,” more than 30 renowned scientists declared, so as to let a prestigious experiment with which they have ties appear innocuous.

This would make an ideal script for movie makers and a bonanza for metrologists. But why the political undertones above? Because, like the bomb, this new crumb from Einstein’s table has a potentially unbounded impact. Only if it gets appreciated within a few days’ time can all human beings, including the Egyptians, breathe freely again.


Feb 9, 2011

Mixed Messages: Tantrums of an Angry Sun

Posted in categories: business, events, geopolitics, particle physics, policy, space

When examining the delicate balance that life on Earth hangs within, it is impossible not to consider the ongoing love/hate connection between our parent star, the sun, and our uniquely terraqueous home planet.

On one hand, Earth is situated so perfectly, so ideally, inside the sun’s habitable zone that it is impossible not to esteem our parent star with a sense of ongoing gratitude. It is, after all, the onslaught of spectral rain, the sun’s seemingly limitless output of charged particles, that provides the initial spark to all terrestrial life.

Yet on another hand, during those brief moments of solar upheaval, when highly energetic Earth-directed ejecta threaten with destruction our precipitously perched technological infrastructure, one cannot help but eye with caution the potentially calamitous distance of only 93 million miles that our entire human population resides from this unpredictable stellar inferno.


Feb 8, 2011

GC Lingua Franca(s)

Posted in categories: futurism, open source

This is an email to the Linux kernel mailing list, but it relates to futurism topics, so I post a copy here as well.
———
Science doesn’t always proceed at the speed of thought. It often proceeds at sociological or even demographic speed. — John Tooby

Open Letter to the LKML;

If we were already talking to our computers, etc. as we should be, I wouldn’t feel a need to write this to you. Given current rates of adoption, Linux still seems a generation away from being the priceless piece of free software useful to every child and PhD. The army your kernel enables has millions of people, but they often lose to smaller proprietary armies because they are working inefficiently. My mail one year ago (http://keithcu.com/wordpress/?p=272) listed the biggest work items, but I realize now that I should have focused on one. In a sentence: I have discovered that we need GC lingua franca(s) (http://www.merriam-webster.com/dictionary/lingua%20franca).

Every Linux success builds momentum, but the desktop serves as a powerful daily reminder of the scientific tradition. Many software PhDs publish papers but not source, like Microsoft. I attended a human genomics conference and found that the biotech world is filled with proprietary software. IBM’s Jeopardy-playing Watson is proprietary, as Deep Blue was. This topic is not discussed in any of the news articles, as if the license did not matter. I find widespread fear of having ideas stolen in the software industry, and proprietary licenses encourage this. We need to get these paranoid programmers, hunched in the shadows with scribbled secrets clutched in their fists, working together, if any of them are to succeed. Desktop world domination is not necessary, but it is sufficient to get robotic chauffeurs and butlers. Windows is not the biggest problem; the proprietary licensing model that has infected computing, and science, is.


Feb 1, 2011

Human Biological Immortality in 50 years

Posted in categories: biological, complex systems, futurism

I believe that death due to ageing is not an absolute necessity of human nature. From the evolutionary point of view, we age because nature withholds energy for somatic (bodily) repairs and diverts it to the germ-cells (in order to assure the survival and evolution of the DNA). This is necessary so that the DNA is able to develop and achieve higher complexity.

Although this was a valid scenario until recently, we have now evolved to such a degree that we can use our intellect to achieve further cognitive complexity by manipulating our environment. This makes it unnecessary for the DNA to evolve along the path of natural selection (which is a slow and cumbersome, ‘hit-and-miss’ process), and allows us to develop quickly and more efficiently by using our brain as a means for achieving higher complexity. As a consequence, death through ageing becomes an illogical and unnecessary process. Humans must live much longer than the current lifespan of 80–120 years, in order for a more efficient global evolutionary development to take place.

It is possible to estimate how long the above process will take to mature (see figure below). Consider that the creation of the DNA was approximately 2 billion years ago, the formation of a neuron (cell) several million years ago, that of an effective brain (Homo sapiens sapiens) 200,000 years ago, and the establishment of complex societies (Ancient Greece, Rome, China etc.) thousands of years ago. There is a logarithmic reduction of the time necessary to proceed to the next, more complex step (a reduction by a factor of 100). This means that global integration (and thus indefinite lifespans) will be achieved in a matter of decades (and certainly in less than a century), starting from the 1960s-1970s (when globalisation in communications, travel and science/technology started to become established). This leaves a maximum of another 50 years before full global integration becomes established.

Each step is associated with a higher level of complexity and takes a fraction of the time to mature, compared with the previous one.
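The factor-of-100 pattern above can be checked with a few lines of arithmetic. The numbers are the post’s own rounded figures, and the 20-year extrapolation simply continues the post’s claimed pattern; none of this is an independent result.

```python
# The post's milestones, in years before present (rounded as in the text).
milestones = [
    ("creation of DNA", 2_000_000_000),
    ("formation of a neuron", 20_000_000),      # "several million years ago"
    ("effective brain (Homo sapiens)", 200_000),
    ("complex societies", 2_000),               # "thousands of years ago"
]

# Each interval is roughly 100x shorter than the one before it.
for (name, t), (next_name, t_next) in zip(milestones, milestones[1:]):
    print(f"{name} -> {next_name}: factor {t / t_next:.0f}")

# Continuing the pattern from ~2,000 years gives the next step after
# roughly 2,000 / 100 = 20 years, i.e. the post's "matter of decades".
print(2_000 / 100)
```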


Jan 30, 2011

Summary of My Scientific Results on the LHC-Induced Danger to the Planet

Posted in categories: existential risks, particle physics

- submitted to the District Attorney of Tübingen, to the Administrative Court of Cologne, to the Federal Constitutional Court (BVerfG) of Germany, to the International Court for Crimes Against Humanity, and to the Security Council of the United Nations -

by Otto E. Rössler, Institute for Physical and Theoretical Chemistry, University of Tübingen, Auf der Morgenstelle A, 72076 Tübingen, Germany

The results of my group represent fundamental research in the fields of general relativity, quantum mechanics and chaos theory. Several independent findings obtained in these disciplines jointly point to a danger, almost as if Nature had set a trap for humankind should it fail to watch out.

MAIN RESULT. It concerns BLACK HOLES and consists of 10 sub-results
