

Harnessing “Black Holes”: The Large Hadron Collider – Ultimate Weapon of Mass Destruction

Why the LHC must be shut down

CERN-Critics: LHC restart is a sad day for science and humanity!

PRESS RELEASE “LHC-KRITIK”/”LHC-CRITIQUE” www.lhc-concern.info
In recent days, CERN has restarted the world’s biggest particle collider, the so-called “Big Bang Machine”, the LHC. After an upgrade costing hundreds of millions of euros, CERN plans to smash particles at roughly double the previous energies. This poses certain, one would hope small, but fundamentally unpredictable catastrophic risks to planet Earth.
Essentially the same group of critics, including professors and doctors, that had previously filed lawsuits against CERN in the US and Europe still opposes the restart, for largely the same reasons. The dangers of (micro) black holes, strangelets, vacuum bubbles and other phenomena are still under discussion, and perhaps always will be. In the meantime, no specific improvements to the safety assessment of the LHC have been made by CERN or anyone else. There is still no proper, genuinely independent risk assessment (the “LSAG report” was produced by CERN itself), and the science of risk research is still not really involved in the issue. This is a scientific and political scandal, and that is why the restart is a sad day for science and humanity.
The scientific network “LHC-Critique” calls for an end to any public sponsorship of gigantomaniacal particle colliders.
Just to demonstrate how speculative this research is: even CERN has to admit that the so-called “Higgs boson” was only “probably” discovered. Very probably, mankind will never find any use for the Higgs boson. We are not talking here about the use of collider technology in medicine. Comprehending the Big Bang one day could be a minor, though very improbable, advantage for mankind. But knowing how to handle this or other extreme phenomena of the universe would surely be fatal, as the Atomic Age has already demonstrated.
Within the next billions of years, mankind will have enough problems without CERN.
Sources:
- A new paper by our partner “Heavy Ion Alert” will be published soon: http://www.heavyionalert.org/
- Background documents provided by our partner “LHC Safety Review”: http://www.lhcsafetyreview.org/
- Press release by our partner “Risk Evaluation Forum” emphasizing renewed particle collider risk: http://www.risk-evaluation-forum.org/newsbg.pdf
- Study concluding that “Mini Black Holes” could be created at planned LHC energies: http://phys.org/news/2015-03-mini-black-holes-lhc-parallel.html
- New paper by Dr. Thomas B. Kerwick on the lacking safety argument by CERN: http://vixra.org/abs/1503.0066
- More info at the LHC-Kritik/LHC-Critique website: www.LHC-concern.info
Best regards:
LHC-Kritik/LHC-Critique

Is Immortality GOOD or BAD?

Vicki Turk & Brian Anderson | Motherboard
“That’s another basic thing that the doom-and-gloom, death-is-preferable-to-the-future crowd seem to misunderstand. The world won’t just stay the same, with everyone trudging along in a state of boredom; it’ll keep changing. There’ll be new stuff to do because we’ll keep making new stuff. We’ll get those jetpacks we were promised, and that’s just the start.” Read more

Human Laws Can’t Control Killer Robots, New Report Says

Kari Paul | Motherboard


“When a human being is killed by an autonomous machine, who takes the blame? Human rights non-governmental organization Human Rights Watch says it is virtually impossible to tell, and that presents unprecedented danger in the future of warfare. The group released a report today showing how difficult it will be to hold commanders, operators, programmers or manufacturers legally responsible for crimes committed by autonomous machines under current legislature.” Read more

App Maps Addresses Of Anti-Gun Violence Activists

By — Fast Company
On Thursday morning, a handful of anti-gun-violence activists realized there is an app in the Google Play Store with their names on it—literally. The app, Gunfree Geo Marker, features a map pinpointing the home and work addresses of politicians, gun control organization employees, and “random anti-gun trolls” who “push the anti-gun agenda in any way, shape or form.”

Clicking on a person’s name in the menu reveals their address on a Google map, along with the app creator’s reasons for including that person in the app. Read more

Space Privatization, Tourism And Morals

By: Leigh Cooper — Inside Science
Novel technologies, innovative engineering and breathtaking discoveries could be the story of the next 100 years of space exploration. But space travel involves more than math, telescopes and rovers, according to the speakers at a session at last month’s annual meeting of the American Association for the Advancement of Science in San Jose, California. Modern space exploration mixes together governments and private companies, science and ethics, promise and possibilities.

Chris Impey, an astronomer at the University of Arizona in Tucson, thinks that the desire to explore, which has pushed humans to cross oceans and conquer mountains, will continue to propel humans into space.

“I think what is happening now is as profound as the transition that took place among hunter-gatherers when they left Africa 50 or 60 thousand years ago,” said Impey. “It took an amazingly short time – just a couple hundred generations – for simple tribal units of 50 or 100 to spread essentially across the Earth.” Read more

Can We Trust Robot Cars to Make Hard Choices?

By - SingularityHub

The ethics of robot cars has been a hot topic recently. In particular, if a robot car encounters a situation where it is forced to hit one person or another—which should it choose and how does it make that choice? It’s a modern version of the trolley problem, which many have studied in introductory philosophy classes.

Imagine a robot car is driving along when two people run out onto the road, and the car cannot avoid hitting one or the other. Assume neither person can get away, and the car cannot detect them in advance. Various thinkers have suggested how to make an ethical decision about who the car should hit:

  • The robot car could run code to make a random decision.
  • The robot car could hand off control to a human passenger.
  • The robot car could make a decision based on a set of values pre-programmed by the car’s designers, or on a set of values programmed by the owner.

The last of these deserves a little more detail. What would these values be like?
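As a purely illustrative sketch (not from the article), here is roughly what owner- or designer-configurable values might look like in code; the EthicsSettings structure, the weights, and the scoring rule below are all hypothetical assumptions, not anything the article specifies.

```python
import random
from dataclasses import dataclass

@dataclass
class EthicsSettings:
    """Hypothetical owner- or designer-set values; names and weights are illustrative only."""
    fewer_casualties_weight: float = 1.0  # weight on the number of people endangered
    protect_children_weight: float = 0.5  # extra penalty per child endangered
    defer_to_human: bool = False          # hand control back to a passenger if one can respond

def choose_path(paths, settings=None, human_available=False):
    """Pick among unavoidable paths using the three strategies listed above.

    paths maps a path name to (people_endangered, children_endangered).
    """
    if settings is None:
        # Strategy 1: no values configured, so fall back to a random decision.
        return random.choice(list(paths))
    if settings.defer_to_human and human_available:
        # Strategy 2: hand off control to a human passenger.
        return "hand control to passenger"
    # Strategy 3: score each path against the configured values and pick the lowest harm.
    def harm(path):
        people, children = paths[path]
        return (settings.fewer_casualties_weight * people
                + settings.protect_children_weight * children)
    return min(paths, key=harm)

# Example: two unavoidable options, the second also endangering a child.
paths = {"swerve left": (1, 0), "swerve right": (1, 1)}
print(choose_path(paths, EthicsSettings()))  # -> swerve left
```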

Read more

Illegal, Immoral, and Here to Stay: Counterfeiting and the 3D Printing Revolution

By Josh Greenbaum — Wired

If you’re looking for a way to gauge how the 3D printing market will evolve, look no further than the dawn of two other revolutionizing technologies – the desktop printing market and the VHS standard. And be prepared for a decidedly off-color story.

While many of us have fond memories of watching a favorite movie when it first came out on VHS, or admiring the first three-color party invitation we printed on a laser printer, the fact remains that innocent pursuits were not the sole reason either of these technologies took off. And we shouldn’t expect 3D printing to be any different.
Read more

Will Robots Be Able to Help Us Die?

Graham Templeton — Motherboard

The robot stares down at the sickly old woman from its perch above her home care bed. She winces in pain and tries yet again to devise a string of commands that might trick the machine into handing her the small orange bottle just a few tantalizing feet away. But the robot is a specialized care machine on loan from the hospital. It regards her impartially from behind a friendly plastic face, assessing her needs while ignoring her wants.

If only she’d had a child, she thinks for the thousandth time, maybe then there’d be someone left to help her kill herself.

Hypothetical scenarios such as this inspired a small team of Canadian and Italian researchers to form the Open Roboethics Initiative (ORi). Based primarily out of the University of British Columbia (UBC), the organization is just over two years old. The idea is not that robotics experts know the correct path for a robot to take at every robo-ethical crossroad—but rather, that robotics experts do not.

“Ethics is an emergent property of a society,” said ORi board member and UBC professor Mike Van der Loos. “It needs to be studied to be understood.” Read more
