
Quantum replicants of responsive systems can be more efficient than classical models, say researchers from the Centre for Quantum Technologies in Singapore, because classical models have to store more past information than is necessary to simulate the future. They have published their findings in npj Quantum Information.

The word ‘replicant’ evokes thoughts of a sci-fi world where society has replaced common creatures with artificial machines that replicate their behaviour. Now researchers from Singapore have shown that if such machines are ever created, they’ll run more efficiently if they harness quantum theory to respond to the environment.

This follows the findings of a team from the Centre for Quantum Technologies (CQT), published 10 February in npj Quantum Information. The team investigated ‘input-output processes’, assessing the mathematical framework used to describe arbitrary devices that make future decisions based on stimuli received from the environment. In almost all cases, they found, a quantum device is more efficient because classical devices have to store more past information than is necessary to simulate the future.
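The memory saving behind this claim can be illustrated with a toy model. The sketch below is a minimal, hedged example in Python, not the paper’s own input-output construction: it uses the well-known ‘perturbed coin’ process from the wider literature on quantum models of stochastic processes, with an assumed encoding of the two causal states into non-orthogonal qubit states, and compares the classical memory cost (Shannon entropy of the stored state) with the quantum cost (von Neumann entropy of the corresponding mixture). The function names and the specific encoding are illustrative assumptions.

```python
import numpy as np

def shannon_entropy(probs):
    """Shannon entropy in bits, ignoring zero (or numerically negative) entries."""
    probs = np.asarray(probs, dtype=float)
    probs = probs[probs > 0]
    return float(-np.sum(probs * np.log2(probs)))

def memory_costs(p):
    """
    Memory cost of simulating a 'perturbed coin' that flips with probability p.

    Classical model: store which face the coin currently shows (two causal
    states, each occupied half the time), so the cost is the Shannon entropy
    of that distribution -- always 1 bit.

    Quantum model (illustrative encoding): map the two states to the
    non-orthogonal qubit states
        |s0> = sqrt(1-p)|0> + sqrt(p)|1>
        |s1> = sqrt(p)|0>   + sqrt(1-p)|1>
    and pay the von Neumann entropy of their equal mixture.
    """
    classical = shannon_entropy([0.5, 0.5])

    s0 = np.array([np.sqrt(1 - p), np.sqrt(p)])
    s1 = np.array([np.sqrt(p), np.sqrt(1 - p)])
    rho = 0.5 * (np.outer(s0, s0) + np.outer(s1, s1))
    quantum = shannon_entropy(np.linalg.eigvalsh(rho))  # von Neumann entropy
    return classical, quantum

for p in (0.1, 0.3, 0.45):
    c, q = memory_costs(p)
    print(f"flip probability {p}: classical {c:.3f} bits vs quantum {q:.3f} bits")
```

For any flip probability other than one half, the quantum cost comes out below one bit while the classical cost stays at one bit; the CQT result says that, in almost all cases, a gap of this kind carries over to devices that also accept inputs from their environment.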


Early probes are one thing, but can we build a continuing presence among the stars, human or robotic? An evolutionary treatment of starflight sees it growing from a steadily expanding presence right here in our Solar System, the kind of infrastructure Alex Tolley examines in the essay below. How we get to a system-wide infrastructure is the challenge, one analyzed by a paper that sees artificial intelligence and 3D printing as key drivers leading to a rapidly expanding space economy. The subject is a natural for Tolley, who is co-author (with Brian McConnell) of A Design for a Reusable Water-Based Spacecraft Known as the Spacecoach (Springer, 2016). An ingenious solution to cheap transportation among the planets, the Spacecoach could readily be part of the equation as we bring assets available off-planet into our economy and deploy them for even deeper explorations. Alex is a lecturer in biology at the University of California, and has been a Centauri Dreams regular for as long as I can remember, one whose insights are often a touchstone for my own thinking.

By Alex Tolley



Let’s just go ahead and address the question on everyone’s mind: will AI kill us? What is the negative potential of transhuman superintelligence? Once its cognitive power surpasses our own, will it give us a leg-up in ‘the singularity’, or will it look at our collective track record of harming our own species, other species, the world that gave us life, etc., and exterminate us like pests? AI expert Ben Goertzel believes we’ve been at this point of uncertainty many times before in our evolution. When we stepped out of our caves, it was a risk – no one knew it would lead to cities and space flight. When we spoke the first word, took up agriculture, invented the printing press, flicked the internet on-switch – all of these things could have led to our demise, and in some sense, our eventual demise can be traced all the way back to the day an ancient human learnt how to make fire. Progress helps us, until the day it kills us. That said, fear of negative potential cannot stop us from attempting forward motion – and by now, says Goertzel, it’s too late anyway. Even if the U.S. decided to pull the plug on superhuman intelligence research, China would keep at it. Even if China pulled out, Russia, Australia, Brazil and Nigeria would march on. We know there are massive benefits – both humanitarian and corporate – and we have latched onto the idea. “The way we got to this point as a species and a culture has been to keep doing amazing new things that we didn’t fully understand,” says Goertzel, and for better or worse, “that’s what we’re going to keep on doing.”

Ben Goertzel’s most recent book is AGI Revolution: An Inside View of the Rise of Artificial General Intelligence.


If climate change, nuclear weapons or Donald Trump don’t kill us first, there’s always artificial intelligence waiting in the wings. It has long been a worry that once AI gains a certain level of autonomy it will see no use for humans, or will even perceive them as a threat. A new study by Google’s DeepMind lab may or may not ease those fears.


It looks like self-driving cars may create a US organ shortage that finally acts as the kick in the ass needed to force stem-cell-generated organs onto the market. Enough of this ‘in the future we might have these’ nonsense.


Science, however, can offer a better solution.

The waiting lists for donor organs are long — 120,000 people on a given day — and ever increasing. With fewer donor organs to go around, researchers are working on other ways to get people the parts they need. With help from 3D printing and other bioengineering technologies, we will eventually be able to grow our own organs and stop relying on donors.
