Archive for the ‘complex systems’ category: Page 18

Jun 9, 2009

Hack-Jet: Losing a commercial airliner in a networked world

Posted by in categories: complex systems, counterterrorism, futurism


When there is a catastrophic loss of an aircraft in any circumstances, a host of questions are inevitably raised about the safety and security of the aviation operation. The loss of Air France flight 447 off the coast of Brazil, with little evidence upon which to work, inevitably raises the level of speculation surrounding the fate of the flight. Large-scale incidents such as this create an enormous cloud of data, which has to be investigated in order to discover the pattern of events that led to the loss (not helped when some of it may be two miles under the ocean surface). So far French authorities have been quick to rule out terrorism. It has, however, emerged that a bomb hoax against an Air France flight had been made the previous week on a different route from Argentina. This currently does not seem to be linked, and no terrorist group has claimed responsibility. Much of the speculation regarding the fate of the aircraft has focused on the effects of bad weather or a glitch in the fly-by-wire system that could have caused the plane to dive uncontrollably. There is, however, another theory which, while currently unlikely, would if true change the global aviation security situation overnight: a hacked jet.

Given the plethora of software modern jets rely on, it seems reasonable to assume that these systems could be compromised by code designed to trigger catastrophic systemic events within the aircraft’s navigation or other critical electronic systems. Just as aircraft have a physical presence, they increasingly have a virtual footprint, and this changes their vulnerability. A systemic software corruption may account for the mysterious absence of a Mayday call: the communications system may have been offline. Designing airport and aviation security to keep lethal code off civilian aircraft would, in the short term, be beyond any government civil security regime. A malicious code attack of this kind against any civilian airliner would therefore be catastrophic, not only for the airline industry but also for the wider global economy, until security caught up with this new threat. The technical ability to conduct an attack of this kind remains highly specialized (for now), but the knowledge to conduct attacks in this mold would be as deadly as WMD and easier to spread through our networked world. Electronic systems on aircraft are designed for safety, not security; they therefore do not account for malicious internal actions.

While this may seem the stuff of fiction, in January 2008 this broad topic was discussed due to the planned arrival of the Boeing 787, which is designed to be more ‘wired’, offering greater passenger connectivity. Air safety regulations have not been designed to accommodate the idea of an attack against on-board electronic systems, and the FAA proposed special conditions, which were subsequently commented upon by the Air Line Pilots Association and Airbus. There is some interesting back and forth in the proposed special conditions, which are, after all, only to apply to the Boeing 787. In one section, Airbus rightly pointed out that making it a safety condition that the internal design of civilian aircraft should ‘prevent all inadvertent or malicious changes to [the electronic system]’ would be impossible during the life cycle of the aircraft because ‘security threats evolve very rapidly’. Boeing responded to these reports in an AP article, stating that there were sufficient safeguards to shut out the Internet from internal aircraft systems, a conclusion the FAA broadly agreed with; Wired Magazine covered much of the ground. During the press coverage, the security writer Bruce Schneier commented that, “The odds of this being perfect are zero. It’s possible Boeing can make their connection to the Internet secure. If they do, it will be the first time in the history of mankind anyone’s done that.” Of course, securing the airborne aircraft isn’t the only concern when maintenance and diagnostic systems constantly refresh while the aircraft is on the ground. Malicious action could infect any part of this process. While a combination of factors probably led to the tragic loss of flight AF447, the current uncertainty serves to highlight a potential game-changing aviation security scenario that no airline or government is equipped to face.

Continue reading “Hack-Jet: Losing a commercial airliner in a networked world” »

May 30, 2009

Create an AI on Your Computer

Posted by in categories: complex systems, human trajectories, information science, neuroscience, robotics/AI, supercomputing

Singularity Hub


Written on May 28, 2009, 11:48 am, by Aaron Saenz

If many hands make light work, then maybe many computers can make an artificial brain. That’s the basic reasoning behind Intelligence Realm’s Artificial Intelligence project. By reverse engineering the brain through a simulation spread out over many different personal computers, Intelligence Realm hopes to create an AI from the ground up, one neuron at a time. The first waves of simulation are already proving successful, with over 14,000 computers used and 740 billion neurons modeled. Singularity Hub managed to snag the project’s leader, Ovidiu Anghelidi, for an interview: see the full text at the end of this article.

The ultimate goal of Intelligence Realm is to create an AI or multiple AIs, and use these intelligences in scientific endeavors. By focusing on the human brain as a prototype, they can create an intelligence that solves problems and “thinks” like a human. This is akin to the work done at FACETS that Singularity Hub highlighted some weeks ago. The largest difference between Intelligence Realm and FACETS is that Intelligence Realm is relying on a purely simulated/software approach.

Which sort of makes Intelligence Realm similar to the Blue Brain Project that Singularity Hub also discussed. Both are computer simulations of neurons in the brain, but Blue Brain’s ultimate goal is to better understand neurological functions, while Intelligence Realm is seeking to eventually create an AI. In either case, to successfully simulate the brain in software alone, you need a lot of computing power. Blue Brain runs off a high-tech supercomputer, a resource that’s pretty much exclusive to that project. Even with that impressive machine, Blue Brain is hitting the limit of what it can simulate. There’s too much to model for just one computer alone, no matter how powerful. Intelligence Realm is using a distributed computing solution. Where one computer cluster alone may fail, many working together may succeed. Which is why Intelligence Realm is looking for help.

The AI system project is actively recruiting, with more than 6,700 volunteers answering the call. Each volunteer runs a small portion of the larger simulation on their computer(s) and then ships the results back to the main server. BOINC, the Berkeley-built distributed computing software that makes it all possible, manages the flow of data back and forth. It’s the same software used for SETI@home’s distributed processing. Joining the project is pretty simple: you just download BOINC and some other data files, and you’re good to go. You can run the simulation as an application, or as part of your screen saver.
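The split-compute-merge pattern that BOINC manages for the project can be sketched in a few lines of Python. This is an illustrative toy, not the real BOINC API: `simulate_chunk`, `split`, and `run_distributed` are hypothetical names, and the per-neuron "update" is a trivial stand-in for the actual neural simulation.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(chunk):
    """Stand-in for the work a single volunteer machine does:
    advance each neuron's state by one (trivial) update step."""
    return [state + 1 for state in chunk]

def split(states, n_units):
    """Server side: divide the full neuron state vector into work units."""
    size = max(1, len(states) // n_units)
    return [states[i:i + size] for i in range(0, len(states), size)]

def run_distributed(states, n_volunteers=4):
    """Farm work units out to concurrent workers (playing the role of
    volunteer machines), then merge the returned results in order."""
    units = split(states, n_volunteers)
    with ThreadPoolExecutor(max_workers=n_volunteers) as pool:
        results = pool.map(simulate_chunk, units)
    return [state for chunk in results for state in chunk]

if __name__ == "__main__":
    neurons = [0] * 12          # a toy "brain" of 12 neurons
    print(run_distributed(neurons))
```

The real system adds what this sketch omits: redundant assignment of the same work unit to multiple volunteers to catch bad results, and checkpointing so a volunteer can disconnect mid-unit without losing the work.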

Continue reading “Create an AI on Your Computer” »

Apr 5, 2009

On Being Bitten to Death by Ducks

Posted by in categories: biological, complex systems, education, ethics, futurism, policy

(Crossposted on the blog of Starship Reckless)

Working feverishly on the bench, I’ve had little time to closely track the ongoing spat between Dawkins and Nisbet. Others have dissected this conflict and its ramifications in great detail. What I want to discuss is whether scientists can or should represent their fields to non-scientists.

There is more than a dollop of truth in the Hollywood cliché of the tongue-tied scientist. Nevertheless, scientists can explain at least their own domain of expertise just fine, even become major popular voices (Sagan, Hawking, Gould — and, yes, Dawkins; all white Anglo men, granted, but at least it means they have fewer gatekeepers questioning their legitimacy). Most scientists don’t speak up because they’re clocking infernally long hours doing first-hand science and/or training successors, rather than trying to become middle(wo)men for their disciplines.


Continue reading “On Being Bitten to Death by Ducks” »

Jan 2, 2008

The Enlightenment Strikes Back

Posted by in categories: complex systems, futurism, geopolitics, lifeboat, nanotechnology, open access, sustainability

In a recent conversation on our discussion list, Ben Goertzel, a rising star in artificial intelligence theory, expressed skepticism that we could keep a “modern large-scale capitalist representative democracy cum welfare state cum corporate oligopoly” going for much longer.

Indeed, our complex civilization currently does seem to be under a lot of stress.

Lifeboat Foundation Scientific Advisory Board member and best-selling author David Brin’s reply was quite interesting.

David writes:

Continue reading “The Enlightenment Strikes Back” »
