Archive for the ‘supercomputing’ category: Page 93

Dec 10, 2013

NASA’s Managerial and Leadership Methodology

Posted by in categories: big data, biological, bionic, bioprinting, biotech/medical, bitcoin, business, chemistry, complex systems, cyborgs, economics, education, energy, engineering, environmental, ethics, existential risks, finance, food, futurism, genetics, geopolitics, government, health, information science, life extension, military, philosophy, physics, robotics/AI, science, scientific freedom, security, singularity, space, supercomputing, sustainability, transhumanism, transparency, transportation

This is an excerpt from the conclusion section of “NASA’s Managerial and Leadership Methodology, Now Unveiled!” by Mr. Andres Agostini, which discusses some management theories and practices. To read the entire piece, just click the link at the end of this illustrated article and presentation:

In addition to staying aware of, adaptable to, and resilient against the driving forces reshaping the present and the near-term future, there are some additional management practices that I follow concurrently:

1. Given the vast number of insidious risks, futures, challenges, principles, processes, contents, practices, tools, techniques, benefits and opportunities, there needs to be a full-bodied, practical and applicable methodology (methodologies are utilized and implemented to solve complex problems and to facilitate the decision-making and anticipatory process).

The manager must always address issues with a Panoramic View and must also exercise the envisioning of both the Whole and the Granularity of Details, along with the embedded (corresponding) interrelationships and dynamics (that is, [i] interrelationships and dynamics of the subtle, [ii] interrelationships and dynamics of the overt and [iii] interrelationships and dynamics of the covert).

Continue reading “NASA's Managerial and Leadership Methodology” »

Dec 10, 2013

Futuretronium Book

Posted by in categories: bionic, bitcoin, business, complex systems, cyborgs, economics, education, energy, engineering, ethics, existential risks, finance, futurism, genetics, geopolitics, government, information science, nanotechnology, neuroscience, philosophy, physics, policy, posthumanism, science, security, singularity, supercomputing, sustainability, transhumanism, transportation

This is an excerpt from “Futuretronium Book” by Mr. Andres Agostini, which discusses some management theories and practices from a future-ready perspective. To read the entire piece, just click the link at the end of the article:

“…#1 Futuretronium ® and the administration and application of the scientific method, without innuendos and in crescendo, as fluid points of inflection ascertain that the morrow is a thing of the past…”

“…#2 Futuretronium ®, subsequently, there is now and here available the unabridged, authoritative elicitation and elucidation of actionable knowledge from and for the incessantly arrhythmic, abrupt, antagonistic, mordant, caustic, and anarchistic future, as well as the contentious interrelationship between such future and the present…”

“…#3 Futuretronium ®, a radical yet rigorous strong-sense and critico-creative «Futures Thinking», systems approach to the quintessential understanding of the complexities, subtleties, and intricacies, as well as the opportunities to be exploited out of the driving forces instilling and inflicting perpetual change into the twenty-first century…”

Read the full book at http://lnkd.in/ZxV3Sz to further explore these topics and experience future-ready management practices and theories.

Dec 7, 2013

Our Final Invention: How the Human Race Goes and Gets Itself Killed

Posted by in categories: complex systems, defense, ethics, evolution, existential risks, futurism, homo sapiens, human trajectories, posthumanism, robotics/AI, singularity, supercomputing

By Greg Scoblete — Real Clear Technology

We worry about robots.

Hardly a day goes by when we’re not reminded of how robots are taking our jobs and hollowing out the middle class. The worry is so acute that economists are busy devising new social contracts to cope with a potentially enormous class of obsolete humans.

Continue reading “Our Final Invention: How the Human Race Goes and Gets Itself Killed” »

Nov 14, 2013

The Disruptional Singularity

Posted by in categories: business, climatology, complex systems, cosmology, counterterrorism, cybercrime/malcode, defense, economics, education, engineering, ethics, existential risks, finance, futurism, nanotechnology, physics, policy, robotics/AI, science, singularity, supercomputing, sustainability, transparency

(Excerpt)

Beyond the managerial challenges (downside risks) presented by exponential technologies, as understood in the Technological Singularity and its inherent futuristic forces impacting the present and the future now, there are also some grave global risks that many forms of management must tackle immediately.

These grave global risks have nothing to do with advanced science or technology. Many of these hazards stem from nature, and some are man-made.

For instance, these grave global risks, which embody the Disruptional Singularity, are geological, climatological, political, geopolitical, demographic, social, economic, financial, legal and environmental, among others. The Disruptional Singularity’s major risks are gravely threatening us right now, not later.

Read the full document at http://lnkd.in/bYP2nDC

May 31, 2013

How Could WBE+AGI be Easier than AGI Alone?

Posted by in categories: complex systems, engineering, ethics, existential risks, futurism, military, neuroscience, singularity, supercomputing

This essay was also published by the Institute for Ethics & Emerging Technologies and by Transhumanity under the title “Is Price Performance the Wrong Measure for a Coming Intelligence Explosion?”.

Introduction

Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial General Intelligence or Whole-Brain Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure of an impending intelligence explosion (e.g., Kurzweil’s threshold is the point at which enough processing power to satisfy his estimate of the basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won’t be much of an explosion unless it is available to the average person. I present a scenario below which may indicate that the imminence of a coming intelligence explosion depends more on basic processing speed, i.e., instructions per second (IPS), regardless of cost or resource requirements per unit of computation, than on computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of “effort” or funding) to implement WBE+AGI than to implement AGI alone; that is, using WBE as a mediator of an increase in the rate of progress in AGI may yield an AGI faster, or more efficiently per unit of effort or funding, than implementing AGI directly.
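The contrast between the two metrics can be illustrated with a toy calculation (a minimal sketch only: the brain-scale IPS figure, the starting price performance, the doubling period and the budgets below are all hypothetical placeholders, not estimates from this essay):

```python
# Toy model contrasting two thresholds for an "imminent" intelligence
# explosion: computational price performance (Kurzweil-style: brain-scale
# compute affordable on a $1,000 budget) versus raw aggregate speed
# (any single actor's system reaching brain-scale IPS, whatever it costs).
# All constants are made-up placeholders for illustration.

BRAIN_IPS = 1e16  # assumed instructions/second to emulate a human brain

def year_threshold_met(ips_per_dollar_2013, budget,
                       doubling_years=1.5, start_year=2013):
    """First year in which `budget` buys brain-scale compute,
    given price performance doubling every `doubling_years`."""
    year, ipd = start_year, ips_per_dollar_2013
    while ipd * budget < BRAIN_IPS:
        year += doubling_years
        ipd *= 2
    return year

# A $1,000 consumer budget vs. a $1 billion national-project budget:
consumer = year_threshold_met(1e9, budget=1_000.0)
big_actor = year_threshold_met(1e9, budget=1_000_000_000.0)
print(consumer, big_actor)  # the well-funded actor crosses first
```

Under these made-up numbers, a billion-dollar actor reaches brain-scale raw compute about two decades before the $1,000 consumer threshold is met, which is the sense in which price performance may be the wrong gate for an intelligence explosion driven by a single well-funded project.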

Loaded Uploads:

Continue reading “How Could WBE+AGI be Easier than AGI Alone?” »

Mar 31, 2013

American Physical Society (APS) Conference in Denver

Posted by in categories: cosmology, defense, education, engineering, events, general relativity, nuclear energy, particle physics, philosophy, physics, policy, scientific freedom, space, supercomputing

The APS April Meeting 2013 (Vol. 58, #4) will be held Saturday–Tuesday, April 13–16, 2013, in Denver, Colorado.

I am very pleased to announce that my abstract was accepted and I will be presenting “Empirical Evidence Suggest A Need For A Different Gravitational Theory” at this prestigious conference.

For those of you who can make it to Denver, April 13–16, and are interested in alternative gravitational theories, let’s meet up.

I am especially interested in meeting physicists and engineers who have the funding to test the gravity modification technologies proposed in my book, An Introduction to Gravity Modification.

Continue reading “American Physical Society (APS) Conference in Denver” »

Mar 19, 2013

Ten Commandments of Space

Posted by in categories: asteroid/comet impacts, biological, biotech/medical, cosmology, defense, education, engineering, ethics, events, evolution, existential risks, futurism, geopolitics, habitats, homo sapiens, human trajectories, life extension, lifeboat, military, neuroscience, nuclear energy, nuclear weapons, particle physics, philosophy, physics, policy, robotics/AI, singularity, space, supercomputing, sustainability, transparency

1. Thou shalt first guard the Earth and preserve humanity.

Impact deflection and survival colonies hold the moral high ground above all other calls on public funds.

2. Thou shalt go into space with heavy lift rockets with hydrogen upper stages and not go extinct.

Continue reading “Ten Commandments of Space” »

Mar 4, 2013

Human Brain Mapping & Simulation Projects: America Wants Some, Too?

Posted by in categories: biological, biotech/medical, complex systems, ethics, existential risks, homo sapiens, neuroscience, philosophy, robotics/AI, singularity, supercomputing

The Brain Games Begin
Europe’s billion-euro neuroscience venture, the Human Brain Project, mentioned here last week amid the machine morality discussion, is basically already funded and well underway. Now the colonies over in the new world are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Neither the European nor the American flavor is a Manhattan Project-scale undertaking in the sense of urgency and motivational factors; they are more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion and $1–3 billion, respectively), they’re quite ambitious and potentially far more world-changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Continue reading “Human Brain Mapping & Simulation Projects: America Wants Some, Too?” »

Feb 8, 2013

Machine Morality: a Survey of Thought and a Hint of Harbinger

Posted by in categories: biological, biotech/medical, engineering, ethics, evolution, existential risks, futurism, homo sapiens, human trajectories, robotics/AI, singularity, supercomputing

The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.”
That’s tired and lame.

Continue reading “Machine Morality: a Survey of Thought and a Hint of Harbinger” »

Sep 6, 2012

GENCODE Apocalypse

Posted by in categories: biological, biotech/medical, business, chemistry, complex systems, counterterrorism, defense, ethics, events, evolution, existential risks, futurism, geopolitics, habitats, homo sapiens, human trajectories, life extension, lifeboat, media & arts, military, open source, policy, space, supercomputing, sustainability, transparency

http://www.sciencedaily.com/releases/2012/09/120905134912.htm

It is a race against time: will this knowledge save us or destroy us? Genetic modification may eventually reverse aging and bring about a new age, but it is more likely that the end of the world is coming.

The Fermi Paradox suggests that intelligent life may not be intelligent enough to keep from destroying itself. Nothing will destroy us faster or more certainly than an engineered pathogen (except possibly an asteroid or comet impact). The only answer to this threat is an off-world survival colony. Ceres would be perfect.

Page 93 of 94