Archive for the ‘ethics’ category: Page 60

Jan 24, 2016

‘The Terminator Conundrum’: Pentagon Weighs Ethics of Pairing Deadly Force, AI

Posted by in categories: engineering, ethics, military, neuroscience, robotics/AI

DoD is spending $12 to $15 billion of its FY17 budget on small bets that include next-generation tech improvements. Given DARPA's new Neural Engineering System Design (NESD) program, we may finally see a brain-machine interface (BMI) soldier in the future.


The Defense Department will invest the $12 billion to $15 billion from its Fiscal Year 2017 budget slotted for developing a Third Offset Strategy on several relatively small bets, hoping to produce game-changing technology, the vice chairman of the Joint Chiefs of Staff said.

Read more

Jan 19, 2016

God Does Not Deserve Admiration

Posted by in category: ethics

God does not exist. However, let’s grant for a moment that God is real. Religious texts and practices show that God is wicked, cruel, and immoral, and totally unworthy of affection by moral human beings.

For the sake of brevity, we’ll exclusively consider the God of the New Testament, and ignore the God of the Old Testament, Koran, and other books. This God is often portrayed as hip, cool, and loving. If we dig deeper into some of the basic tenets of Christianity held by mainstream Protestant, Catholic, and Orthodox churches, we’ll see that it’s an elaborate smoke screen. The God of the New Testament is a beast.

The Problem of Evil

Continue reading “God Does Not Deserve Admiration” »

Jan 17, 2016

Machine learning’s hand in touch-less, straight-through processing and beyond

Posted by in categories: employment, ethics, robotics/AI

AI can easily replace much of back-office operations, and some front-office work, over time. As a result, governments and companies will need a massive, jointly run social-support and displacement program to re-school and re-tool workers, and to support those workers and their families financially until they are retrained for an existing job or one of the new careers that AI creates. A social obligation will fall back on companies at a scale we have never seen before. With power and wealth, there truly comes a level of moral responsibility imposed by society.


Tradeshift CloudScan uses machine learning to create automatic mappings from image files and PDFs into a structured format such as UBL.
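The kind of mapping CloudScan performs, from unstructured invoice text to fields in a structured format like UBL, can be illustrated with a toy sketch. CloudScan learns these mappings with machine learning; the function below is a hypothetical rule-based stand-in that uses regular expressions instead of a trained model, and the field names and sample text are invented for illustration:

```python
import re

def extract_invoice_fields(text: str) -> dict:
    """Map raw invoice text to a structured record (a stand-in for UBL).

    A real system like CloudScan learns these mappings from data;
    here simple regular expressions act as a placeholder.
    """
    patterns = {
        "invoice_id": r"Invoice\s*#?\s*(\w+)",
        "date": r"Date:\s*(\d{4}-\d{2}-\d{2})",
        "total": r"Total:\s*\$?([\d.]+)",
    }
    record = {}
    for field, pattern in patterns.items():
        match = re.search(pattern, text)
        # Missing fields stay None, so downstream code can flag them.
        record[field] = match.group(1) if match else None
    return record

sample = "Invoice #A1001\nDate: 2016-01-17\nTotal: $250.00"
print(extract_invoice_fields(sample))
```

The appeal of a learned model over rules like these is that it generalizes across the thousands of invoice layouts a rule set could never enumerate.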

Read more

Jan 17, 2016

DIY gene-editing kit: Is it fun or scary?

Posted by in categories: biotech/medical, ethics, food, genetics, habitats

Although the recent article announcing California scientist Josiah Zayner's new $120 do-it-yourself gene-editing kit sent shock waves across the industry, and further raised the question of how best to put controls in place to ensure ethics and prevent disaster, this genie truly is out of the bottle. Because Josiah created the kit easily in his own kitchen, it can be replicated by many others in their own homes. What we have to decide is how best to mitigate its impact. Black markets and exotic-animal collectors around the world will pay handsomely for this capability, raising the stakes as ever more bizarre animals (deadly and non-deadly) are created for their profit and amusement.


BURLINGAME, Calif. — On the kitchen table of his cramped apartment, Josiah Zayner is performing the feat that is transforming biology. In tiny vials, he’s cutting, pasting and stirring genes, as simply as mixing a vodka tonic. Next, he slides his new hybrid creations, living in petri dishes, onto a refrigerator shelf next to the vegetables. And he’s packaging and selling his DIY gene-editing technique for $120 so that everyone else can do it, too.

Read more

Dec 21, 2015

Inside OpenAI: Will Transparency Protect Us From Artificial Intelligence Run Amok?

Posted by in categories: Elon Musk, ethics, finance, robotics/AI

Last Friday at the Neural Information Processing Systems conference in Montreal, Canada, a team of artificial intelligence luminaries announced OpenAI, a non-profit company set to change the world of machine learning.

Backed by Tesla and SpaceX’s Elon Musk and Y Combinator’s Sam Altman, OpenAI has a hefty budget and even heftier goals. With a billion dollars in initial funding, OpenAI eschews the need for financial gains, allowing it to place itself on sky-high moral grounds.

By not having to answer to industry or academia, OpenAI hopes to focus not just on developing digital intelligence, but also to guide research along an ethical route that, according to their inaugural blog post, “benefits humanity as a whole.”

Read more

Dec 17, 2015

Ethics on the near-future battlefield

Posted by in categories: bioengineering, biotech/medical, cyborgs, ethics, food, genetics, military, neuroscience, robotics/AI

The US Army’s report envisions augmented soldiers and killer robots.


The US Army’s recent report “Visualizing the Tactical Ground Battlefield in the Year 2050” describes a number of future war scenarios that raise vexing ethical dilemmas. Among the many tactical developments envisioned by the authors, a group of experts brought together by the US Army Research laboratory, three stand out as both plausible and fraught with moral challenges: augmented humans, directed-energy weapons, and autonomous killer robots. The first two technologies affect humans directly, and therefore present both military and medical ethical challenges. The third development, robots, would replace humans, and thus poses hard questions about implementing the law of war without any attending sense of justice.

Augmented humans. Drugs, brain-machine interfaces, neural prostheses, and genetic engineering are all technologies that may be used in the next few decades to enhance the fighting capability of soldiers, keep them alert, help them survive longer on less food, alleviate pain, and sharpen and strengthen their cognitive and physical capabilities. All raise serious ethical and bioethical difficulties.

Continue reading “Ethics on the near-future battlefield” »

Dec 16, 2015

Russia, China Building ‘Robot’ Army

Posted by in categories: business, ethics, military, robotics/AI, security

Despite more than a thousand artificial-intelligence researchers signing an open letter this summer in an effort to ban autonomous weapons, Business Insider reports that China and Russia are in the process of creating self-sufficient killer robots, which in turn is putting pressure on the Pentagon to keep up.

“We know that China is already investing heavily in robotics and autonomy and the Russian Chief of General Staff [Valery Vasilevich] Gerasimov recently said that the Russian military is preparing to fight on a roboticized battlefield,” U.S. Deputy Secretary of Defense Robert Work said during a national security forum on Monday.

Work added, “[Gerasimov] said, and I quote, ‘In the near future, it is possible that a complete roboticized unit will be created capable of independently conducting military operations.’”

Read more

Dec 15, 2015

When Will We Look Robots in the Eye?

Posted by in categories: ethics, human trajectories, robotics/AI

In the various incarnations of Douglas Adams’ Hitchhiker’s Guide To The Galaxy, a sentient robot named Marvin the Paranoid Android serves on the starship Heart of Gold. Because he is never assigned tasks that challenge his massive intellect, Marvin is horribly depressed, always quite bored, and a burden to the humans and aliens around him. But he does write nice lullabies.

While Marvin is a fictional robot, scholar and author David Gunkel predicts that sentient robots will soon be a fact of life and that mankind needs to start thinking about how we’ll treat such machines, at present and in the future.

For Gunkel, the question is about moral standing and how we decide if something does or does not have moral standing. As an example, Gunkel notes our children have moral standing, while a rock or our smartphone may not have moral consideration. From there, he said, the question becomes, where and how do we draw the line to decide who is inside and who is outside the moral community?

“Traditionally, the qualities for moral standing are things like rationality, sentience (and) the ability to use languages. Every entity that has these properties generally falls into the community of moral subjects,” Gunkel said. “The problem, over time, is that these properties have changed. They have not been consistent.”

Continue reading “When Will We Look Robots in the Eye?” »

Dec 14, 2015

Why Infosys is joining Elon Musk, Y Combinator and others in pledging $1 billion for OpenAI — By Harshith Mallya | YourStory

Posted by in categories: education, ethics, open source, robotics/AI

“Our trust in complex systems stems mostly from understanding their predictability, whether it is nuclear reactors, lathe machines, or 18-wheelers; or of course, AI. If complex systems are not open to be used, extended, and learned about, they end up becoming yet another mysterious thing for us, ones that we end up praying to and mythifying. The more open we make AI, the better.”

Read more

Dec 7, 2015

Can The Existential Risk Of Artificial Intelligence Be Mitigated?

Posted by in categories: ethics, existential risks, futurism, government, human trajectories, robotics/AI

It seems like every day we’re warned about a new, AI-related threat that could ultimately bring about the end of humanity. According to author and Oxford professor Nick Bostrom, those existential risks aren’t so black and white, and an individual’s ability to influence those risks might surprise you.

Image Credit: TED

Bostrom defines an existential risk as one that threatens the extinction of Earth-originating life or the permanent and drastic destruction of its potential for future development, but he also notes that there is no single methodology that is applicable to all the different existential risks (as more technically elaborated upon in this Future of Humanity Institute study). Rather, he considers it an interdisciplinary endeavor.

“If you’re wondering about asteroids, we have telescopes we can study them with; we can look at past crater impacts and derive hard statistical data on that,” he said. “We find that the risk of asteroids is extremely small, and likewise for a few of the other risks that arise from nature. But other really big existential risks are not in any direct way susceptible to this kind of rigorous quantification.”

Continue reading “Can The Existential Risk Of Artificial Intelligence Be Mitigated?” »

Page 60 of 82