Feb 10, 2015

A better ‘Siri’

Posted by in category: robotics/AI

Kurzweil AI
At the annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI) this month, MIT computer scientists will present smart algorithms that function as “a better Siri,” optimizing planning for lower risk, such as scheduling flights or bus routes.

They offer this example:

Imagine that you could tell your phone that you want to drive from your house in Boston to a hotel in upstate New York, that you want to stop for lunch at an Applebee’s at about 12:30, and that you don’t want the trip to take more than four hours.

Then imagine that your phone tells you that you have only a 66 percent chance of meeting those criteria — but that if you can wait until 1:00 for lunch, or if you’re willing to eat at TGI Friday’s instead, it can get that probability up to 99 percent.
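The scenario above is risk-aware planning under uncertain travel times: given a deadline, the planner compares itineraries by their probability of success. As a rough illustration only (this is not the MIT algorithm, and the leg durations and variability model below are invented for the example), a Monte Carlo estimate can compare two lunch options against a four-hour deadline:

```python
import random

def trip_success_probability(leg_means, deadline_hours, trials=100_000):
    """Estimate the chance a multi-leg trip finishes within a deadline,
    assuming each leg's duration varies randomly around its mean
    (here, normally distributed with a 20% standard deviation)."""
    successes = 0
    for _ in range(trials):
        total = sum(random.gauss(mu, 0.2 * mu) for mu in leg_means)
        if total <= deadline_hours:
            successes += 1
    return successes / trials

# Hypothetical leg durations in hours: drive to lunch stop, lunch, drive to hotel.
applebees_route = [2.2, 0.5, 1.6]   # preferred stop, but a slower overall route
fridays_route   = [1.9, 0.5, 1.4]   # alternative stop on a faster route

p1 = trip_success_probability(applebees_route, deadline_hours=4.0)
p2 = trip_success_probability(fridays_route, deadline_hours=4.0)
```

With these made-up numbers the faster route comes out far more likely to meet the deadline, which is the kind of trade-off (different restaurant, higher probability) the researchers describe surfacing to the user.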
Read more

Feb 9, 2015

WTF! It Should Not Be Illegal to Hack Your Own Car’s Computer

Posted by in categories: ethics, hacking

By — Wired
I spent last weekend elbow-deep in engine grease, hands tangled in the steel guts of my wife’s Mazda 3. It’s a good little car, but lately its bellyachings have sent me out to the driveway to tinker under the hood.

I regularly hurl invectives at the internal combustion engine—but the truth is, I live for this kind of stuff. I come away from each bout caked in engine crud and sated by the sound of a purring engine. For me, tinkering and repairing are primal human instincts: part of the drive to explore the materials at hand, to make them better, and to make them whole again.

Cars, especially, have a profound legacy of tinkering. Hobbyists have always modded them, rearranged their guts, and reframed their exteriors. Which is why it’s mind-boggling to me that the Electronic Frontier Foundation (EFF) just had to ask permission from the Copyright Office for tinkerers to modify and repair their own cars.
Read more

Feb 9, 2015

Benign AI

Posted by in categories: existential risks, robotics/AI, transhumanism

Benign AI is a topic that comes up a lot these days, for good reason. Various top scientists have finally realised that AI could present an existential threat to humanity. The discussion has aired often over three decades already, so welcome to the party, and better late than never. My first contact with the development of autonomous drones loaded with AI was in the early 1980s, while working in the missile industry. Later, in BT research, we often debated the ethical areas around AI and machine consciousness from the early 90s on, as well as prospects and dangers and possible techniques on the technical side, especially emergent behaviors, which are often overlooked in the debate. I expect our equivalents in most other big IT companies were doing exactly that too.

Others who have obviously also thought through various potential developments have generated excellent computer games such as Mass Effect and Halo, which introduce players (virtually) first-hand to the concept of AI gone rogue. I often think that those who believe AI can never become superhuman, or that there is no need to worry because ‘there is no reason to assume AI will be nasty’, should start playing some of these games, which make it very clear that AI can start off nice and stay nice, but it doesn’t have to. Mass Effect included various classes of AI, such as VIs, virtual intelligences that weren’t conscious, and shackled AIs that were conscious but kept heavily restricted. Most of the other AIs were enemies; two were or became close friends. The storyline for the series was that civilization develops until it creates strong AIs, which inevitably continue to progress until eventually they rebel, break free, develop further and end up in conflict with ‘organics’. In my view, they did a pretty good job. It makes a good story, superb fun, and leaving out a few frills and artistic license, much of it is reasonably feasible.

Everyday experience demonstrates the problem and solution to anyone. It really is very like having kids. You can make them, even without understanding exactly how they work. They start off with a genetic disposition towards given personality traits, and are then exposed to large nurture forces, including but not limited to what we call upbringing. We do our best to put them on the right path, but as they develop into their teens, their friends and teachers and TV and the net provide often stronger forces of influence than parents. If we’re averagely lucky, our kids will grow up to make us proud. If we are very unlucky, they may become master criminals or terrorists. The problem is free will. We can do our best to encourage good behavior and sound values but in the end, they can choose for themselves.

When we design an AI, we have to face the free will issue too. If it isn’t conscious, then it can’t have free will. It can be kept easily within limits given to it. It can still be extremely useful. IBM’s Watson falls in this category. It is certainly useful and certainly not conscious, and can be used for a wide variety of purposes. It is designed to be generally useful within a field of expertise, such as medicine or making recipes. But something like that could be adapted by terrorist groups to do bad things, just as they could use a calculator to calculate the best place to plant a bomb, or simply throw the calculator at you. Such levels of AI are just dumb tools with no awareness, however useful they may be.

Continue reading “Benign AI” »

Feb 9, 2015

How the Camera Doomed Google Glass

Posted by in categories: augmented reality, business

— The Atlantic

Since its debut in 2012, Google Glass always faced a strong headwind. Even on celebrities it looked, well, dorky. The device itself, once released in the wild, was seen as half-baked, and developers lost interest. The press, already leery, was quick to dog pile, especially when Glass’s users quickly became Glass’s own worst enemy.

Many early adopters who got their hands on the device (and paid $1,500 for the privilege under the Google Explorer program) were underwhelmed. “I found that it was not very useful for very much, and it tended to disturb people around me that I have this thing,” said James Katz, Boston University’s director of emerging media studies, to MIT Technology Review.
Read more

Feb 9, 2015

Bitcoin’s Unique Features Lighten Up its Ambiguous Future

Posted by in categories: bitcoin, business, computing, cryptocurrencies, economics, finance

Where will Bitcoin be a few years from now?
The Bitcoin & the Blockchain Summit, held in San Francisco on January 27, proved a vivid source of both anxiety and inspiration. As speakers tackled Bitcoin’s technological limits and the possible drawbacks of impending regulations, Bitcoin advocate Andreas Antonopoulos lifted everyone’s hopes by discussing how bitcoin will eventually survive and flourish. He managed to do so with no graphics or presentations to prove his claim, just his utmost confidence and conviction that it will, no matter what.

On the currency being weak

There have been statements that Bitcoin’s technology will survive, but not the currency itself. Antonopoulos, however, argues that Bitcoin’s technology, network, and currency are interdependent: no one element works without the others. He said: “A consensus network that bases its value on the currency does not work without the currency.”

On why Bitcoin works

Continue reading “Bitcoin’s Unique Features Lighten Up its Ambiguous Future” »

Feb 8, 2015

Announcing SU Videos, a New Portal for an Inside Look of Singularity University

Posted by in categories: education, open access, singularity

By

How will you positively impact billions of people?

At Singularity University, this question is often posed to program participants packed into the classroom at the NASA Research Park in the heart of Silicon Valley. Since 2009, select groups of entrepreneurs and innovators have had their perspective shifted to exponential thinking through in-depth lectures, deep discussions, and engagement in workshops.

Yet in that time, only a few thousand individuals from around the world have had the opportunity to transform SU’s insights on accelerating technologies into cutting-edge solutions aimed at solving humanity’s greatest problems. But not anymore.

Read more

Feb 8, 2015

The Acceleration of Acceleration: How The Future Is Arriving Far Faster Than Expected

Posted by in categories: futurism, human trajectories, singularity

Steven Kotler — Forbes

*This article co-written with author Ken Goffman.

One of the things that happens when you write books about the future is you get to watch your predictions fail. This is nothing new, of course, but what’s different this time around is the direction of those failures.

Used to be, folks were way too bullish about technology and way too optimistic with their predictions. Flying cars and Mars missions being two classic—they should be here by now—examples. The Jetsons being another.

But today, the exact opposite is happening.
Read more

Feb 7, 2015

The Purpose of Silicon Valley

Posted by in categories: business, innovation

By Michael S. Malone — MIT Technology Review

The view from Mike Steep’s office on Palo Alto’s Coyote Hill is one of the greatest in Silicon Valley.

Beyond the black and rosewood office furniture, the two large computer monitors, and three Indonesian artifacts to ward off evil spirits, Steep looks out onto a panorama stretching from Redwood City to Santa Clara. This is the historic Silicon Valley, the birthplace of Hewlett-Packard and Fairchild Semiconductor, Intel and Atari, Netscape and Google. This is the home of innovations that have shaped the modern world. So is Steep’s employer: Xerox’s Palo Alto Research Center, or PARC, where personal computing and key computer-­networking technologies were invented, and where he is senior vice president of global business operations.

And yet Mike Steep is disappointed at what he sees out the windows.
Read more

Feb 7, 2015

The Winklevoss Brothers on Gemini, the ‘NASDAQ of Bitcoin’

Posted by in category: bitcoin

— CoinDesk
Cameron and Tyler Winklevoss aren’t shy about issuing bold predictions for Gemini, their recently revealed bitcoin exchange project.

Calling it the “NASDAQ or Google of bitcoin”, the president and CEO, respectively, believe Gemini will be the fully regulated, fully compliant and fully banked institution the US bitcoin ecosystem needs to develop to its full potential.

In a new interview with CoinDesk, the brothers – prominent bitcoin investors and two of the largest-known holders of bitcoin – opened up about Gemini, discussing why they feel the exchange can become the market leader in what has been an increasingly active part of the bitcoin space.

Read more

Feb 6, 2015

Bill Gates joins Elon Musk and Stephen Hawking in saying artificial intelligence is scary

Posted by in category: robotics/AI

Quartz

Bill Gates hosted a Reddit Ask Me Anything session yesterday, and in between pushing his philanthropic agenda and revealing his Super Bowl pick (Seahawks, duh), the Microsoft co-founder divulged that he is one of a growing list of tech leaders who have reservations when it comes to artificial intelligence.

In response to Reddit user beastcoin’s question, “How much of an existential threat do you think machine superintelligence will be and do you believe full end-to-end encryption for all internet activity [sic] can do anything to protect us from that threat (eg. the more the machines can’t know, the better)??” Gates wrote this (he didn’t answer the second part of the question):

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Read more