Playing piano with a mind-controlled robotic arm

The arm/hand is probably intended for the ATLAS robot. I’d be curious whether they are already experimenting with attaching it to the robot.


The first person to live with a mind-controlled robotic arm is teaching himself piano. Johnny Matheny has spent the last five months with an advanced prosthetic, designed to replace the human hand and arm.

The robot arm is part of a research project run through the Johns Hopkins University Applied Physics Lab, and funded, in part, by the US Department of Defense. Data that researchers collect could revolutionize future mind-controlled robotics.

This is the second video in a series following Johnny as he spends the year with the arm.

Watch the arm being delivered: https://youtu.be/xKUn0-Bhb7U

‘Til Deletion Do Us Part’: Discovering Love in a Virtual Future

What does it mean to fall in love in the 21st century? Historically, the pool of people you could fall in love with was limited to those who lived in relatively close proximity to you (a few miles, at best). In today’s world, however, it isn’t uncommon for people to fall in love online.


As we move forward into a future of VR and AI, how might our abilities to fall in love change in a world where non-biological life is teeming just as much as biological life?

What Is Cognitive Computing (How AI Will Think)

Recommended Books ➤

📖 Life 3.0 — https://amzn.to/2KZdRU0
📖 The Master Algorithm — https://amzn.to/2jV1egi
📖 Superintelligence — https://amzn.to/2rCXzqQ

This video is the eleventh in a multi-part series on computing. In it, we’ll discuss what cognitive computing is and the impact it will have on the field.

[0:28–5:09] To start, we’ll discuss what cognitive computing is – more specifically, the difference between today’s von Neumann computing architecture and the more biologically inspired neuromorphic architecture, and how pairing the two will yield massive gains in performance and efficiency (a short code sketch of the spiking style of computation follows these timestamps).

[5:09–10:46] Following that, we’ll discuss the benefits of cognitive computing systems further, as well as current cognitive computing initiatives: IBM’s TrueNorth and Intel’s Loihi.

[10:46–17:11] To conclude, we’ll extrapolate and discuss the future of cognitive computing in terms of brain simulation, artificial intelligence, and brain-computer interfaces!
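The von Neumann vs. neuromorphic contrast above comes down to how computation is organized: conventional machines shuttle data between a separate processor and memory, while neuromorphic chips such as TrueNorth and Loihi spread memory and compute across large numbers of simple, event-driven “neurons.” As a rough, purely illustrative sketch of that spiking style of computation (the threshold and leak values below are made up, not taken from either chip), here is a minimal leaky integrate-and-fire neuron in Python:

```python
# Minimal leaky integrate-and-fire (LIF) neuron -- the kind of event-driven
# unit neuromorphic hardware implements in silicon. Illustrative only;
# the threshold and leak values are arbitrary.
def lif_spikes(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in        # membrane potential leaks, then integrates input
        if v >= threshold:         # crossing the threshold emits a spike...
            spikes.append(t)
            v = 0.0                # ...and resets the potential
    return spikes

print(lif_spikes([0.3] * 20))      # a steady input produces periodic spikes
```

The key difference from a clocked von Neumann program is that downstream “neurons” only do work when a spike arrives, which is where the efficiency gains described in the video are meant to come from.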

Life lessons from artificial intelligence: What Microsoft’s AI chief wants computer science grads to know about the future

Artificial intelligence has exploded, and perhaps no one knows it better than Harry Shum, the executive vice president in charge of Microsoft’s AI and Research Group, which has been at the center of a major technological shift inside the company.

Delivering the commencement address Friday at the University of Washington’s Paul G. Allen School of Computer Science and Engineering, Shum drew inspiration from three emerging technologies — quantum computing, AI, and mixed reality — to deliver life lessons and point toward the future of technology for the class of 2018.

IBM and the Department of Energy show off the world’s fastest supercomputer, Summit

IBM and the Department of Energy’s Oak Ridge National Laboratory have revealed the world’s “most powerful and smartest scientific supercomputer.” IBM says the new machine, known as Summit, is capable of processing 200 quadrillion calculations per second (200 petaflops). To put that into perspective, if every person on Earth did a single calculation per second, it would take 305 days to do what Summit does in a single second. Assuming those numbers are accurate, that would make Summit the world’s fastest supercomputer. It would also mark the first time since 2012 that a U.S. computer has held that title.
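As a quick sanity check of that comparison (the world-population figure below is an assumption, roughly the 2018 value, and is not stated in the article):

```python
# Back-of-envelope check of the "305 days" comparison above.
summit_ops_per_second = 200e15   # ~200 quadrillion calculations per second
world_population = 7.6e9         # assumption: ~7.6 billion people, one calculation each per second
seconds_needed = summit_ops_per_second / world_population
print(seconds_needed / 86_400)   # ~305 days, matching the figure quoted above
```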

Summit has been in the works for several years and features some truly impressive specs. According to TechCrunch, the computer comprises 4,608 compute servers, each with two 22-core IBM Power9 chips and six Nvidia Tesla V100 GPUs. In addition, the machine features more than 10 petabytes of memory. As the Nvidia GPUs suggest, the machine will be used primarily for the development of artificial intelligence and machine learning. Beyond AI, Summit will also be used for research into energy and other scientific endeavors at Oak Ridge.
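Those per-node figures can be rolled up into system-wide totals. The per-GPU throughput used below (about 7.8 teraflops of double-precision performance for a Tesla V100) is an assumption drawn from Nvidia’s published specs, not from the article, so treat the result as a rough estimate:

```python
# Rough aggregate of the node specs quoted above.
nodes         = 4_608
gpus_per_node = 6                   # Nvidia Tesla V100
cpus_per_node = 2                   # 22-core IBM Power9

total_gpus = nodes * gpus_per_node  # 27,648 GPUs
total_cpus = nodes * cpus_per_node  # 9,216 CPUs

# Assumption: ~7.8 teraflops FP64 per V100; the GPU-only peak lands near
# the ~200 petaflops headline figure.
peak_petaflops = total_gpus * 7.8e12 / 1e15
print(total_gpus, total_cpus, round(peak_petaflops))   # 27648 9216 216
```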

IBM was the Department of Energy’s general contractor for the Summit project, but it also had the help of several other partners in the tech industry. The GPUs were provided by Nvidia, which remains one of the leaders in cutting-edge GPU development. Mellanox and Red Hat were also brought on to help develop Summit.

MIT fed an AI data from Reddit, and now it thinks of nothing but murder

The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. The team wisely didn’t speculate about whether exposure to graphic content changes the way a human thinks. They’ve done other experiments in the same vein, too, using AI to write horror stories, create terrifying images, judge moral decisions, and even induce empathy. This kind of research is important. We should be asking the same questions of artificial intelligence that we ask of any other technology, because it is far too easy for unintended consequences to hurt the people the system wasn’t designed to see. Naturally, this is the basis of sci-fi: imagining possible futures and showing what could lead us there. Isaac Asimov wrote the “Three Laws of Robotics” because he wanted to imagine what might happen if they were contravened.
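As a toy illustration of how training data alone can flip a model’s behavior (this is not MIT’s actual Norman setup, which involved an image-captioning model fed content from a disturbing subreddit; the corpora and caption model below are invented for the example):

```python
# The same trivial "caption model" -- just word frequencies -- sounds benign
# or grim depending purely on the corpus it was fit to.
from collections import Counter
import random

def train_unigram(corpus):
    """Fit a unigram model: count word frequencies across the corpus."""
    return Counter(" ".join(corpus).split())

def caption(model, length=6, seed=0):
    """Sample words in proportion to their training frequency."""
    rng = random.Random(seed)
    vocab, weights = zip(*model.items())
    return " ".join(rng.choices(vocab, weights=weights, k=length))

neutral_corpus = ["a bird sitting on a tree branch",
                  "a group of people flying kites in a park"]
dark_corpus    = ["a man is shot dead in the street",
                  "a man falls to his death from a building"]

print(caption(train_unigram(neutral_corpus)))   # benign-sounding output
print(caption(train_unigram(dark_corpus)))      # grim output, same algorithm
```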

Even though artificial intelligence isn’t a new field, we’re a long, long way from producing something that, as Gideon Lewis-Kraus wrote in The New York Times Magazine, can “demonstrate a facility with the implicit, the interpretive.” The field also hasn’t undergone the kind of reckoning that forces a discipline to grow up. Physics, you recall, gave us the atom bomb, and every person who becomes a physicist knows they might be called on to help create something that could fundamentally alter the world. Computer scientists are beginning to realize this, too. At Google this year, 5,000 employees protested and a host of employees resigned over the company’s involvement with Project Maven, a Pentagon initiative that uses machine learning to improve the accuracy of drone strikes.

Norman is just a thought experiment, but the questions it raises about machine learning algorithms making judgments and decisions based on biased data are urgent and necessary. Those systems, for example, are already used in credit underwriting, deciding whether or not loans are worth guaranteeing. What if an algorithm decides you shouldn’t buy a house or a car? To whom do you appeal? What if you’re not white and a piece of software predicts you’ll commit a crime because of that? There are many, many open questions. Norman’s role is to help us figure out their answers.
