
Hundreds of millions of years of evolution have produced a variety of life-forms, each intelligent in its own fashion. Each species has evolved to develop innate skills, learning capacities, and a physical form that ensures survival in its environment.

But despite being inspired by nature and evolution, the field of artificial intelligence has largely focused on creating the elements of intelligence separately and fusing them together afterward. While this approach has yielded great results, it has also left AI agents lacking some of the basic skills found in even the simplest life-forms.

In a new paper published in the scientific journal Nature, AI researchers at Stanford University present a new technique that can help take steps toward overcoming some of these limits. Called “deep evolutionary reinforcement learning,” or DERL, the new technique uses a complex virtual environment and reinforcement learning to create virtual agents that can evolve both in their physical structure and learning capacities. The findings can have important implications for the future of AI and robotics research.
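
To make the idea concrete, here is a minimal, hypothetical Python sketch of a DERL-style loop: an evolutionary outer loop searches over body plans, while an inner reinforcement-learning loop (stubbed out here as a toy fitness function) scores each one. The morphology encoding, function names, and fitness function are illustrative assumptions, not the paper’s actual implementation.

```python
# Hypothetical sketch of a DERL-style loop: an evolutionary outer loop
# searches over agent morphologies, while an inner reinforcement-learning
# loop trains a controller for each body. All names and the toy fitness
# function are illustrative; they are not the paper's implementation.
import random

def train_controller(morphology):
    """Stand-in for the inner RL loop: returns a fitness score.

    A real system would run reinforcement-learning training in a physics
    simulator; here fitness is just a toy function of the body plan."""
    limbs, limb_length = morphology
    # Toy assumption: moderate limb counts and lengths locomote best.
    return -abs(limbs - 4) - abs(limb_length - 0.5) + random.gauss(0, 0.1)

def mutate(morphology):
    """Randomly perturb the body plan (the 'evolution' step)."""
    limbs, limb_length = morphology
    if random.random() < 0.5:
        limbs = max(1, limbs + random.choice([-1, 1]))
    else:
        limb_length = max(0.1, limb_length + random.gauss(0, 0.05))
    return (limbs, limb_length)

# Start from a random population of (limb count, limb length) body plans.
population = [(random.randint(1, 8), random.uniform(0.1, 1.0)) for _ in range(16)]
for generation in range(20):
    scored = sorted(population, key=train_controller, reverse=True)
    survivors = scored[: len(scored) // 2]           # selection
    offspring = [mutate(random.choice(survivors))    # mutation
                 for _ in range(len(scored) - len(survivors))]
    population = survivors + offspring
print("Best morphology found:", max(population, key=train_controller))
```

In the actual system, each call to the inner training loop would be a full reinforcement-learning run in a simulated environment, which is what makes this class of methods so computationally demanding.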

Elon Musk has announced the upcoming release of Tesla’s Full Self-Driving Beta 10.4 update, even as the company slows down the rollout of 10.3.

Earlier this week, Tesla started rolling out Full Self-Driving Beta 10.3.

The update came after a false start last weekend, when Tesla pushed out a problematic build and ended up reverting to version 10.2.

Thankfully, there is a growing effort toward AI For Good.

This latest mantra entails ways to try to ensure that advances in AI are applied for the overall betterment of humankind. These are assuredly laudable endeavors, and it is crucial that the technology underlying AI be aimed and deployed in an appropriate and positive fashion (for my coverage of the burgeoning realm of AI Ethics, see the link here).

Unfortunately, whether we like it or not, there is the ugly side of the coin too, namely the despicable AI For Bad.

The final humorous argument I have is whether one of their examples is really a robot. Aylett and Vargas describe as a “robot” a humanoid machine that doesn’t manipulate anything; it just provides information at a shopping center. How does that fit their own definition of a robot? It sounds more like an overgrown tablet computer on wheels. Still, that’s a fun argument that has nothing to do with the business value of whatever you want to call it.

This is a review of the third book sent to me recently by MIT Press, and it is the best of the bunch. “Living With Robots,” by Ruth Aylett and Patricia A. Vargas, is a good, non-technical book that discusses a number of issues with robots in human society. It is excellent both for business managers and for those more generally interested in the promise and reality of robots in society.

One example of the accessibility of the material is in chapter 8, where there’s a discussion of reinforcement learning. There are good theoretical examples, along with a look at how reinforcement learning carries risks in the real world. I really liked the part where the authors discuss blending simulation and real-world testing.
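
To give a flavor of what that chapter is getting at, here is a minimal tabular Q-learning sketch, trained entirely in a toy simulator, illustrating the learn-in-simulation-before-reality workflow. The corridor world, parameters, and names are my own illustrative assumptions, not examples drawn from the book.

```python
# A minimal tabular Q-learning sketch: train in a cheap simulator first,
# because exploratory mistakes are costly on real hardware. The 1-D
# "corridor" world and all parameters are illustrative assumptions.
import random

N_STATES, GOAL = 6, 5              # corridor cells 0..5, goal at the end
ACTIONS = (-1, +1)                 # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def simulate(state, action):
    """Cheap simulator: a bad step only costs a small negative reward
    here, whereas on a real robot it could cause actual damage."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

for episode in range(500):         # learn entirely in simulation
    state, done = 0, False
    while not done:
        # Epsilon-greedy exploration: mostly exploit, sometimes explore.
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        nxt, reward, done = simulate(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt

# Deployment would then run the greedy policy on real hardware under close
# monitoring, since simulator and reality never match exactly.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
print("Greedy policy per state:", policy)
```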

Chapters on understanding location, movement, the sense of touch, and other issues help describe the complexity and difficulty of integrating robots into society.

Self-driving Roboats, developed at MIT, set sail in Amsterdam’s canals.

If you don’t get seasick, an autonomous boat might be the right mode of transportation for you.

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Senseable City Laboratory, together with Amsterdam Institute for Advanced Metropolitan Solutions (AMS Institute) in the Netherlands, have now created the final project in their self-navigating trilogy: a full-scale, fully autonomous robotic boat that’s ready to be deployed along the canals of Amsterdam.

Tokyo space startup Gitai Japan successfully conducted a technology demonstration of its autonomous robotic arm inside the International Space Station last week, a key milestone as the company prepares to provide robotics as a service in space.

The S1 robotic arm performed two tasks: operating cables and switches, and assembling structures and panels. These tasks, common crew activities, are general-purpose enough to apply to a wide range of in-space work. The successful demo raised what NASA calls the “technology readiness level” of the Gitai robot to TRL 7. There are nine TRLs in total, and hitting all of them will be critical for Gitai to commercialize its robots.

The demonstration was performed inside space company Nanoracks’ Bishop Airlock, the world’s first (and only) commercial airlock attached to the exterior of the station. Nanoracks, which announced plans last week to launch a fully private commercial space station with Voyager Space and Lockheed Martin, also furnished on-orbit operations, data downlink, and the launch opportunity.