
The nascent autonomous-vehicle industry is being reshaped by consolidation. Amazon, which committed to buying 100,000 Rivian electric vehicles, announced today that it is buying Zoox, the self-driving car tech start-up, for $1 billion. Ford and Volkswagen made multi-billion dollar investments in Argo. General Motors purchased Cruise Automation in 2016, while Hyundai is working with tier-one supplier Aptiv to deploy a robotaxi service in multiple global markets.

The tie-up between Waymo and Volvo (with its three brands all aggressively pursuing electric vehicles) could reshape the competitive landscape, although it’s too early to tell.

Google started its self-driving program more than a decade ago but paused development of its own vehicle in 2016. A tight partnership between Waymo and Volvo to develop cars designed from the ground up for autonomy, if that’s what materializes, could put those plans back on track – this time with an established automaker known for high-quality production and safety.

Smartphone apps provide nearly instantaneous navigation on Earth; the Deep Space Atomic Clock could do the same for future robotic and human explorers.

As NASA’s return of humans to the Moon draws closer, crewed trips to Mars are an enticing next step. But future space explorers will need new tools when traveling to such distant destinations. The Deep Space Atomic Clock mission is testing a new navigation technology that could be used by both human and robotic explorers making their way around the Red Planet and other deep space destinations.

In less than a year of operations, the mission has surpassed its primary goal, becoming one of the most stable clocks ever flown in space; it is now at least 10 times more stable than the atomic clocks flown on GPS satellites. To keep testing the system, NASA has extended the mission through August 2021. The team will use the additional time to further improve the clock’s stability, with the goal of making it 50 times more stable than GPS atomic clocks.

Audio of the fascinating talks and panel at the Future Day Melbourne 2020 / Machine Understanding event:

Kevin Korb — https://archive.org/searchresults.php
John Wilkins — https://archive.org/details/john-wilkins-humans-as-machines (John, sorry about the audio — also do you have the slides for this?)
Hugo de Garis — https://archive.org/details/hugo-de-garis-future-day-2020
Panel — https://archive.org/…/future-day-panel-kevin-korb-hugo-de-g…

The video will be uploaded at a later date.


There is much public concern nowadays about when an AGI (Artificial General Intelligence) might appear and what it might do. The expert community is less concerned, because they know we’re still a long way off. More fundamentally, though, we’re a long way off from an API (Artificial Primitive Intelligence). In fact, we have no idea what an API might even look like. AI took off without ever reflecting seriously on what intelligence, whether natural (NI) or artificial (AI), really is. So it has been streaking along in myriad directions without any goal in sight.



Jess Whittlestone at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week arguing that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call ethics for urgency.

For Whittlestone, this means anticipating problems before they happen, finding better ways to build safety and reliability into AI systems, and emphasizing technical expertise at all levels of the technology’s development and use. At the core of these recommendations is the idea that ethics needs to become simply a part of how AI is made and used, rather than an add-on or afterthought.

Ultimately, AI will be quicker to deploy when needed if it is made with ethics built in, she argues. I asked her to talk me through what this means.

Tesla’s Full Self-Driving suite continues to improve with a recent video showing a Model 3 safely shifting away from a makeshift lane of construction cones while using Navigate on Autopilot.

Tesla owner-enthusiast Jeremy Greenlee was traveling through a highway construction zone in his Model 3. The zone contained a makeshift lane to the vehicle’s left that was made up of construction cones.

To avoid any possibility of a collision, the vehicle used the driver-assist system to shift one lane to the right automatically. The maneuver steered the car clear of the dense row of construction cones to its left, contact with which could have caused hundreds of dollars in cosmetic damage.

A collective of more than 1,000 researchers, academics and experts in artificial intelligence are speaking out against soon-to-be-published research that claims to use neural networks to “predict criminality.” At the time of writing, more than 50 employees working on AI at companies like Facebook, Google and Microsoft had signed on to an open letter opposing the research and imploring its publisher to reconsider.

The controversial research is set to be highlighted in an upcoming book series by Springer, the publisher of Nature. Its authors make the alarming claim that their automated facial recognition software can predict if a person will become a criminal, citing the utility of such work in law enforcement applications for predictive policing.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” Harrisburg University professor and co-author Nathaniel J.S. Ashby said.