
From robots that flip burgers in California to ones that serve up bratwursts in Berlin, we are starting to see how machines can play sous-chef in kitchens around the world. But scientists at the University of Cambridge have been exploring how these culinary robots might not only do some of the heavy lifting but actually elevate the dining experience for the humans they serve, demonstrating some early success in a robot trained to cook omelettes.

The research project is a collaboration between the University of Cambridge researchers and domestic appliance company Beko, with the scientists setting out to take robotic cooking into new territory. Where robot chefs have been developed to prepare pizzas, pancakes and other items, the team was interested in how it might be possible to optimize the robot’s approach and produce a tastier meal based on human feedback.

“Cooking is a really interesting problem for roboticists, as humans can never be totally objective when it comes to food, so how do we as scientists assess whether the robot has done a good job?” says Dr Fumiya Iida from Cambridge’s Department of Engineering, who led the research.


Tesla has managed to attract the best artificial intelligence specialists to its Autopilot team who are committed to developing software that makes full self-driving possible. The company recently published two patents that relate to improvements in this area.

Tesla Filed Patent ‘Enhanced object detection for autonomous vehicles based on field view’ https://www.tesmanian.com/blogs/tesmanian-blog/patent-enhanc…um=twitter pic.twitter.com/IU6tdaOlH7 — Tesmanian.com (@Tesmanian_com) June 5, 2020

Pleased to have been the guest on this most recent episode of Javier Ideami’s Beyond podcast. We discuss everything from #spaceexploration to #astrobiology!


In this episode, we travel from Ferdinand Magellan’s voyage to the first mission to Mars with Bruce Dorminey. Bruce is a science journalist and author who primarily covers aerospace, astronomy and astrophysics. He is a regular contributor to Astronomy magazine and, since 2012, has written a regular tech column for Forbes. He is also a correspondent for Renewable Energy World. The author of “Distant Wanderers: The Search for Planets Beyond the Solar System”, he was a 1998 winner of the Royal Aeronautical Society’s Aerospace Journalist of the Year Awards (AJOYA) and a founding team member of the NASA Astrobiology Institute’s Science Communication Focus Group.


The COVID-19 pandemic will have a profound impact on robotics, as more companies look to automation as a way forward. While wide-scale automation had long seemed like an inevitability, the pandemic is set to accelerate the push as corporations look for processes that remove the human element from the equation.

Of course, Locus Robotics hasn’t had too much of an issue raising money previously. The Massachusetts-based startup, which raised $26 million back in April of last year, is adding a $40 million Series D to its funds. That brings the full amount to north of $105 million. This latest round, led by Zebra Technologies, comes as the company looks to expand operations with the launch of a European HQ.

“The new funding allows Locus to accelerate expansion into global markets,” CEO Rick Faulk said in a release, “enabling us to strengthen our support of retail, industrial, healthcare, and 3PL businesses around the world as they navigate through the COVID-19 pandemic, ensuring that they come out stronger on the other side.”

Over the last few years, the size of deep learning models has increased at an exponential pace (famously among language models):

And in fact, this chart is already out of date: as of this month, OpenAI has announced GPT-3, a 175-billion-parameter model, roughly ten times the height of this chart.

As models grow larger, they introduce new infrastructure challenges. For my colleagues and me building Cortex (open-source model-serving infrastructure), these challenges are front and center, especially as more users deploy large models to production.
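To make the infrastructure challenge concrete, here is a back-of-the-envelope sketch of what 175 billion parameters means for memory alone (the precision assumptions are illustrative, not specific to Cortex or OpenAI's deployment):

```python
def model_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold a model's weights, in gibibytes."""
    return n_params * bytes_per_param / 1024**3

gpt3_params = 175e9  # GPT-3's reported parameter count

# 16-bit (half-precision) weights: ~326 GB; 32-bit: ~652 GB.
# Either way, far more than any single GPU can hold.
print(f"fp16: ~{model_memory_gb(gpt3_params):.0f} GB")
print(f"fp32: ~{model_memory_gb(gpt3_params, 4):.0f} GB")
```

Serving a model of that size means splitting its weights across many accelerators, which is exactly the kind of infrastructure problem described above.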

When Plato set out to define what made a human a human, he settled on two primary characteristics: We do not have feathers, and we are bipedal (walking upright on two legs). Plato’s characterization may not encompass all of what identifies a human, but his reduction of an object to its fundamental characteristics provides an example of a technique known as principal component analysis.

Now, Caltech researchers have combined tools from machine learning and neuroscience to discover that the brain uses a mathematical system to organize visual objects according to their principal components. The work shows that the brain contains a two-dimensional map of cells representing different objects. The location of each cell in this map is determined by the principal components (or features) of its preferred objects; for example, cells that respond to round, curvy objects like faces and apples are grouped together, while cells that respond to spiky objects like helicopters or chairs form another group.
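Plato's featherless-biped reduction maps neatly onto how principal component analysis works in practice: many raw features collapse onto a few directions that explain most of the variation. A minimal sketch using NumPy's SVD on synthetic data (the data and dimensions are invented for illustration, not the study's actual stimuli):

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 synthetic "objects" described by 5 features, but whose variation
# really lives along just 2 underlying directions (plus a little noise)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))
X += 0.05 * rng.normal(size=(200, 5))

# Center the data; the right singular vectors of the centered matrix
# are the principal components, ordered by the variance they explain
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / np.sum(S**2)

# Project each object onto the top two components: a 2-D "map"
# analogous to the object map the Caltech team describes
coords = Xc @ Vt[:2].T
print(explained.round(3))  # the first two values dominate
```

The first two components capture almost all of the variance, which is the sense in which a handful of principal components can stand in for many raw features.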

The research was conducted in the laboratory of Doris Tsao (BS ‘96), professor of biology, director of the Tianqiao and Chrissy Chen Center for Systems Neuroscience and holder of its leadership chair, and Howard Hughes Medical Institute Investigator. A paper describing the study appears in the journal Nature on June 3.

Learning quantum error correction: the image visualizes the activity of artificial neurons in the Erlangen researchers’ neural network while it is solving its task. © Max Planck Institute for the Science of Light.

Neural networks enable learning of error correction strategies for computers based on quantum physics

Quantum computers could solve complex tasks that are beyond the capabilities of conventional computers. However, quantum states are extremely sensitive to constant interference from their environment. The plan is to counter this with active protection based on quantum error correction. Florian Marquardt, Director at the Max Planck Institute for the Science of Light, and his team have now presented a quantum error correction system that is capable of learning thanks to artificial intelligence.
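The core idea of error correction, stripped of the quantum machinery, can be shown with the classical three-bit repetition code. This toy sketch is only an analogy and does not represent the Erlangen group's neural-network method:

```python
import random

def encode(bit):
    # Repetition code: store one logical bit as three physical bits
    return [bit, bit, bit]

def noisy_channel(bits, p):
    # Each physical bit flips independently with probability p
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    # Majority vote: recovers the logical bit if at most one bit flipped
    return int(sum(bits) >= 2)

random.seed(0)
p, trials = 0.1, 10_000
raw = sum(random.random() < p for _ in range(trials)) / trials
coded = sum(decode(noisy_channel(encode(0), p)) for _ in range(trials)) / trials
print(raw, coded)  # coded error rate is far below the raw flip rate
```

Quantum error correction faces the extra obstacles that qubits cannot be copied and cannot be measured directly without disturbing them, which is why adaptive strategies, including learned ones, are an active research area.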