
Deep learning model advances how robots can independently grasp objects

Robots still cannot perform everyday manipulation tasks, such as grasping or rearranging objects, with the same dexterity as humans. Brazilian scientists have now moved the field a step closer by developing a system that uses deep learning algorithms to improve a robot’s ability to work out on its own how to grasp an object, a capability known as autonomous robotic grasp detection.

In a paper published Feb. 24 in Robotics and Autonomous Systems, a team of engineers from the University of São Paulo addressed existing problems with the visual perception phase that occurs when a robot grasps an object. They created a model using deep learning neural networks that decreased the time a robot needs to process visual data, perceive an object’s location and successfully grasp it.

Deep learning is a subset of machine learning in which computer algorithms learn from data and improve automatically through experience. Inspired by the structure and function of the human brain, deep learning uses multilayered structures of algorithms called neural networks, which identify patterns and classify different types of information much as the brain does. Deep learning models are often based on convolutional neural networks, which specialize in analyzing visual imagery.
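
To make the idea concrete, here is a minimal sketch in PyTorch of the kind of convolutional network used for grasp detection: it maps a camera image to a five-parameter grasp rectangle (center position, rotation angle, width and height), a common output representation in this literature. The architecture and all names are hypothetical illustrations, not the São Paulo team’s model.

```python
import torch
import torch.nn as nn

class GraspCNN(nn.Module):
    """Toy convolutional network that regresses a 5-D grasp rectangle
    (center x, center y, rotation angle, width, height) from an RGB image.
    Hypothetical architecture for illustration only."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2),   # 224 -> 112
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),  # 112 -> 56
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 56 -> 28
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # global average pooling to 1x1
        )
        self.head = nn.Linear(64, 5)      # x, y, theta, w, h

    def forward(self, image):
        feats = self.features(image).flatten(1)
        return self.head(feats)

model = GraspCNN()
rgb = torch.randn(1, 3, 224, 224)         # one dummy camera frame
print(model(rgb))                          # predicted grasp rectangle
```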

AI Meets Chipmaking: Applied Materials Incorporates AI In Wafer Inspection Process

Advanced system-on-chip designs are extremely complex in terms of transistor count and are hard to build using the latest fabrication processes. In a bid to make production of next-generation chips economically feasible, chip fabs need to ensure high yields early in their lifecycle by quickly finding and correcting defects.

But finding and fixing defects is not easy today: traditional optical inspection tools don’t offer sufficiently detailed image resolution, while high-resolution e-beam and multibeam inspection tools are relatively slow. Looking to bridge the gap on inspection cost and time, Applied Materials has been developing a technology called ExtractAI, which combines the company’s latest Enlight optical inspection tool, SEMVision G7 e-beam review system, and deep learning (AI) to quickly find flaws. And surprisingly, this solution has already been in use for about a year.
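
Applied Materials has not published implementation details, but the division of labor it describes, a fast optical scan that flags candidate sites and a slow, high-resolution e-beam tool that confirms them, is a classic triage pattern. A minimal sketch of that pattern follows; every class, function and threshold in it is hypothetical rather than part of ExtractAI.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    x_um: float            # wafer coordinates of the flagged site
    y_um: float
    optical_signal: float  # raw anomaly strength from the optical scan

def classifier_score(c: Candidate) -> float:
    """Stand-in for a trained model estimating the probability that a
    flagged site is a yield-critical defect rather than optical noise.
    A real system would use image features, not this toy heuristic."""
    return min(1.0, c.optical_signal / 10.0)

def triage(candidates: List[Candidate], ebeam_budget: int) -> List[Candidate]:
    """Rank optically flagged sites by classifier score and send only
    the top `ebeam_budget` of them to slow e-beam review."""
    ranked = sorted(candidates, key=classifier_score, reverse=True)
    return ranked[:ebeam_budget]

sites = [Candidate(120.0, 45.5, 8.2), Candidate(300.1, 90.0, 1.1),
         Candidate(55.3, 210.7, 6.7)]
for site in triage(sites, ebeam_budget=2):
    print(f"review ({site.x_um}, {site.y_um}) um")
```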

“Applied’s new playbook for process control combines Big Data and AI to deliver an intelligent and adaptive solution that accelerates our customers’ time to maximum yield,” said Keith Wells, group vice president and general manager, Imaging and Process Control at Applied Materials. “By combining our best-in-class optical inspection and eBeam review technologies, we have created the industry’s only solution with the intelligence to not only detect and classify yield-critical defects but also learn and adapt to process changes in real-time. This unique capability enables chipmakers to ramp new process nodes faster and maintain high capture rates of yield-critical defects over the lifetime of the process.”

Deep science: AI is in the air, water, soil and steel

Research papers come out far too rapidly for anyone to read them all, especially in the field of machine learning, which now affects (and produces papers in) practically every industry and company. This column aims to collect some of the most relevant recent discoveries and papers — particularly in but not limited to artificial intelligence — and explain why they matter.

This week brings a few unusual applications of or developments in machine learning, as well as a particularly unusual rejection of the method for pandemic-related analysis.

One hardly expects to find machine learning in the domain of government regulation, if only because one assumes federal regulators are hopelessly behind the times when it comes to this sort of thing. So it may surprise you that the U.S. Environmental Protection Agency has partnered with researchers at Stanford to algorithmically root out violators of environmental rules.
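
The reported approach is a form of risk-based targeting: with far more regulated facilities than inspectors, a model estimates which sites are most likely to be out of compliance so that limited inspections go furthest. A deliberately toy sketch of that idea, with invented features and weights that are not the Stanford/EPA model:

```python
def violation_risk(facility: dict) -> float:
    """Stand-in for a trained model estimating the probability that a
    facility is out of compliance. Features and weights are invented
    purely to illustrate risk-based targeting."""
    return (0.6 * facility["past_violations"] / 10
            + 0.4 * facility["complaints"] / 5)

facilities = [
    {"name": "Plant A", "past_violations": 7, "complaints": 4},
    {"name": "Plant B", "past_violations": 1, "complaints": 0},
    {"name": "Plant C", "past_violations": 3, "complaints": 5},
]

# With a limited inspection budget, visit the highest-risk sites first.
for f in sorted(facilities, key=violation_risk, reverse=True)[:2]:
    print("inspect", f["name"])
```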

Thought-detection: AI has infiltrated our last bastion of privacy

Ahsan Noor Khan, a PhD student and first author of the study, said: “We’re now looking to investigate how we could use low-cost existing systems, such as Wi-Fi routers, to detect emotions of a large number of people gathered, for instance in an office or work environment.” Among other things, this could be useful for HR departments to assess how new policies introduced in a meeting are being received, regardless of what the recipients might say. Outside of an office, police could use this technology to look for emotional changes in a crowd that might lead to violence.

Research from the UK and an update from Elon Musk on human trials at his brain interface company show software is now eating the mind.

First collaborative robot to work with vehicles in motion

The Ph.D. thesis by Daniel Teso-Fernández de Betoño of the UPV/EHU Faculty of Engineering in Vitoria-Gasteiz has resulted in a mobile, collaborative platform capable of performing tasks in motion at the Mercedes-Benz plant in the capital of Alava. The research opens up a new field for improving the ergonomics of these workstations and for humans and robots to collaborate on shared tasks.

The idea of collaborative robotics with autonomous navigation to perform screwdriving tasks in motion emerged at the Mercedes-Benz plant in Vitoria-Gasteiz. For his Ph.D. thesis, Daniel Teso-Fernández de Betoño set out to investigate, develop and implement an adequate, efficient technology for the production lines, one that would cooperate with the workers.

On the Mercedes-Benz final assembly lines, the vast majority of tasks require manual operations. It is also an area where everything is in motion, which restricts who is physically able to work at these stations.
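
The central control problem in such work is keeping the robot’s tool stationary relative to a vehicle that never stops moving. The article does not describe the platform’s actual controller, so the following is only a generic sketch of velocity feed-forward plus proportional correction, with all numbers invented:

```python
def platform_speed(conveyor_speed: float, position_error: float,
                   kp: float = 0.8) -> float:
    """Feed the conveyor speed forward and correct any residual offset
    between the tool and the work point with a proportional term.
    A toy controller illustrating work on a moving line; not the
    thesis platform's control scheme."""
    return conveyor_speed + kp * position_error

# Simulate the platform converging onto a workpiece moving at 0.15 m/s.
dt, conveyor, platform_pos, workpiece_pos = 0.1, 0.15, 0.0, 0.5
for step in range(30):
    error = workpiece_pos - platform_pos
    platform_pos += platform_speed(conveyor, error) * dt
    workpiece_pos += conveyor * dt
print(f"residual offset after 3 s: {workpiece_pos - platform_pos:.4f} m")
```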

Solving ‘barren plateaus’ is the key to quantum machine learning

Many machine learning algorithms on quantum computers suffer from the dreaded “barren plateau” of unsolvability, where they run into dead ends on optimization problems. This challenge had been relatively unstudied—until now. Rigorous theoretical work has established theorems that guarantee whether a given machine learning algorithm will work as it scales up on larger computers.
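
A barren plateau shows up empirically as gradient variance that collapses toward zero as qubits are added, leaving an optimizer with no slope to follow. The sketch below estimates that variance for randomly initialized layered circuits of increasing width; it uses PennyLane’s simulator as an assumed tool and a stock ansatz, and is an illustration of the phenomenon, not the Los Alamos team’s code.

```python
import pennylane as qml
from pennylane import numpy as np

def gradient_variance(n_qubits, n_layers=5, n_samples=30):
    """Estimate Var[dC/dtheta] over randomly initialized circuits.
    On a barren plateau this variance decays exponentially in n_qubits."""
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def cost(params):
        qml.StronglyEntanglingLayers(params, wires=range(n_qubits))
        return qml.expval(qml.PauliZ(0) @ qml.PauliZ(1))

    grad_fn = qml.grad(cost)
    shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers,
                                               n_wires=n_qubits)
    samples = []
    for _ in range(n_samples):
        params = np.array(np.random.uniform(0, 2 * np.pi, size=shape),
                          requires_grad=True)
        samples.append(grad_fn(params)[0, 0, 0])  # one parameter's gradient
    return np.var(samples)

for n in (2, 4, 6, 8):
    print(f"{n} qubits: gradient variance ~ {gradient_variance(n):.5f}")
```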

“The work solves a key problem of usability for quantum machine learning. We rigorously proved the conditions under which certain architectures of variational quantum algorithms will or will not have barren plateaus as they are scaled up,” said Marco Cerezo, lead author on the paper published in Nature Communications today by a Los Alamos National Laboratory team. Cerezo is a postdoctoral researcher at Los Alamos. “With our theorems, you can guarantee that the architecture will be scalable to quantum computers with a large number of qubits.”

“Usually the approach has been to run an optimization and see if it works, and that was leading to fatigue among researchers in the field,” said Patrick Coles, a coauthor of the study. Establishing mathematical theorems and working from first principles takes the guesswork out of developing algorithms.
