
That was a key takeaway from a recent conversation between psychologist Daniel Kahneman and MIT professor of brain and cognitive sciences Josh Tenenbaum at the Conference on Neural Information Processing Systems (NeurIPS). The pair spoke during the virtual event about the shortcomings of human judgment and what we can learn from them while building A.I.

Kahneman, a Nobel Prize winner in economic sciences and the author of Thinking, Fast and Slow, noted an instance in which humans use judgment heuristics—shortcuts, essentially—to answer questions they don’t know the answer to. In the example, people are given a small amount of information about a student: She’s about to graduate, and she was reading fluently when she was 4 years old. From that, they’re asked to estimate her grade point average.

Using this information, many people will estimate the student’s GPA to be 3.7 or 3.8. To arrive there, Kahneman explained, they assign her a percentile on the intelligence scale—usually very high, given what they know about her reading ability at a young age. Then they assign her a GPA in what they estimate to be the corresponding percentile.
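
The statistical alternative Kahneman contrasts with this shortcut is regression toward the mean: the evidence’s z-score should be shrunk by its correlation with the outcome before being mapped onto a GPA. A minimal sketch of both predictions (the GPA distribution and the correlation value are illustrative assumptions, not figures from the talk):

```python
from statistics import NormalDist

norm = NormalDist()

# Evidence: reading fluently at age 4 puts the student at, say,
# the 95th percentile of the predictor distribution.
evidence_percentile = 0.95
z_evidence = norm.inv_cdf(evidence_percentile)

# GPA distribution at her school (hypothetical values for illustration).
gpa_mean, gpa_sd = 3.1, 0.4

# Heuristic (non-regressive) prediction: match the percentile directly.
gpa_heuristic = gpa_mean + z_evidence * gpa_sd

# Statistical prediction: shrink by the predictor-outcome correlation.
# Early reading is at best weakly predictive of college GPA; r = 0.3
# is a hypothetical value, not one from the talk.
r = 0.3
gpa_regressive = gpa_mean + r * z_evidence * gpa_sd

print(f"heuristic estimate:  {gpa_heuristic:.2f}")   # ~3.76, the 3.7-3.8 Kahneman describes
print(f"regressive estimate: {gpa_regressive:.2f}")  # ~3.30, much closer to the mean
```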

Most often, we recognize deep learning as the magic behind self-driving cars and facial recognition, but what about its ability to safeguard the quality of the materials that make up these advanced devices? Professor of Materials Science and Engineering Elizabeth Holm and Ph.D. student Bo Lei have adopted computer vision methods for microstructural images that not only require a fraction of the data deep learning typically relies on but can also save materials researchers a great deal of time and money.

Quality control in materials processing requires the analysis and classification of complex material microstructures. For instance, the properties of some high-strength steels depend on the amount of lath-type bainite in the material. However, identifying bainite in microstructural images is time-consuming and expensive, as researchers must first use two types of microscopy to take a closer look and then rely on their own expertise to identify bainitic regions. “It’s not like identifying a person crossing the street when you’re driving a car,” Holm explained. “It’s very difficult for humans to categorize, so we will benefit a lot from integrating a machine learning approach.”

Their approach is very similar to that of the wider computer-vision community that drives facial recognition. The model is trained on existing material microstructure images to evaluate new images and interpret their classification. While companies like Facebook and Google train their models on millions or billions of images, materials scientists rarely have access to even ten thousand images. Therefore, it was vital that Holm and Lei use a “data-frugal method,” and train their model using only 30–50 microscopy images. “It’s like learning how to read,” Holm explained. “Once you’ve learned the alphabet you can apply that knowledge to any book. We are able to be data-frugal in part because these systems have already been trained on a large database of natural images.”
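
The article doesn’t reproduce the authors’ code, but the pattern it describes, reusing a network pretrained on natural images as a frozen feature extractor and fitting a small classifier on a few dozen labeled micrographs, looks roughly like the sketch below (the file paths, labels, and choice of ResNet-50 are hypothetical):

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import LogisticRegression

# Backbone pretrained on ImageNet, used as a frozen feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    """Map a list of image paths to pretrained feature vectors."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    return backbone(batch).numpy()

# A few dozen labeled micrographs suffice because the backbone has already
# "learned the alphabet" from natural images. Paths and labels are invented.
train_paths = [f"micrographs/train_{i}.png" for i in range(40)]
train_labels = [0] * 20 + [1] * 20  # 0 = no bainite, 1 = bainitic region present

clf = LogisticRegression(max_iter=1000).fit(embed(train_paths), train_labels)
print(clf.predict(embed(["micrographs/new_sample.png"])))
```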

Stanford has made a lot of progress over the years with its gecko-inspired robotic hand. In May, a version of the “gecko gripper” even found its way onto the International Space Station to test its ability to perform tasks like collecting debris and fixing satellites.

In a paper published today in Science Robotics, researchers at the university are demonstrating a far more terrestrial application for the tech: picking delicate objects. It’s something that’s long been a challenge for rigid robot hands, leading to a wide range of different solutions, including soft robotic grippers.

The team is showing off FarmHand, a four-fingered gripper inspired by both the dexterity of the human hand and the unique gripping capabilities of geckos. Of the latter, Stanford notes that the adhesive surface “creates a strong hold via microscopic flaps” that exploit the van der Waals force, “a weak intermolecular force that results from subtle differences in the positions of electrons on the outsides of molecules.”

A team of researchers affiliated with multiple institutions in Korea has developed a robot hand that has abilities similar to human hands. In their paper published in the journal Nature Communications, the group describes how they achieved a high level of dexterity while keeping the hand’s size and weight low enough to attach to a robot arm.

Creating hands with the dexterity, strength and flexibility of the human hand is a challenging task for engineers; typically, some attributes are sacrificed to allow for others. In this new effort, the researchers developed a new robotic hand based on a linkage-driven mechanism that allows it to articulate similarly to the human hand. They began their work by conducting a survey of existing robotic hands and assessing their strengths and weaknesses. They then drew up a list of features they believed their hand should have, such as fingertip force, a high degree of controllability, low cost and high dexterity.

The researchers call their new hand an integrated linkage-driven dexterous anthropomorphic (ILDA) robotic hand, and just like its human counterpart, it has four fingers and a thumb, each with three joints. Also like the human hand, it has fingertip sensors. The hand is just 22 centimeters long. Overall, it has 20 joints, which give it 15 degrees of freedom. It is also strong, able to exert a crushing force of 34 newtons, and it weighs just 1.1 kg.
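
The paper’s linkage mechanism is far more intricate than anything shown here, but a planar toy model of a three-joint finger illustrates the basic mapping from joint angles to fingertip position that any such hand must control (the link lengths are invented for illustration):

```python
import math

def fingertip_position(angles, lengths):
    """Planar forward kinematics for a serial finger:
    each joint angle is measured relative to the previous link."""
    x = y = 0.0
    heading = 0.0
    for theta, length in zip(angles, lengths):
        heading += theta
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# Three joints per finger, as in the ILDA hand; link lengths in cm are illustrative.
angles = [math.radians(a) for a in (30, 45, 20)]
lengths = [4.5, 3.0, 2.5]
print(fingertip_position(angles, lengths))
```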

Why AI will completely take over science by the end of the next decade:

“It can take decades for scientists to identify physical laws, statements that explain anything from how gravity affects objects to why energy can’t be created or destroyed. Purdue University researchers have found a way to use machine learning for reducing that time to just a few days.”
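
The quote doesn’t say which method the Purdue team used, but a common way to recover a physical law from data is to fit a library of candidate terms and prune the ones the measurements don’t support. A toy version that recovers the free-fall law s = ½gt² from simulated drop data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated drop experiment: distance fallen vs. time, with measurement noise.
t = np.linspace(0.1, 2.0, 50)
s = 0.5 * 9.81 * t**2 + rng.normal(0, 0.05, t.size)

# Candidate library of terms a law might contain: 1, t, t^2.
library = np.column_stack([np.ones_like(t), t, t**2])
coeffs, *_ = np.linalg.lstsq(library, s, rcond=None)

# Prune near-zero terms to leave a sparse, interpretable law.
coeffs[np.abs(coeffs) < 0.1] = 0.0
print(coeffs)  # ~[0, 0, 4.9]  ->  s = 4.9 t^2 = (1/2) g t^2
```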

You are on the PRO Robots channel, and in this video we present the latest high-tech news. Robots like humans: the most realistic humanoid robot in the world, flying cars of the future, xenobots (nanorobots that have learned to multiply), a nanochip for reprogramming living matter, drones with legs, universal robots, robotic cleaners and other high-tech news in one video! Watch the video to the end and write in the comments which news item you found most interesting.

0:00 In this video.
2:25 Ameca Robot-Humanoid.
3:10 XPENG X2 flying car.
4:15 Robotic Systems Lab.
5:20 3D printed feet for drones.
6:10 Silicon nanochip for reprogramming biological tissue in living organism.
6:51 ATEA air cab.
7:23 Google unveiled its new project — Starline.
8:13 University scientists created xenobots that suddenly began to multiply.
9:06 MIRA surgical robot.
9:31 Moxi twin robots.
10:23 American startup DroneDek.
11:00 SOMATIC.
11:25 Neutron mid-range rocket.

The HB1 has a 30 m range when operated from the ground, but its range is potentially unlimited if the tether can be supplied from the roof. The robot can be equipped with different attachments, such as a brush, a robot arm, an airless spray, or concrete surveying equipment.

To ensure that the robot itself doesn’t fall, it had to undergo extensive electromagnetic compatibility (EMC) testing to make sure that the fans, which essentially attach it to the surface, keep functioning correctly.

The WMG SME team tested the robot by placing it in an EMC chamber and assessing how it responded to electromagnetic noise, and also verified that the robot did not emit unwanted noise of its own. Using amplifiers to simulate noise, along with analyzers, the researchers were able to detect any unwanted interference and emissions from the robot and record the results.
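
The article doesn’t detail the measurement chain, but in spirit an emissions sweep amounts to transforming a captured signal into the frequency domain and flagging any component that crosses a limit line. A toy illustration (the sample rate, spurious tone, and limit value are all invented):

```python
import numpy as np

fs = 1_000_000  # 1 MHz sample rate (hypothetical capture)
t = np.arange(0, 0.01, 1 / fs)

# Captured waveform: broadband noise plus a spurious 150 kHz emission.
rng = np.random.default_rng(1)
signal = rng.normal(0, 0.01, t.size) + 0.2 * np.sin(2 * np.pi * 150_000 * t)

# Frequency-domain view, as an EMC analyzer would present it.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

limit = 0.05  # invented flat limit line for illustration
violations = freqs[spectrum > limit]
print("emission peaks above limit (Hz):", violations)
```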