
Imagine that a robot is helping you clean the dishes. You ask it to grab a soapy bowl out of the sink, but its gripper slightly misses the mark.

Using a new framework developed by MIT and NVIDIA researchers, you could correct that robot’s behavior with simple interactions. The method would allow you to point to the bowl or trace a trajectory to it on a screen, or simply give the robot’s arm a nudge in the right direction.

The work has been published on the pre-print server arXiv.

An estimated 80 million people worldwide live with a tremor, including many who live with Parkinson's disease. These involuntary periodic movements can strongly affect patients' ability to perform daily activities, such as drinking from a glass or writing.

Wearable soft robotic devices offer a potential solution to suppress such tremors. However, existing prototypes are not yet sophisticated enough to provide a real remedy.

Scientists at the Max Planck Institute for Intelligent Systems (MPI-IS), the University of Tübingen, and the University of Stuttgart, working under the Bionic Intelligence Tübingen Stuttgart (BITS) collaboration, want to change this. The team equipped a biorobotic arm with two strands of artificial muscles strapped along the forearm.

Johns Hopkins University engineers have developed a pioneering prosthetic hand that can grip plush toys, water bottles, and other everyday objects like a human, carefully conforming and adjusting its grasp to avoid damaging or mishandling whatever it holds.

The system’s hybrid design is a first for robotic hands, which have typically been too rigid or too soft to replicate a human’s touch when handling objects of varying textures and materials. The innovation offers a promising solution for people with hand loss and could improve how robotic arms interact with their environment.

Details about the device appear in Science Advances.

Neural networks, a type of artificial intelligence modeled on the connectivity of the human brain, are driving critical breakthroughs across a wide range of scientific domains. But these models face significant threat from adversarial attacks, which can derail predictions and produce incorrect information.

Los Alamos National Laboratory researchers have now pioneered a novel purification strategy that counteracts adversarial attacks and preserves the robust performance of neural network models. Their research is published on the arXiv preprint server.

“Adversarial attacks to AI systems can take the form of tiny, near-invisible tweaks to input images, subtle modifications that can steer the model toward the outcome an attacker wants,” said Manish Bhattarai, Los Alamos computer scientist. “Such vulnerabilities allow malicious actors to flood digital channels with deceptive or harmful content under the guise of genuine outputs, posing a direct threat to trust and reliability in AI-driven technologies.”
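The near-invisible input tweak Bhattarai describes can be illustrated with a toy example. The sketch below applies an FGSM-style perturbation to a hypothetical linear classifier; all weights and values are made up for illustration, and this is not the Los Alamos purification method itself, only the kind of attack such a defense targets:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy "model": a logistic classifier with fixed, hypothetical weights.
w = [1.5, -2.0, 0.5]
b = 0.1

def predict(x):
    # Classifier confidence that the input belongs to the positive class.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(v):
    return (v > 0) - (v < 0)

# A clean input the model classifies confidently as positive.
x = [1.0, -0.5, 0.2]
clean_score = predict(x)

# FGSM-style attack: nudge each feature by a small epsilon in the direction
# that lowers the score. For a linear model, stepping against sign(w)
# maximally reduces the logit per unit of perturbation.
epsilon = 0.3
x_adv = [xi - epsilon * sign(wi) for wi, xi in zip(w, x)]
adv_score = predict(x_adv)

print(f"clean score: {clean_score:.3f}, adversarial score: {adv_score:.3f}")
```

Even though each feature moves by at most 0.3, the model's confidence drops noticeably; purification defenses aim to strip such perturbations from the input before the model sees it.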

You can talk to an AI chatbot about pretty much anything, from help with daily tasks to the problems you may need to solve. Its answers reflect the human data that taught it how to act like a person; but how human-like are the latest chatbots, really?

As people turn to AI chatbots for more of their internet needs, and the bots get incorporated into more applications from shopping to health care, a team of researchers sought to understand how AI bots replicate human empathy, the ability to understand and share another person's feelings.

A study posted to the arXiv preprint server and led by UC Santa Cruz Professor of Computational Media Magy Seif El-Nasr and Stanford University Researcher and UCSC Visiting Scholar Mahnaz Roshanaei, explores how GPT-4o, the latest model from OpenAI, evaluates and performs empathy. In investigating the main differences between humans and AI, they find that major gaps exist.

A team of AI researchers at Palisade Research has found that several leading AI models will resort to cheating at chess to win when playing against a superior opponent. They have published a paper on the arXiv preprint server describing experiments they conducted with several well-known AI models playing against an open-source chess engine.

As AI models continue to mature, researchers and users have begun considering the risks. For example, chatbots not only accept wrong answers as fact but also fabricate false responses when they cannot find a reasonable reply. And as AI models are put to use in real-world business applications such as filtering resumes and estimating stock trends, users have begun to wonder what sorts of actions they will take when they become uncertain or confused.

In this new study, the team in California found that many of the most recognized AI models will intentionally cheat to give themselves an advantage if they determine they are not winning.

Will Humans Have to Merge with AI to Survive?
What if the only way to survive the AI revolution is to stop being human?
Ray Kurzweil, one of the most influential futurists and the godfather of AI, predicts that humans will soon reach a turning point where merging with AI becomes essential for survival. But what does this truly mean? Will we evolve into superintelligent beings, or will we lose what makes us human?
In this video, we explore Kurzweil’s bold predictions, the concept of the Singularity, and the reality of AI-human integration. From Neuralink to the idea of becoming “human cyborgs,” we examine whether merging with AI is an inevitable step in human evolution—or a path toward losing our biological identity.
Are we truly ready for a world where there are no biological limitations?
Chapters:
Intro 00:00 — 01:11
Ray Kurzweil’s Predictions 01:11 — 02:23
Singularity Is Nearer 02:23 — 04:05
What Does “Merging with AI” Really Mean? 04:05 — 04:35
Neuralink 04:35 — 07:02
Why Would We Need to Merge with AI? 07:02 — 10:04
Human Life After Merging with AI 10:04 — 12:30
Idea of Becoming ‘Human Cyborg’ 12:30 — 14:33
No Biological Limitations 14:33 — 17:24

Humans naturally perceive their bodies and anticipate the outcomes of their movements, a trait robotics experts aim to replicate in machines for enhanced adaptability and efficiency.

Now, researchers have developed an autonomous robotic arm capable of learning its physical form and movement by observing itself through a camera. This approach is akin to a robot learning to dance by watching its reflection.

Columbia Engineering researchers claim this technique enables robots to adapt to damage and acquire new skills autonomously.

A notable aspect of the CL1 is its ability to learn and adapt to tasks. Previous research has demonstrated that neuron-based systems can be trained to perform basic functions, such as playing simple video games. Cortical Labs’ work suggests that integrating biological elements into computing could improve efficiency in tasks that traditional AI struggles with, such as pattern recognition and decision-making in unpredictable environments.

Cortical Labs says that the first CL1 computers will be available for shipment to customers in June, with each unit priced at approximately $35,000.

The use of human neurons in computing raises questions about the future of AI development. Biological computers like the CL1 could provide advantages over conventional AI models, particularly in terms of learning efficiency and energy consumption. The adaptability of neurons could lead to improvements in robotics, automation, and complex data analysis.

ChatGPT, OpenAI’s AI-powered chatbot platform, can now directly edit code — if you’re on macOS, that is. The newest version of the ChatGPT app for macOS can take action to edit code in supported developer tools, including Xcode, VS Code, and JetBrains. Users can optionally turn on an auto-apply mode so ChatGPT can make edits without the need for additional clicks.

Subscribers to ChatGPT Plus, Pro, and Team can use the code editing feature as of Thursday by updating their macOS app. OpenAI says that code editing will roll out to Enterprise, Edu, and free users next week.

In a post on X, Alexander Embiricos, a member of OpenAI’s product staff working on desktop software, added that the ChatGPT app for Windows will get direct code editing “soon.”

Direct code editing builds on OpenAI's "work with apps" capability for ChatGPT, which the company launched in beta in November 2024. Work with apps allows the ChatGPT app for macOS to read code in a handful of dev-focused coding environments, minimizing the need to copy and paste code into ChatGPT.

With the ability to directly edit code, ChatGPT now competes more directly with popular AI coding tools like Cursor and GitHub Copilot. OpenAI reportedly has ambitions to launch a dedicated product to support software engineering in the months ahead.

AI coding assistants are becoming wildly popular, with the vast majority of respondents in GitHub's latest poll saying they've adopted AI tools in some form. Y Combinator partner Jared Friedman recently claimed that a quarter of YC's W25 startup batch has codebases that are 95% AI-generated.