
Infant-inspired framework helps robots learn to interact with objects

Over the past decades, roboticists have introduced a wide range of advanced systems that can move around in their surroundings and complete various tasks. Most of these robots can effectively collect images and other sensory data from their environment, using computer vision algorithms to interpret that data and plan their future actions.

In addition, many robots leverage large language models (LLMs) or other natural language processing (NLP) models to interpret instructions, make sense of what users are saying and respond in natural language. Despite their ability to both make sense of their surroundings and communicate with users, most robotic systems still struggle with tasks that require them to touch, grasp and manipulate objects, or to come into physical contact with people.

Researchers at Tongji University and the State Key Laboratory of Intelligent Autonomous Systems recently developed a new framework designed to improve the process by which robots learn to physically interact with their surroundings.

Windows PowerShell now warns when running Invoke-WebRequest scripts

Microsoft says Windows PowerShell now warns when running scripts that use the Invoke-WebRequest cmdlet to download web content, aiming to prevent potentially risky code from executing.

As Microsoft explains, this mitigates a high-severity PowerShell remote code execution vulnerability (CVE-2025-54100). The flaw primarily affects enterprise and IT-managed environments that rely on PowerShell scripts for automation, since such scripts are less commonly used elsewhere.

The warning has been added to Windows PowerShell 5.1, the PowerShell version installed by default on Windows 10 and Windows 11 systems, and is designed to bring it in line with the more secure web-parsing behavior already available in PowerShell 7.

Nvidia can sell the more advanced H200 AI chip to China — but will Beijing want them?

Nvidia has approval from the U.S. government to sell its more advanced H200 AI chips to China. But the question is whether Beijing wants the chips, or will let Chinese companies buy them.

The company can now ship its H200 chip to “approved customers”, provided the U.S. government gets a 25% cut of those sales. Nvidia had been effectively banned from selling any semiconductors to China earlier this year, but since July had sought to resume sales of the H20, a less advanced chip designed specifically to comply with export restrictions.

Reports had suggested Beijing prohibited local companies from buying the H20, and as a result Nvidia is not baking large China sales into its forecasts. After the ban was lifted, the Financial Times reported that China would “limit access” to the H200, citing unidentified sources.

Google CEO Sundar Pichai hints at building data centres in space; Elon Musk replies


Digital twins for personalized treatment in uro-oncology in the era of artificial intelligence

This Review focuses on the clinical effects and translational potential of digital twin applications in uro-oncology, highlights challenges and discusses future directions for implementing digital twins to achieve personalized uro-oncological diagnostics and treatment.

The Next Giant Leap For AI Is Called World Models

Unlike conventional video generators, which simply interpret a prompt and decide what video to produce, world models respond to the user’s input as the user navigates the world by moving the camera or interacting with the people and objects it contains.

Using this method, the entire world is continuously generated, frame-by-frame, based on the model’s internal understanding of how the environment and objects should behave.

This method allows the creation of highly flexible, realistic and unique environments. Imagine a video game world, for example, where almost anything can happen. The possibilities aren’t limited to situations and choices a game programmer has written into the code, because the model generates visuals and sounds to match any choice the player makes.

Unified EEG imaging improves mapping for epilepsy surgery

A new advance from Carnegie Mellon University researchers could reshape how clinicians identify the brain regions responsible for drug-resistant epilepsy. Surgery can be a life-changing option for millions of epilepsy patients worldwide, but only if physicians can accurately locate the epileptogenic zone, the area where seizures originate.

Bin He, professor of biomedical engineering, and his team have developed a unified, machine learning-based approach called spatial-temporal-spectral imaging (STSI) to assist with this task. It is the first technology capable of analyzing every major type of epileptic brain signal within a single computational framework.

Their work, published in PNAS, presents a technical breakthrough and a promising new direction for noninvasive presurgical planning.
