
One image is all robots need to find their way

While the capabilities of robots have improved significantly over the past decades, they are not always able to reliably and safely move in unknown, dynamic and complex environments. To move in their surroundings, robots rely on algorithms that process data collected by sensors or cameras and plan future actions accordingly.

Researchers at Skolkovo Institute of Science and Technology (Skoltech) have developed SwarmDiffusion, a new lightweight generative AI model that can predict where a robot should go and how it should move from a single image. SwarmDiffusion, introduced in a paper posted on the preprint server arXiv, relies on a diffusion model, a technique that gradually adds noise to input data and then removes it to produce the desired output.
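The core noising idea behind diffusion models can be sketched in a few lines. This is a toy 1-D illustration of the forward (noise-adding) process only, not SwarmDiffusion's actual architecture; the function name and the linear noise schedule are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(x, t, T=100):
    """Blend the signal toward pure Gaussian noise as t goes from 0 to T.

    A toy linear schedule: at t=0 the input is untouched, at t=T the
    output is pure noise. Real diffusion models use more careful
    schedules, but the principle is the same.
    """
    alpha = 1.0 - t / T          # fraction of the original signal kept
    noise = rng.standard_normal(x.shape)
    return alpha * x + (1.0 - alpha) * noise

# A toy "clean" signal standing in for an image.
x0 = np.linspace(-1.0, 1.0, 8)

x_half = forward_noise(x0, t=50)    # partially noised
x_full = forward_noise(x0, t=100)   # essentially pure noise

# A trained model learns the reverse: predict and subtract the noise at
# each step, so that starting from x_full it can recover a clean sample.
print(np.allclose(forward_noise(x0, t=0), x0))  # → True (t=0 keeps the signal intact)
```

Training then amounts to teaching a network to undo one noising step at a time; generation runs those learned denoising steps in sequence, starting from random noise.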

“Navigation is more than ‘seeing’; a robot also needs to decide how to move, and this is where current systems still feel outdated,” Dzmitry Tsetserukou, senior author of the paper, told Tech Xplore.

Gmail’s new AI Inbox uses Gemini, but Google says it won’t train AI on user emails

Google says it’s rolling out a new feature called ‘AI Inbox,’ which summarizes all your emails, but the company promises it won’t train its models on them.

On Thursday, Google announced a new era of Gmail where Gemini will be taking over your default inbox screen.

Google argues that email has changed since 2004, as users are now bombarded with hundreds of emails every week, and volume keeps rising.

No AI Has Impressed Me

Stephen Wolfram, a physicist, computer scientist and founder of Wolfram Research, has been hunting for a theory of everything since his first days as a particle physicist at Caltech. Wolfram put that mission to the side to focus on his business, but the success of artificial intelligence and computational science has encouraged Wolfram to pick up the quest to understand the universe once again, with renewed vigour.



CRISPRi screening in cultured human astrocytes uncovers distal enhancers controlling genes dysregulated in Alzheimer’s disease

2026 (Nature Neuroscience)

• AstroREG, a resource of enhancer–gene interactions in human primary astrocytes, generated by combining CRISPR inhibition (CRISPRi), single-cell RNA-seq and machine learning.


This study reveals how distal DNA ‘switches’ control gene activity in human astrocytes. Using CRISPRi screens and single-cell RNA-seq, we map enhancer–gene links, highlight Alzheimer’s disease-related targets and introduce a model that predicts additional regulatory interactions.

Nobel Prize in Physics 2024

Thanks to their work from the 1980s and onward, John Hopfield and Geoffrey Hinton have helped lay the foundation for the machine learning revolution that started around 2010.

The development we are now witnessing has been made possible through access to the vast amounts of data that can be used to train networks, and through the enormous increase in computing power. Today’s artificial neural networks are often enormous and constructed from many layers. These are called deep neural networks and the way they are trained is called deep learning.

A quick glance at Hopfield’s article on associative memory, from 1982, provides some perspective on this development. In it, he used a network with 30 nodes. If all the nodes are connected to each other, there are 435 connections. The nodes have their values, the connections have different strengths and, in total, there are fewer than 500 parameters to keep track of. He also tried a network with 100 nodes, but this was too complicated, given the computer he was using at the time. We can compare this to the large language models of today, which are built as networks that can contain more than one trillion parameters (one million millions).
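The 435 figure above is just the number of unordered pairs among 30 nodes, n(n−1)/2. The snippet below checks that arithmetic and adds a minimal Hopfield-style associative memory with Hebbian weights; it is a toy illustration of the idea, not a reconstruction of Hopfield's 1982 implementation.

```python
import numpy as np

def pair_connections(n):
    """Symmetric connections in a fully connected n-node network: n(n-1)/2."""
    return n * (n - 1) // 2

print(pair_connections(30))   # → 435, the figure cited for Hopfield's 1982 network

def train(patterns):
    """Hebbian learning: the weight between two nodes grows when they
    tend to be active together across the stored patterns."""
    W = patterns.T @ patterns / patterns.shape[0]
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, state, steps=5):
    """Repeatedly update node states toward a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Store one +/-1 pattern and recover it from a corrupted copy.
p = np.array([[1, -1, 1, 1, -1, -1, 1, -1]], dtype=float)
W = train(p)
noisy = p[0].copy()
noisy[0] *= -1                # flip one bit
print(np.array_equal(recall(W, noisy), p[0]))  # → True: the memory is restored
```

This is what "associative memory" means in practice: a partial or corrupted input settles into the nearest stored pattern, with every parameter being one of those pairwise connection strengths.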

Language shapes visual processing in both human brains and AI models, study finds

Neuroscientists have been trying to understand how the brain processes visual information for over a century. The development of computational models inspired by the brain’s layered organization, also known as deep neural networks (DNNs), has recently opened exciting new possibilities for research in this area.

By comparing how DNNs and the human brain process information, researchers at Peking University, Beijing Normal University and other institutes in China have shed new light on the underpinnings of visual processing. Their paper, published in Nature Human Behaviour, suggests that language actively shapes how both the brain and multi-modal DNNs process visual information.
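The article does not spell out how such model–brain comparisons are made; one common approach in this literature is representational similarity analysis (RSA), sketched below on mock data. All names and the mock responses are illustrative assumptions, not the paper's actual method or data.

```python
import numpy as np

rng = np.random.default_rng(1)

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between
    the response patterns evoked by each pair of stimuli."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (Pearson, for brevity).
    A high score means the two systems 'see' the stimuli similarly."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# Mock data: 20 stimuli, responses from a DNN layer (50 units) and a
# brain region (80 voxels) that partly share underlying structure.
stimuli = rng.standard_normal((20, 10))
dnn = stimuli @ rng.standard_normal((10, 50))
brain = stimuli @ rng.standard_normal((10, 80)) + 0.5 * rng.standard_normal((20, 80))

print(round(rsa_score(rdm(dnn), rdm(brain)), 2))
```

The appeal of RSA is that it never requires matching individual units to voxels: only the pairwise geometry of stimulus responses is compared, so systems with very different dimensionality can be put side by side.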

New GoBruteforcer attack wave targets crypto, blockchain projects

A new wave of GoBruteforcer botnet malware attacks is targeting databases of cryptocurrency and blockchain projects on exposed servers believed to be configured using AI-generated examples.

GoBruteforcer, also known as GoBrut, is a Golang-based botnet that typically targets exposed FTP, MySQL, PostgreSQL, and phpMyAdmin services.

The malware often relies on compromised Linux servers to scan random public IPs and carry out brute-force login attacks.
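The mechanic behind these attacks is simple dictionary guessing, which is why services configured from copy-pasted example credentials fall so quickly. The sketch below is purely illustrative and runs entirely against a local mock function; it performs no network access, and every name in it is hypothetical.

```python
# Illustration only: why default or example credentials fall quickly to a
# dictionary attack. The "service" here is a local mock function.
COMMON_CREDENTIALS = [
    ("root", "root"), ("admin", "admin"), ("admin", "password"),
    ("postgres", "postgres"), ("user", "123456"),
]

def mock_login(user, password):
    """Stands in for an exposed service configured from a copy-pasted example."""
    return (user, password) == ("admin", "password")

def dictionary_attack(login, wordlist):
    """Try each credential pair in order; return the first that succeeds."""
    for attempt, (user, password) in enumerate(wordlist, start=1):
        if login(user, password):
            return user, password, attempt
    return None

print(dictionary_attack(mock_login, COMMON_CREDENTIALS))
# → ('admin', 'password', 3)
```

Three guesses suffice here, which is the practical lesson: unique passwords and restricting database services to private interfaces remove both the guessable credential and the exposed target.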
