
Quantum reservoir computing peaks at the edge of many-body chaos, study suggests

Reservoir computing is a promising machine learning-based approach for the analysis of data that changes over time, such as weather patterns, recorded speech or stock market trends. Classical reservoir computing techniques are known to perform best at the “edge of chaos,” or in simpler terms, at a “sweet spot” in which the behavior of systems is neither entirely predictable (i.e., order) nor completely unpredictable (i.e., chaos).
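In the classical setting, this "edge of chaos" is usually approached with an echo state network by rescaling the recurrent weight matrix so its spectral radius sits just below 1. The sketch below is a minimal, stdlib-only illustration of that heuristic, not code from the study; the reservoir size, the 0.95 target, and the sine input are arbitrary choices:

```python
# Minimal echo state network sketch (stdlib only). Reservoir size, spectral
# radius target, and input signal are illustrative choices.
import math
import random

random.seed(0)
N = 50  # reservoir size

def rand_matrix(n, m, scale=1.0):
    return [[random.uniform(-scale, scale) for _ in range(m)] for _ in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def spectral_radius(M, iters=100):
    # Power iteration; W is kept symmetric below so this converges to |lambda_max|.
    v = [random.random() for _ in range(len(M))]
    for _ in range(iters):
        v = matvec(M, v)
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    w = matvec(M, v)
    return math.sqrt(sum(x * x for x in w))

# Symmetrize random recurrent weights, then rescale so the spectral radius
# sits just below 1, the usual "edge of chaos" heuristic for reservoirs.
A = rand_matrix(N, N)
W = [[0.5 * (A[i][j] + A[j][i]) for j in range(N)] for i in range(N)]
rho = spectral_radius(W)
W = [[w_ij * 0.95 / rho for w_ij in row] for row in W]

W_in = rand_matrix(N, 1, scale=0.5)

def step(state, u):
    # Tanh reservoir update: x' = tanh(W x + W_in u)
    pre = matvec(W, state)
    return [math.tanh(pre[i] + W_in[i][0] * u) for i in range(N)]

# Drive the reservoir with a sine input and collect the final state.
state = [0.0] * N
for t in range(200):
    state = step(state, math.sin(0.2 * t))

print(round(spectral_radius(W), 3))  # spectral radius now just below 1
```

Well above a spectral radius of 1, the state tends to wander chaotically and earlier inputs are swamped; well below 1, the reservoir forgets inputs too quickly. Tuning toward this boundary is what gives reservoirs their useful memory of temporal data.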

In recent years, some physicists and quantum engineers have been exploring the possibility of realizing a quantum equivalent of classical reservoir computing, known as quantum reservoir computing (QRC). These approaches enable the processing of temporal data and the prediction of events unfolding over time, leveraging high-dimensional quantum states.

Researchers at the University of Tokyo carried out a study investigating how QRC would behave when applied to complex quantum many-body systems, which consist of many interacting quantum particles. Their paper, published in Physical Review Letters, introduces a physics-based framework that could inform the future development of QRC systems.

Nanodevice produces continuous electricity from evaporation

A nanodevice developed at EPFL produces an autonomous, stable current from evaporating saltwater by using heat and light to control the movement of ions and electrons. Previously, researchers in the Laboratory of Nanoscience for Energy Technology (LNET) in EPFL’s School of Engineering reported a platform for studying the hydrovoltaic (HV) effect—a phenomenon that allows electricity to be harvested when fluid is passed over the charged surface of a nanodevice. Their platform consisted of a hexagonal network of silicon nanopillars, the space between which created channels for evaporating fluid samples.

Now the LNET team, led by Giulia Tagliabue, has developed this platform into a hydrovoltaic system with a power output that matches or exceeds similar technologies—with a major advantage. Instead of relying on heat and light to simply boost evaporation, the EPFL system generates current by harnessing heat and light to control the movement of ions in evaporating saltwater, and the flow of electrons in the silicon nanodevice.

“Heat and light imbalances will always affect a hydrovoltaic device, but we have discovered how these can be leveraged to our advantage,” explains LNET researcher Tarique Anwar.

These Billionaires Plan To Bring Self-Driving Tech To Everything That Moves

Applied Intuition’s cofounders are building software that can drive everything from planes to tanks to automobiles. But to expand beyond the company’s $800 million business selling tech for cars, they will have to take on Tesla, Google, Nvidia and a host of startups jostling for pole position in the autonomy race.

‘Learn-to-Steer’ method improves AI’s ability to understand spatial instructions

Researchers from the Department of Computer Science at Bar-Ilan University and from NVIDIA’s AI research center in Israel have developed a new method that significantly improves how artificial intelligence models understand spatial instructions when generating images—without retraining or modifying the models themselves. Image-generation systems often struggle with simple prompts such as “a cat under the table” or “a chair to the right of the table,” frequently placing objects incorrectly or ignoring spatial relationships altogether. The Bar-Ilan research team has introduced a creative solution that allows AI models to follow such instructions more accurately in real time.

The new method, called Learn-to-Steer, works by analyzing the internal attention patterns of an image-generation model, effectively offering insight into how the model organizes objects in space. A lightweight classifier then subtly guides the model’s internal processes during image creation, helping it place objects more precisely according to user instructions. The approach can be applied to any existing trained model, eliminating the need for costly retraining.

The results show substantial performance gains. In the Stable Diffusion SD2.1 model, accuracy in understanding spatial relationships increased from 7% to 54%. In the Flux.1 model, success rates improved from 20% to 61%, with no negative impact on the models’ overall capabilities.
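The published method operates on a diffusion model’s real cross-attention maps, but the core loop (score a spatial relation from attention, then nudge the generator’s internals to reduce that relation loss) can be caricatured in a few lines. The toy below is a hypothetical, stdlib-only sketch, not the authors’ code: it treats each object’s attention as a 1-D map, uses a hinge loss as a stand-in for the lightweight classifier, and steers with finite-difference gradient descent instead of backpropagation:

```python
# Toy "steering" sketch (stdlib only). The 1-D attention maps, hinge margin,
# and finite-difference optimizer are all illustrative simplifications.
import math

SIZE = 16

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def centroid(logits):
    # Expected position under the softmax-normalized attention map.
    probs = softmax(logits)
    return sum(i * p for i, p in enumerate(probs))

def relation_loss(logits_a, logits_b, margin=4.0):
    # "B to the right of A": penalize when centroid(B) - centroid(A) < margin.
    gap = centroid(logits_b) - centroid(logits_a)
    return max(0.0, margin - gap)

def steer(logits_a, logits_b, lr=0.5, steps=200, eps=1e-4):
    # Finite-difference gradient descent on the relation loss, standing in
    # for the backprop-through-attention step a real implementation would use.
    a, b = list(logits_a), list(logits_b)
    for _ in range(steps):
        for vec in (a, b):
            for i in range(SIZE):
                old = vec[i]
                vec[i] = old + eps
                up = relation_loss(a, b)
                vec[i] = old - eps
                down = relation_loss(a, b)
                vec[i] = old - lr * (up - down) / (2 * eps)
        if relation_loss(a, b) == 0.0:
            break
    return a, b

# Start with both objects attending to the same spot, then steer them apart.
a0 = [1.0 if i == 8 else 0.0 for i in range(SIZE)]
b0 = [1.0 if i == 8 else 0.0 for i in range(SIZE)]
a, b = steer(a0, b0)
print(round(centroid(b) - centroid(a), 2))
```

The appeal of this family of approaches is visible even in the toy: the generator's weights are never touched, only its intermediate activations at sampling time, which is why no retraining is needed.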

In defense of artificial suffering

Perhaps our last line of defense.


Philosophical Studies — The ability to suffer, in the case of artificial entities, is often viewed as a moral turning point—once detected, there is no going back, and the moral landscape is irreversibly altered. The presence of entities capable of suffering imposes moral and legal obligations on humans. It is therefore unsurprising that many have urged caution in pursuing artificial suffering, with some even proposing a moratorium. In this paper, however, I argue that the emergence of artificial suffering need not entail moral disaster. On the contrary, I defend its development and contend that it may be a necessary feature of superintelligent robots. I suggest that artificial suffering could be essential for enabling human-like ethics in machines, bridging the retribution gap, and functioning as a control mechanism to mitigate existential risks. Rather than constraining research in this area, I maintain that work on artificial suffering should be actively intensified.

A Layered Self-Supervised Knowledge Distillation Framework for Efficient Multimodal Learning on the Edge

We introduce the Layered Self-Supervised Knowledge Distillation (LSSKD) framework for training compact deep learning models. Unlike traditional methods that rely on pre-trained teacher networks, our approach appends auxiliary classifiers to intermediate feature maps, generating diverse self-supervised knowledge and enabling one-to-one transfer across different network stages. Our method achieves an average improvement of 4.54% over the state-of-the-art PS-KD method and a 1.14% gain over SSKD on CIFAR-100, with a 0.32% improvement on ImageNet compared to HASSKD. Experiments on Tiny ImageNet and CIFAR-100 under few-shot learning scenarios also achieve state-of-the-art results. These findings demonstrate the effectiveness of our approach in enhancing model generalization and performance without the need for large over-parameterized teacher networks. Importantly, at the inference stage, all auxiliary classifiers can be removed, yielding no extra computational cost. This makes our model suitable for deploying small language models on affordable low-computing devices. Owing to its lightweight design and adaptability, our framework is particularly suitable for multimodal sensing and cyber-physical environments that require efficient and responsive inference. LSSKD facilitates the development of intelligent agents capable of learning from limited sensory data under weak supervision.
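In most self-distillation work of this kind, the stage-to-stage transfer is a softened-logit loss between an auxiliary classifier and a deeper stage acting as teacher. The stdlib sketch below illustrates just that one piece, with made-up logits and temperature; the paper's actual losses and stage pairings may differ:

```python
# Toy self-distillation loss (stdlib only). Logits and temperature are
# invented for illustration.
import math

def softmax(logits, T=1.0):
    m = max(logits)
    exps = [math.exp((x - m) / T) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    # T^2 * KL(teacher_T || student_T), the classic soft-label distillation loss.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical logits from auxiliary classifiers at three depths; the deepest
# stage plays teacher for the shallower ones during training.
stages = [
    [1.2, 0.3, -0.5],   # early stage (least confident)
    [2.0, 0.2, -1.1],   # middle stage
    [3.0, 0.1, -2.0],   # deepest stage
]
total = sum(kd_loss(s, stages[-1]) for s in stages[:-1])
print(total > 0)  # True: the stage distributions differ, so KL is positive
```

Because the auxiliary classifiers exist only to produce these training signals, they can be deleted once training ends, which is why the abstract can claim zero extra inference cost.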
