
A Wearable Robot That Learns

Having lived with an ALS diagnosis since 2018, Kate Nycz can tell you firsthand what it’s like to slowly lose motor function for basic tasks. “My arm can get to maybe 90 degrees, but then it fatigues and falls,” the 39-year-old said. “To eat or do a repetitive motion with my right hand, which was my dominant hand, is difficult. I’ve mainly become left-handed.”

People like Nycz who live with a neurodegenerative disease such as ALS, or who have had a stroke, often have impaired movement of the shoulder, arm or hands, which prevents them from performing daily tasks like tooth-brushing, hair-combing or eating.

For the last several years, Harvard bioengineers have been developing a soft, wearable robot that not only provides movement assistance for such individuals but could even augment therapies to help them regain mobility.

But no two people move exactly the same way. Physical motions are highly individualized, especially for the mobility-impaired, making it difficult to design a device that works for many different people.

It turns out advances in machine learning can create a more personal touch. Researchers in the John A. Paulson School of Engineering and Applied Sciences (SEAS), together with physician-scientists at Massachusetts General Hospital and Harvard Medical School, have upgraded their wearable robot to respond to an individual user's exact movements. The result is more personalized assistance that could give users better, more controlled support for daily tasks.


Disordered-guiding photonic chip enabled high-dimensional light field detection

Intensity, polarization, and spectrum of light, as distinct dimensional characteristics, provide a comprehensive understanding of light-matter interaction and are crucial across nearly all domains of optical science and technology [1–4]. For instance, polarization information [5] is critical for determining material composition and surface texture, whereas spectral analysis is instrumental in medical diagnosis and wavelength-division optical communication [6]. As modern technology rapidly advances, the demand for comprehensive detection of high-dimensional light fields continues to grow [7, 8].

Conventional detection devices typically measure either the spectrum or the polarization of input light, sacrificing valuable information from the other dimensions. A common solution is to incorporate multiple discrete diffraction elements and optical filters to separately distinguish light with different polarizations and wavelengths [9–12]. However, this leads to bulky and time-consuming systems. Recently, several integrated high-dimensional detectors based on optical metasurfaces [13] have been proposed; a typical representative relies on mapping different dimensional information into distinct locations, using position and intensity distributions for light field detection [14, 15]. However, as the number of detection parameters increases, the signal crosstalk between different information channels at different spatial locations becomes pronounced [16–18]. Another type of detector, based on computational reconstruction, maps the light field into a series of outputs, encoding the entire high-dimensional information rather than isolating individual dimensions [19–22]. Nevertheless, owing to the limited internal degrees of freedom of the encoding devices, these systems are generally restricted to detecting light fields at a few values with low resolution in each dimension, such as a limited number of polarization and wavelength channels [23, 24]. Additionally, most of them rely on commercial cameras, inevitably requiring numerous detector arrays [25]. Consequently, achieving fully high-dimensional characterization of an arbitrary complex light field with a compact and efficient system remains challenging.

In this work, we propose and demonstrate an on-chip high-dimensional detection system capable of characterizing a broadband spectrum along with arbitrarily varying full-Stokes polarization in a single-shot measurement. The high-dimensional input is encoded into multi-channel intensities by the uniquely designed disordered-guiding chip and decoded by a multilayer perceptron (MLP) neural network (Fig. 1). The core disordered region introduces complex interference between two separate orthogonal polarization components and multiple scattering that enhances the dispersion effect, enabling rich polarization and spectral responses. Meanwhile, the surrounding guiding region, based on inverse design, directs the input light to the on-chip photodetectors (PDs), improving the transmittance and detection efficiency. With the neural network for decoding, we achieve reconstruction of full-Stokes polarization and a broadband spectrum from a single measurement, with a spectral sensitivity of 400 pm, an average spectral error of 0.083, and a polarization error of 1.2°. Furthermore, we demonstrate a high-dimensional imaging system that exhibits superior imaging and recognition capabilities compared with conventional single-dimensional detectors. This demonstration holds promising potential for future imaging and sensing applications.
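The encode-then-decode scheme described above can be illustrated with a toy sketch: a fixed (but unknown to the decoder) chip response maps the high-dimensional light-field state to a handful of photodetector intensities, and an MLP is trained on calibration pairs to invert that map. The 16 PD channels, 8 spectral bins, linear forward model, and scikit-learn `MLPRegressor` are all illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 Stokes parameters + 8 spectral bins, read out on 16 PDs.
n_samples, n_state, n_pds = 2000, 12, 16
targets = rng.uniform(-1.0, 1.0, size=(n_samples, n_state))

# Stand-in for the chip's fixed optical response (unknown to the decoder).
transfer = rng.normal(size=(n_state, n_pds))
intensities = targets @ transfer + 0.01 * rng.normal(size=(n_samples, n_pds))

# Train the MLP decoder on calibration pairs, then reconstruct held-out fields.
decoder = MLPRegressor(hidden_layer_sizes=(64,), max_iter=3000, random_state=0)
decoder.fit(intensities[:1500], targets[:1500])
recon = decoder.predict(intensities[1500:])
mae = np.abs(recon - targets[1500:]).mean()  # average reconstruction error
```

The real device's response is nonlinear in the underlying field and far richer than this linear toy, which is precisely why a learned decoder is used instead of a matrix inverse.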

Machine Learning Interatomic Potentials in Computational Materials

Machine learning interatomic potentials (MLIPs) have become an essential tool to enable long-time scale simulations of materials and molecules at unprecedented accuracies. The aim of this collection is to showcase cutting-edge developments in MLIP architectures, data generation techniques, and innovative sampling methods that push the boundaries of accuracy, efficiency, and applicability in atomic-scale simulations.
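The core MLIP idea, regressing an energy surface from reference calculations so that simulations can run far faster than the reference method, can be sketched in miniature. Here a Lennard-Jones dimer stands in for the ab initio ground truth (a deliberate toy, not any potential from this collection), and a small neural network learns energy as a function of interatomic distance.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lj_energy(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy: the toy 'ab initio' reference."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

rng = np.random.default_rng(1)
r_train = rng.uniform(0.95, 2.5, size=2000)   # sampled dimer separations
e_train = lj_energy(r_train)

# Represent the distance as (r, 1/r) so the steep repulsive wall is easier to fit.
features = lambda r: np.column_stack([r, 1.0 / r])
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=1)
model.fit(features(r_train), e_train)

r_test = np.linspace(1.0, 2.4, 50)
mae = np.abs(model.predict(features(r_test)) - lj_energy(r_test)).mean()
```

Production MLIPs replace the raw distance with symmetry-invariant descriptors of each atom's full neighborhood and train on forces as well as energies, but the fit-a-surrogate structure is the same.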

New AI model advances fusion power research by predicting the success of experiments

Practical fusion power that can provide cheap, clean energy could be a step closer thanks to artificial intelligence. Scientists at Lawrence Livermore National Laboratory have developed a deep learning model that accurately predicted the results of a nuclear fusion experiment conducted in 2022. Accurate predictions can help speed up the design of new experiments and accelerate the quest for this virtually limitless energy source.

In a paper published in Science, researchers describe how their AI model predicted, with 74% probability, that a small 2022 fusion experiment at the National Ignition Facility (NIF) would achieve ignition. This is a significant advance, as the model was able to cover more parameters with greater precision than traditional supercomputer simulations.
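In spirit, "predicting the success of experiments" means mapping a proposed experiment's design parameters to a probability of ignition. The schematic below is entirely hypothetical: the two features, the synthetic labeling rule, and the logistic-regression model are illustrative stand-ins, not the LLNL team's actual deep learning model or inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
laser_energy = rng.uniform(1.8, 2.2, size=n)     # MJ, illustrative range
capsule_quality = rng.uniform(0.0, 1.0, size=n)  # hypothetical quality score

X = np.column_stack([laser_energy, capsule_quality])
# Synthetic rule: ignition when energy and capsule quality are jointly high.
ignited = (laser_energy * capsule_quality > 1.4).astype(int)

clf = LogisticRegression().fit(X[:800], ignited[:800])
probs = clf.predict_proba(X[800:])[:, 1]  # P(ignition) for held-out designs
accuracy = (clf.predict(X[800:]) == ignited[800:]).mean()
```

The practical payoff is the same as in the article: once such a model is trained, scoring thousands of candidate designs takes seconds, whereas simulating each one on a supercomputer does not.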

Currently, nuclear power comes from nuclear fission, which generates energy by splitting atoms. However, it can produce radioactive waste that remains dangerous for thousands of years. Fusion generates energy by fusing atoms, similar to what happens inside the sun. The process is safer and does not produce any long-term radioactive waste. While it is a promising energy source, it is still a long way from being a viable commercial technology.

Knitted textile metasurfaces allow soft robots to morph and camouflage on demand

Nature, and particularly humans and other animals, has always been among roboticists' primary sources of inspiration. In fact, many existing robots physically resemble specific animals or are engineered to tackle tasks by emulating the actions, movements and behaviors of particular species.

One innate ability of some animals that has so far seldom been replicated in robots is shape morphing and camouflage. Some living organisms, including certain insects, octopuses and chameleons, can reversibly change their appearance, form and shape in response to their surroundings, whether to hide from predators, to manipulate objects or simply while moving through particular environments.

Researchers at Jiangnan University, Technical University of Dresden, Laurentian University and the Shanghai International Fashion Education Center recently designed new flexible and programmable metasurfaces that could be used to develop robots exhibiting similar morphing and camouflaging capabilities. These materials, introduced in a paper published in Advanced Fiber Materials, essentially consist of knitted structures that can be carefully engineered by adapting the geometric arrangement of their underlying interlaced yarn loops.
