
Terahertz spin waves can be converted into computer signals, study shows

What will the computers of tomorrow look like? Chances are good that spintronics will play a decisive role in the next generation of computers. In spintronics, the intrinsic angular momentum of an electron (the spin) is used to store, process and transmit data. This technology is already in use today, for example in hard drives. However, the scope of what is possible extends much further: more recent approaches aim to use not just individual spins, but entire spin waves made up of as many as hundreds of trillions of spins. Such collective spin excitations are known as magnons. They could enable extremely energy-efficient data transmission, even in the terahertz range.

So far, so good. But how can these spin waves be coupled to today’s technology? “If we develop a concept to perform computer calculations with magnons, it must be compatible with the technology we currently use,” says physicist Davide Bossini from the University of Konstanz. “To reach this goal, you have to convert the spin wave into an electrical charge signal.” This spin-to-charge conversion is one of the major challenges of spintronics.

Most mass spectrometers can process just a few molecules at once: Reengineered prototype does a billion simultaneously

Mass spectrometry is already a powerful tool for determining what kind and how many molecules are present in a given sample. But most instruments still analyze their molecules one or just a few at a time, an approach that is inefficient and costly, and that lets rare but significant molecules fall through the cracks.

A more powerful version of the technology could one day allow scientists to read the full molecular contents of a single cell, track thousands of chemical reactions at once, and ultimately accelerate efforts like drug development.

Now, a new study describes the first big step in that direction by producing a prototype, dubbed MultiQ-IT, that’s capable of handling vast numbers of molecules at once. The findings, published in the journal Science Advances, offer a blueprint for faster, more sensitive instruments that could position mass spectrometry for the kind of transformation that reshaped genomics and computing.

New “Giant Superatoms” Could Solve Quantum Computing’s Biggest Problem

A new quantum system called giant superatoms could protect quantum information and enable entanglement between multiple qubits. The concept, developed by scientists at Chalmers University of Technology in Sweden, merges giant atoms and superatoms to improve stability and scalability for future quantum technologies.

Abstract: Decoding neurodegeneration one cell at a time

https://doi.org/10.1172/JCI199841

As part of the JCI’s Review Series on Neurodegeneration, Olivia Gautier, Thao P. Nguyen & Aaron D. Gitler explore the molecular basis for selective neuronal vulnerability and degeneration and summarize recent advances and applications of single-cell genomic approaches.


How do we decide whether we should choose single-cell or single-nucleus sequencing? This depends on sample types and biological applications. Single-cell sequencing is typically applied to fresh, readily dissociable tissues or cultured cells to study intact cell populations. Because it captures both cytoplasmic and nuclear transcripts, scRNA-seq provides a comprehensive view of cellular gene expression. However, tissue dissociation can induce stress-related transcriptional artifacts and introduce substantial cell-type bias. Large or fragile neurons are often lost during dissociation, whereas smaller cell types, such as astrocytes and oligodendrocytes, tend to be overrepresented. In contrast, single-nucleus sequencing is commonly used for frozen samples or for tissues that are difficult to dissociate, including the brain and spinal cord. Although fresh or fresh-frozen samples are typically used, snRNA-seq is compatible with formalin-fixed, paraffin-embedded (FFPE) samples, enabling the analysis of archived human specimens. A key limitation is that snRNA-seq does not capture cytoplasmic transcripts and is therefore biased toward nuclear, often premature, mRNA species.
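The decision rule described above can be sketched as a simple heuristic. The function name, category strings, and tissue list below are illustrative assumptions, not terminology from the Review:

```python
def choose_protocol(sample_state: str, tissue: str) -> str:
    """Sketch of the scRNA-seq vs. snRNA-seq decision described above.

    sample_state: "fresh", "frozen", or "ffpe"
    tissue:       e.g. "cultured_cells", "blood", "brain", "spinal_cord"
    """
    # Dissociation loses large or fragile neurons and overrepresents glia,
    # so neural tissue is usually profiled at the nucleus level.
    hard_to_dissociate = {"brain", "spinal_cord"}

    if sample_state in ("frozen", "ffpe"):
        # snRNA-seq works on frozen and even FFPE archival material,
        # but captures only nuclear (often premature) transcripts.
        return "snRNA-seq"
    if tissue in hard_to_dissociate:
        return "snRNA-seq"
    # Fresh, readily dissociable samples: full cytoplasmic + nuclear view.
    return "scRNA-seq"

print(choose_protocol("fresh", "cultured_cells"))  # scRNA-seq
print(choose_protocol("frozen", "brain"))          # snRNA-seq
```

In practice the choice also weighs cost, cell-type bias, and whether cytoplasmic transcripts matter for the biological question, so a real decision is rarely this mechanical.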

Spatial transcriptomics does not require tissue dissociation and enables examination of cellular transcriptomes within their native tissue niches. Some spatial transcriptomic technologies are now compatible with FFPE samples, allowing analyses of preserved clinical specimens along with fixed-frozen and fresh-frozen samples. These technologies can be broadly classified into two main categories: imaging-based and sequencing-based (Figure 2B). Imaging-based approaches, like multiplexed error-robust fluorescence in situ hybridization (MERFISH), spatially resolved transcript amplicon readout mapping (STARmap), and 10x Genomics Xenium, rely on probe hybridization and multiplexed imaging to detect and visualize transcripts at high spatial resolution, often achieving single-cell or even subcellular resolution (17, 18). Although whole-transcriptome measurements are possible, MERFISH typically targets predefined gene panels due to the constraints of iterative hybridization and imaging. In contrast, sequencing-based approaches, including NanoString GeoMx and 10x Genomics Visium, capture RNA on spatially barcoded tissue slides or nanobeads followed by next-generation sequencing. These methods generally recover a broader range of transcripts than imaging-based approaches but, in most cases, do not yet achieve true single-cell resolution. Instead, they measure gene expression within spatial “spots” that encompass multiple cells and therefore rely on computational deconvolution to infer cell-type composition. Newer spatial transcriptomic methods, like spatial enhanced resolution omics sequencing (Stereo-seq) and reverse-padlock amplicon-encoding fluorescence in situ hybridization (RAEFISH), are approaching single-cell and single-molecule resolution (19–21).

In this Review, we summarize recent advances and applications of single-cell genomics approaches to study neurodegenerative disorders, including Alzheimer disease (AD), Parkinson disease (PD), amyotrophic lateral sclerosis (ALS), frontotemporal dementia (FTD), and Huntington disease (HD). We focus on how these approaches provide insight into the unique vulnerabilities of specific neuronal populations, define novel disease-associated cellular states, and reveal contributions of non-neuronal cells to disease pathogenesis. We then look to the future, envisioning how these technologies will empower genetic screens to uncover modifiers of neurodegeneration and new therapeutic targets.

Inverse design: A new pathway to custom functional polymers

At a potluck, you ate the best chocolate chip cookie—golden-brown, thick and chewy. Unfortunately, you don’t know who made the cookie to get the recipe from, so you decide to recreate it. Using forward design principles, you might randomly choose a recipe from dozens of options, bake and observe the resulting cookies. If they are too thin, you might start over with a new recipe, add more flour or chill the dough longer and make a new batch. An alternative method is to start from the cookie characteristics you want and ask: What recipe and baking settings will produce that type of cookie? This method is called inverse design.
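The cookie analogy maps directly onto code: forward design evaluates a recipe to get properties, while inverse design starts from target properties and searches the recipe space backward. Below is a minimal sketch in which `bake` is a made-up stand-in for a real simulator or trained surrogate model; all coefficients and parameter ranges are illustrative:

```python
def bake(flour_g: float, chill_min: float, oven_c: float):
    """Hypothetical forward model: recipe -> (thickness_mm, chewiness 0..1)."""
    thickness = 0.02 * flour_g + 0.05 * chill_min - 0.03 * (oven_c - 175)
    chewiness = min(1.0, 0.004 * chill_min + 0.002 * flour_g)
    return thickness, chewiness

def inverse_design(target_thickness: float, target_chewiness: float):
    """Grid-search the recipe space for the closest match to the target.

    Real inverse design replaces this brute-force loop with optimization
    or a generative model, but the logic is the same: property -> recipe.
    """
    best, best_err = None, float("inf")
    for flour in range(200, 401, 25):          # grams of flour
        for chill in range(0, 121, 15):        # minutes of chilling
            for oven in range(160, 201, 10):   # oven temperature, Celsius
                t, c = bake(flour, chill, oven)
                err = (t - target_thickness) ** 2 + (c - target_chewiness) ** 2
                if err < best_err:
                    best, best_err = (flour, chill, oven), err
    return best

# "I want a 12 mm thick, very chewy cookie -- which recipe gets me there?"
print(inverse_design(12.0, 0.9))
```

For polymers, the forward model would predict material properties from monomer composition and processing conditions, and the search would run over chemically meaningful variables instead of baking parameters.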

Nano 3D metallic parts turn out to be surprisingly strong despite defects

Scientists at Caltech have figured out how to precisely engineer tiny three-dimensional (3D) metallic pieces with nanoscale dimensions. The process can work with any metal or metal alloy and yields components of surprising strength despite having a porous and defect-ridden microstructure, making it potentially useful in a wide range of applications, including medical devices, computer chips, and equipment needed for space missions.

The scientists describe their method in a paper published in the journal Nature Communications. The work was completed in the lab of Julia R. Greer, the Ruben F. and Donna Mettler Professor of Materials Science, Mechanics and Medical Engineering at Caltech, and Huajian Gao of Tsinghua University in Beijing.

The researchers use a technique called two-photon lithography that allows them to sequentially build an object of a desired size and shape by carefully controlling the geometry at the level of individual voxels, the smallest distinguishable volumes, or features, in a 3D image. Beginning with a light-sensitive liquid, the scientists use a tightly focused femtosecond laser beam—a femtosecond is 1 quadrillionth of a second—to build a desired shape out of a gel-like material called hydrogel. After infusing the miniature hydrogel sculpture with metallic salts, such as copper nitrate or nickel nitrate, they heat the structure twice in a specialized furnace to produce a shrunken metallic replica of the original shape.

Apple pushes first Background Security Improvements update to fix WebKit flaw

Apple has released its first Background Security Improvements update to fix a WebKit flaw tracked as CVE-2026-20643 on iPhones, iPads, and Macs without requiring a full operating system upgrade.

The CVE-2026-20643 flaw allows malicious web content to bypass the browser’s Same Origin Policy.

Apple says the flaw is a cross-origin issue in the Navigation API that was addressed with improved input validation.
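For context, the Same Origin Policy that this flaw bypasses treats two URLs as the same origin only if their scheme, host, and port all match. A minimal sketch of that comparison (an illustration of the concept, not WebKit's implementation):

```python
from urllib.parse import urlsplit

# Default ports are filled in so "https://example.com" and
# "https://example.com:443" compare as the same origin.
DEFAULT_PORTS = {"http": 80, "https": 443}

def same_origin(url_a: str, url_b: str) -> bool:
    """True if the two URLs share the (scheme, host, port) tuple
    that the Same Origin Policy compares."""
    a, b = urlsplit(url_a), urlsplit(url_b)
    port_a = a.port or DEFAULT_PORTS.get(a.scheme)
    port_b = b.port or DEFAULT_PORTS.get(b.scheme)
    return (a.scheme, a.hostname, port_a) == (b.scheme, b.hostname, port_b)

print(same_origin("https://example.com/a", "https://example.com:443/b"))  # True
print(same_origin("https://example.com", "http://example.com"))           # False
```

A cross-origin bypass means content from one origin can read or act on data belonging to another origin despite this check, which is why browser vendors treat such flaws as high severity.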

Perovskite crystals can host qubits, challenging long-held assumptions

For the first time, researchers have demonstrated that the properties of the perovskite family of materials can be used to create so-called quantum bits. The findings, published in the journal Nature Communications, pave the way for more affordable materials in future quantum computers.

According to the researchers from Linköping University, Sweden, behind the study, few within the field believed it would be possible. The reason is that the atoms in perovskite materials should, in theory, interact so strongly that the qubit would collapse before the calculation could be completed. However, the experiments conducted by the Linköping team show that it works.

“Our findings open up an entirely new research field,” says Yuttapoom Puttisong, associate professor at Linköping University.

Three anesthesia drugs all have the same effect in the brain, MIT researchers find

When patients undergo general anesthesia, doctors can choose among several drugs. Although each of these drugs acts on neurons in different ways, they all lead to the same result: a disruption of the brain’s balance between stability and excitability, according to a new MIT study.

This disruption causes neural activity to become increasingly unstable, until the brain loses consciousness, the researchers found. The discovery of this common mechanism could make it easier to develop new technologies for monitoring patients while they are undergoing anesthesia.
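The study's actual analysis is more sophisticated, but one standard way to quantify the stability-versus-instability balance of a recorded signal is to fit a linear autoregressive model and check how close its eigenvalues come to the unit circle. The sketch below is an assumed illustration of that general idea, not the authors' method:

```python
import numpy as np

def ar_stability(x, order: int = 2) -> float:
    """Fit an AR(order) model x[t] = sum_k a_k * x[t-k] + noise by least
    squares and return the spectral radius of its companion matrix.
    Values well below 1 mean perturbations decay (stable dynamics);
    values approaching 1 mean the dynamics sit near instability."""
    x = np.asarray(x, dtype=float)
    y = x[order:]
    X = np.column_stack([x[order - k : len(x) - k] for k in range(1, order + 1)])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    comp = np.zeros((order, order))          # companion matrix of the AR model
    comp[0, :] = coeffs
    comp[1:, :-1] = np.eye(order - 1)
    return float(max(abs(np.linalg.eigvals(comp))))

def simulate_ar2(a1: float, a2: float, n: int = 5000, seed: int = 0):
    """Generate a toy AR(2) signal standing in for an EEG trace."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()
    return x

# A strongly damped process vs. one much closer to the edge of instability.
print(ar_stability(simulate_ar2(0.5, -0.3)))   # well below 1
print(ar_stability(simulate_ar2(1.2, -0.25)))  # close to 1
```

Under this kind of measure, deepening anesthesia would show up as the stability index drifting, which is the sort of single scalar an automated dosing controller could track.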

“What’s exciting about that is the possibility of a universal anesthesia-delivery system that can measure this one signal and tell how unconscious you are, regardless of which drugs they’re using in the operating room,” says Earl Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.

Miller, Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience Emery Brown, and their colleagues are now working on an automated control system for delivery of anesthesia drugs, which would measure the brain’s stability using EEG and then automatically adjust the drug dose. This could help doctors ensure that patients stay unconscious throughout surgery without becoming too deeply unconscious, which can have negative side effects following the procedure.

Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study, which appears today in Cell Reports. MIT graduate student Adam Eisen is the paper’s lead author.


