Key protein can restore aging neural stem cells’ ability to regenerate

Researchers at the Yong Loo Lin School of Medicine, National University of Singapore (NUS Medicine), have found that a key protein can help regenerate neural stem cells, potentially counteracting the aging-associated decline in neuronal production in the brain.

Published in Science Advances, the study identified a transcription factor in the brain, cyclin D-binding myb-like transcription factor 1 (DMTF1), as a critical driver of neural stem cell function during the aging process. Transcription factors are proteins that regulate genes to ensure that they are expressed correctly in the intended cells.

The study, led by Assistant Professor Ong Sek Tong Derrick and first author Dr. Liang Yajing, both from the Department of Physiology and the Healthy Longevity Translational Research Program at NUS Medicine, sought to identify biological factors that influence the age-related decline of neural stem cell function and to guide the development of therapeutic approaches that mitigate the adverse effects of neurological aging.

Peripheral neuropathy protection by mitochondrial transfer from glia to neurons

For millions living with nerve pain, even a light touch can feel unbearable. Scientists have long suspected that damaged nerve cells falter because their energy factories, the mitochondria, don’t function properly.

Now research published in Nature suggests a way forward: supplying healthy mitochondria to struggling nerve cells.

Using human tissue and mouse models, researchers found that replenishing mitochondria significantly reduced pain tied to diabetic neuropathy and chemotherapy-induced nerve damage. In some cases, the relief lasted up to 48 hours.

Instead of masking symptoms, the approach could fix what the team sees as the root problem — restoring the energy flow that keeps nerve cells healthy and resilient.

“By giving damaged nerves fresh mitochondria — or helping them make more of their own — we can reduce inflammation and support healing,” said the study’s senior author. “This approach has the potential to ease pain in a completely new way.”

The work highlights a previously undocumented role for satellite glial cells, which appear to deliver mitochondria to sensory neurons through tiny channels called tunnelling nanotubes.

When this mitochondrial handoff is disrupted, nerve fibers begin to degenerate — triggering pain, tingling and numbness, often in the hands and feet, the distal ends of the nerve fibers.

Plant Discovery Could Transform How Medicines Are Made

Plants produce protective chemicals called alkaloids as part of their natural defenses. People have long used these compounds in pain-relief medicines and treatments for various diseases, and alkaloids include familiar everyday substances such as caffeine and nicotine.

Scientists want to learn exactly how plants build alkaloids. With that knowledge, they hope to create new and improved medicinal compounds faster, at lower cost, and with less harm to the environment.

In a study at the University of York, researchers examined a plant called Flueggea suffruticosa, which makes an especially strong alkaloid known as securinine. As they traced how securinine is produced, the team found a surprise: a key step depends on a gene that resembles bacterial genes more than typical plant genes.

Enormous freshwater reservoir discovered off the East Coast may be 20,000 years old and big enough to supply NYC for 800 years

“The important part was we collected all the samples we need to address our primary questions,” Dugan said. “When we’re done drilling and we pull our equipment out, the holes collapse back in and seal themselves up.”

Now, scientists are studying the reservoir in finer detail, including any microbes, rare earth elements, pore space — which can help researchers better estimate the reservoir’s size — and the age of the sediments, which will help narrow down when it formed. More definitive results about how and when the reservoir formed are expected in about one month’s time, Dugan said.
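
For scale, the headline’s 800-year figure implies an enormous volume. Here is a minimal back-of-envelope sketch; the roughly 1 billion gallons per day for New York City’s water use is a commonly cited estimate and our assumption, not a number from the researchers:

```python
# Rough check of the headline's "supply NYC for 800 years" claim.
# The daily-use figure below is a commonly cited estimate and our
# assumption, not a number from the expedition.

GAL_PER_DAY = 1.0e9        # assumed: New York City's approximate daily water use
YEARS = 800                # the headline figure
M3_PER_GAL = 3.785e-3      # cubic meters per US gallon

total_gal = GAL_PER_DAY * 365 * YEARS
total_km3 = total_gal * M3_PER_GAL / 1e9   # m^3 -> km^3
print(f"{total_gal:.2e} gallons, roughly {total_km3:.0f} cubic kilometers")
# -> 2.92e+14 gallons, roughly 1105 cubic kilometers
```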

“Our goal is to provide an understanding of the system so if and when somebody needs to use it, they have information to start from, rather than recreating information or making an ill-informed choice,” he said.

Meet the new biologists treating LLMs like aliens

How large is a large language model? Think about it this way.

In the center of San Francisco there’s a hill called Twin Peaks from which you can view nearly the entire city. Picture all of it—every block and intersection, every neighborhood and park, as far as you can see—covered in sheets of paper. Now picture that paper filled with numbers.

That’s one way to visualize a large language model, or at least a medium-size one: Printed out in 14-point type, a 200-billion-parameter model, such as GPT-4o (released by OpenAI in 2024), could fill 46 square miles of paper—roughly enough to cover San Francisco. The largest models would cover the city of Los Angeles.
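
As a sanity check on that figure, here is a minimal sketch. Every typographic assumption in it (characters per printed parameter, glyph width, line spacing) is ours, not the article’s; the point is only that 200 billion printed numbers plausibly land in the tens of square miles:

```python
# Back-of-envelope check of the "46 square miles" figure. The typographic
# numbers below are assumptions chosen for illustration, not the article's.

PARAMS = 200e9             # parameters in a GPT-4o-class model
CHARS_PER_NUM = 24         # assumed: full-precision float plus a separator
PT = 1 / 72                # one typographic point, in inches
CHAR_W = 0.6 * 14 * PT     # assumed monospace glyph width at 14 pt, in inches
LINE_H = 1.5 * 14 * PT     # assumed line height with leading, in inches

area_sq_in = PARAMS * CHARS_PER_NUM * CHAR_W * LINE_H
SQ_IN_PER_SQ_MI = (5280 * 12) ** 2         # square inches in a square mile
print(f"{area_sq_in / SQ_IN_PER_SQ_MI:.0f} square miles")
# -> 41 square miles: the same ballpark as the article's 46
```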

We now coexist with machines so vast and so complicated that nobody quite understands what they are, how they work, or what they can really do—not even the people who help build them. “You can never really fully grasp it in a human brain,” says Dan Mossing, a research scientist at OpenAI.

That’s a problem. Even though nobody fully understands how it works—and thus exactly what its limitations might be—hundreds of millions of people now use this technology every day. If nobody knows how or why models spit out what they do, it’s hard to get a grip on their hallucinations or set up effective guardrails to keep them in check. It’s hard to know when (and when not) to trust them.

Whether you think the risks are existential—as many of the researchers driven to understand this technology do—or more mundane, such as the immediate danger that these models might push misinformation or seduce vulnerable people into harmful relationships, understanding how large language models work is more essential than ever.

