
Association of blood-based DNA methylation of lncRNAs with Alzheimer’s disease diagnosis

DNA methylation has shown great potential in Alzheimer’s disease (AD) blood diagnosis. However, the ability of long non-coding RNAs (lncRNAs), which can be modified by DNA methylation, to serve as noninvasive biomarkers for AD diagnosis remains unclear.

We performed logistic regression analysis of DNA methylation data from the blood of patients with AD compared with normal controls to identify epigenetically regulated (ER) lncRNAs. Using five machine learning algorithms, we prioritized ER lncRNAs associated with AD diagnosis. An AD blood diagnostic model was constructed based on lncRNA methylation in the Australian Imaging, Biomarkers and Lifestyle (AIBL) cohort and verified in two large blood-based studies: the European collaboration for the discovery of novel biomarkers for Alzheimer’s disease (AddNeuroMed) and the Alzheimer’s Disease Neuroimaging Initiative (ADNI). In addition, the potential biological functions and clinical associations of the lncRNAs were explored, and their neuropathological roles in AD brain tissue were estimated via cross-tissue analysis.

We characterized the ER lncRNA landscape in AD blood, which is strongly related to AD occurrence and progression. Fifteen ER lncRNAs were prioritized to construct an AD blood diagnostic model and nomogram. The receiver operating characteristic (ROC) curve and the decision and calibration curves showed that the model has good predictive performance. We found that the lncRNAs and their targets were correlated with AD clinical features. Moreover, cross-tissue analysis revealed that the lncRNA ENSG0000029584 plays both diagnostic and neuropathological roles in AD.
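The kind of analysis described above can be sketched with a few lines of scikit-learn. This is an illustrative toy, not the study's actual pipeline or data: the fifteen "methylation" features and the case/control labels below are synthetic, with AD cases given slightly shifted methylation values so the logistic-regression model has a signal to find, and the ROC AUC plays the role of the study's diagnostic evaluation.

```python
# Illustrative sketch only: logistic-regression diagnosis from synthetic
# "lncRNA methylation" features, evaluated with ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_subjects, n_lncrnas = 400, 15             # 15 prioritized ER lncRNAs, as in the text
y = rng.integers(0, 2, n_subjects)          # 0 = control, 1 = AD (synthetic labels)
# Beta-distributed methylation values in [0, 1], shifted slightly for "AD" subjects
X = rng.beta(2, 2, (n_subjects, n_lncrnas)) + 0.15 * y[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.2f}")
```

In the real study the model was trained on one cohort (AIBL) and validated on independent ones (AddNeuroMed, ADNI); the held-out test split here is only a stand-in for that external validation.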

Humans and artificial neural networks exhibit some similar patterns during learning

Past psychology and behavioral science studies have identified various ways in which people’s acquisition of new knowledge can be disrupted. One of these, known as interference, occurs when new information that a person is learning makes it harder to correctly recall knowledge acquired earlier.

Interestingly, a similar tendency was also observed in artificial neural networks (ANNs), computational models inspired by biological neurons and the connections between them. In ANNs, interference can manifest as so-called catastrophic forgetting, a process via which models “unlearn” specific skills or information after they are trained on a new task.

In some other instances, knowledge acquired in the past can instead help humans or ANNs to learn how to complete a new task. This phenomenon, known as “transfer,” entails the application of existing knowledge or skills to a novel task or problem.

Pluribus: The Terrifying 4-Step Plan to Devour the Universe

This video explains the leading theory about the origins of Pluribus and the hive mind’s ultimate purpose. Its terrifying plan unfolds in 4 steps. If you’re fascinated by hard sci-fi, the Dark Forest Hypothesis and alien civilizations, then this deep dive is for you.

This is a commentary video about the Pluribus TV series streaming on Apple TV.

Chapters:
00:27 Step 1 — The Joining.
01:38 Step 2 — The Megastructure Antenna.
02:50 Step 3 — Interstellar Hive Mind.
04:10 Step 4 — The Universal Mind.

Footage:
Produced in part with SpaceEngine PRO © Cosmographic Software LLC.
Some elements in this video are also made with the help of artificial intelligence.


World’s first fast-neutron nuclear reactor to power AI data centers

French startup Stellaria secures its first power reservation from Equinix for Stellarium, the world’s first fast-neutron reactor that reduces nuclear waste.

The agreement will allow Equinix data centers to leverage the reactor’s energy autonomy, supporting sustainable, decarbonized operations and powering AI capabilities with clean nuclear energy.

The Stellarium reactor, proposed by Stellaria, is a fourth-generation fast-neutron molten-salt design that uses liquid chloride salt fuel and is engineered to operate on a closed fuel cycle.

TACC’s “Horizon” Supercomputer Sets The Pace For Academic Science

As we expected, the “Vista” supercomputer that the Texas Advanced Computing Center installed last year as a bridge between the current “Stampede-3” and “Frontera” production system and its future “Horizon” system coming next year was indeed a precursor of the architecture that TACC would choose for the Horizon machine.

What TACC does – and doesn’t do – matters because, as the flagship datacenter for academic supercomputing at the National Science Foundation, the center sets the pace for those HPC organizations that need to embrace AI and that have not only large jobs requiring an entire system to run (so-called capability-class machines) but also a wide diversity of smaller jobs that need to be stacked up and pushed through the system (making it also a capacity-class system). As the prior six major supercomputers installed at TACC aptly demonstrate, you can have the best of both worlds, although you do have to make different architectural choices (based on technology and economics) to accomplish what is arguably a tougher set of goals.

Some details of the Horizon machine were revealed at the SC25 supercomputing conference last week, which we have been mulling over, but there are still a lot of things that we don’t know. The Horizon that will be fired up in the spring of 2026 is a bit different than we expected, with the big change being a downshift from an expected 400 petaflops of peak FP64 floating point performance to 300 petaflops. TACC has not explained the difference, but it might have something to do with the increasing costs of GPU-accelerated systems. As far as we know, the budget for the Horizon system, which was set in July 2024 and which includes facilities rental from Sabey Data Centers as well as other operational costs, is still $457 million. (We are attempting to confirm this as we write, but in the wake of SC25 and ahead of the Thanksgiving vacation, it is hard to reach people.)

Google Quantum AI realizes three dynamic surface code implementations

Quantum computers are computing systems that process information by leveraging quantum mechanical effects. These computers rely on qubits (i.e., the quantum equivalent of bits), which can store information in a superposition of states, as opposed to the binary states (0 or 1) of classical bits.

While quantum computers could tackle some computational and optimization problems faster and more effectively than classical computers, they are also inherently more prone to errors. This is because qubits are easily perturbed by disturbances from their surrounding environment, also referred to as noise.

Over the past decades, quantum engineers and physicists have been trying to develop approaches to correct noise-related errors, known as quantum error correction (QEC) techniques. While some of these QEC codes have achieved promising results in small-scale tests, reliably implementing them on real circuits is often challenging.
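The core idea behind QEC codes, including the surface code, can be illustrated classically with the simplest possible example: a three-bit repetition code. Encoding one logical bit redundantly and decoding by majority vote corrects any single bit flip, so the logical error rate (roughly 3p² for physical flip probability p) beats the raw physical rate when p is small. Real surface codes are far more elaborate, correcting both bit-flip and phase-flip errors on a 2D qubit lattice, but the redundancy principle is the same.

```python
# Classical illustration of error correction: a 3-bit repetition code
# corrects any single bit flip via majority vote.
import random

def encode(bit):
    return [bit, bit, bit]                 # redundant encoding of one logical bit

def decode(bits):
    return int(sum(bits) >= 2)             # majority vote

def noisy(bits, p, rng):
    return [b ^ (rng.random() < p) for b in bits]   # flip each bit with prob p

rng = random.Random(42)
p = 0.05                                   # physical bit-flip probability
trials = 20000
failures = sum(decode(noisy(encode(0), p, rng)) != 0 for _ in range(trials))
logical_rate = failures / trials
print(f"physical error rate {p}, logical error rate {logical_rate:.4f}")
```

The decoder only fails when two or more of the three bits flip, which is why the measured logical rate comes out well below the 5% physical rate.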

Tiny reconfigurable robots can help manage carbon dioxide levels in confined spaces

Vehicles and buildings designed to enable survival in extreme environments, such as spacecraft, submarines and sealed shelters, heavily rely on systems for the management of carbon dioxide (CO2). These are technologies that can remove and release CO2, ensuring that the air remains breathable for a long time.

Most existing systems for the capture and release of CO2 consume a lot of energy, as they rely on materials that need to be heated to high temperatures to release the gas again after capturing it. Some engineers have thus been trying to devise more energy-efficient methods to manage CO2 in confined spaces.

Researchers at Guangxi University in China have developed new reconfigurable micro/nano-robots that can reversibly capture CO2 at significantly lower temperatures than those required by currently used carbon management systems.

BrainBody-LLM algorithm helps robots mimic human-like planning and movement

Large language models (LLMs), such as the model underpinning OpenAI’s ChatGPT platform, are now widely used to tackle a wide range of tasks, from sourcing information to generating text in different languages and even code. Many scientists and engineers have also started using these models to conduct research or advance other technologies.

In the context of robotics, LLMs have been found to be promising for the creation of robot policies derived from a user’s instructions. Policies are essentially “rules” that a robot needs to follow to correctly perform desired actions.

Researchers at NYU Tandon School of Engineering recently introduced a new algorithm called BrainBody-LLM, which leverages LLMs to plan and refine the execution of a robot’s actions. The new algorithm, presented in a paper published in Advanced Robotics Research, draws inspiration from how the human brain plans actions and fine-tunes the body’s movements over time.
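The plan-then-refine idea described above can be sketched as a closed loop between a planner ("brain") and a controller ("body"). Everything below is hypothetical: the actual BrainBody-LLM interfaces are not described in this summary, and `plan_steps` and `execute_step` are stand-ins for an LLM planner and a robot executor, with a stub failure wired in to trigger one round of refinement.

```python
# Hypothetical plan-and-refine loop (not the paper's actual algorithm):
# a planner proposes steps, the executor reports failures, and the planner
# revises the plan using that feedback.
def plan_steps(goal, feedback=None):
    # Stand-in for an LLM planner; a real system would prompt a model here.
    steps = ["move_to(object)", "grasp(object)", "move_to(bin)", "release()"]
    if feedback:                            # "brain": revise plan from body feedback
        steps.insert(steps.index(feedback["failed_step"]), "open_gripper()")
    return steps

def execute_step(step, attempt):
    # Stub "body": the grasp fails on the first attempt to force refinement.
    return not (step == "grasp(object)" and attempt == 0)

def run(goal, max_attempts=3):
    feedback = None
    for attempt in range(max_attempts):
        plan = plan_steps(goal, feedback)
        for step in plan:
            if not execute_step(step, attempt):
                feedback = {"failed_step": step}
                break                       # report failure back to the planner
        else:
            return plan                     # every step succeeded
    return None

final_plan = run("put the object in the bin")
print(final_plan)
```

The point of the loop is the feedback path: execution errors flow back into planning, mirroring how the brain is thought to fine-tune the body's movements over repeated attempts.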

Researchers pioneer pathway to mechanical intelligence by breaking symmetry in soft composite materials

A research team has developed soft composite systems with highly programmable, asymmetric mechanical responses. By integrating “shear-jamming transitions” into compliant polymeric solids, this innovative work enhances key material functionalities essential for engineering mechano-intelligent systems—a major step toward the development of next-generation smart materials and devices.

The work is published in the journal Nature Materials.

In engineering fields such as soft robotics, synthetic tissues, and flexible electronics, materials that exhibit direction-dependent responses to external stimuli are crucial for realizing intelligent functions.

Intelligent photodetectors ‘sniff and seek’ like retriever dogs to recognize materials directly from light spectra

Researchers at the University of California, Los Angeles (UCLA), in collaboration with UC Berkeley, have developed a new type of intelligent image sensor that can perform machine-learning inference during the act of photodetection itself.

Reported in Science, the breakthrough redefines how spectral imaging, machine vision and AI can be integrated within a single semiconductor device.

Traditionally, spectral cameras capture a dense stack of images, each image corresponding to a different wavelength, and then transfer this large dataset to digital processors for computation and scene analysis. This workflow, while powerful, creates a severe bottleneck: the hardware must move and process massive amounts of data, which limits speed, power efficiency, and the achievable spatial–spectral resolution.
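The scale of that bottleneck is easy to see with a back-of-envelope calculation. The numbers below are illustrative assumptions, not figures from the paper: even a modest 1-megapixel scene sampled across 300 spectral bands at 16 bits produces hundreds of megabytes per frame, and video rates push the raw data stream into the tens of gigabytes per second.

```python
# Back-of-envelope data volume for a conventional spectral-imaging stack
# (assumed parameters, for illustration only).
width, height = 1000, 1000        # 1-megapixel scene
bands = 300                       # spectral channels per pixel
bytes_per_sample = 2              # 16-bit readout

bytes_per_frame = width * height * bands * bytes_per_sample
print(f"{bytes_per_frame / 1e6:.0f} MB per frame")          # 600 MB
print(f"{30 * bytes_per_frame / 1e9:.0f} GB/s at 30 fps")   # 18 GB/s
```

Performing inference during photodetection sidesteps this by never materializing the full spectral stack in the first place.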
