
WORLDCHANGING Space Energy Supercharges AI! What it means for Nvidia, Tesla and Other AI Companies

Elon Musk plans to launch solar-powered AI satellites that could provide a nearly limitless source of energy to supercharge AI processing capacity, potentially disrupting traditional energy production and benefiting companies like Nvidia and Tesla.

Questions to inspire discussion.

Space Solar Power Economics.

🚀 Q: What’s the projected cost trajectory for space-based solar power?
A: SpaceX could achieve $10 per watt for space solar by 2030–2032, down from a previously estimated $100 per watt, with an ultimate target of $1 per watt for operational systems, requiring 3–4 orders of magnitude of cost reduction through Wright’s Law.

💰 Q: How much would launching 1 terawatt of space solar cost?
A: Launching 1 terawatt of space solar power would require $1 trillion in launch costs alone, not including manufacturing and operational expenses.

⚡ Q: What energy advantage does space solar have over ground-based systems?
A: Space solar plants could generate 10x more energy than ground-based sources by operating 24/7 at roughly double the intensity, with each plant equivalent to a nuclear power plant in output.
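Wright’s Law says unit cost falls by a fixed fraction (the learning rate) with every doubling of cumulative production. A quick sketch of what the $100/W-to-$1/W trajectory implies; the 20% learning rate below is an illustrative assumption, not a SpaceX figure:

```python
import math

def doublings_needed(cost_start, cost_target, learning_rate):
    # Wright's Law: cost after n doublings = cost_start * (1 - learning_rate) ** n
    # Solve for n given a target cost.
    return math.log(cost_target / cost_start) / math.log(1 - learning_rate)

# Illustrative: $100/W -> $1/W at an assumed 20% learning rate
n = doublings_needed(100.0, 1.0, 0.20)
print(f"doublings of cumulative production needed: {n:.1f}")
# cumulative production must grow by a factor of 2**n
print(f"production scale-up factor: {2 ** n:,.0f}")
```

At a 20% learning rate, two orders of magnitude of cost reduction takes roughly 20 doublings, i.e. cumulative production growing by a factor of over a million, which is why launch cadence matters as much as panel cost.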

SpaceX Launch Capacity and Timeline.

How AI Crossed the Limits of Human Intelligence

When I look back at how computing started, I remember a time when I could see the entire problem in my head and simply tell the machine exactly what to do. That world no longer exists. We allowed our systems to make their own decisions, and with that, we crossed into a new era of autonomy.

In this talk, I explain how AI moved beyond imitation and began outperforming us in ways we once believed were uniquely human. I walk through the early intelligence tests, the exponential doubling of capabilities, and the moment we touched the threshold of AGI. Most importantly, I explore what this means for us as a species.

I am neither optimistic nor pessimistic. I am realistic. The coming years will be challenging before they become extraordinary.
And the outcome will depend far more on humanity than on the machines we’ve created.

In this video I talk about:
• how we shifted from rule-based programming to autonomous decision-making.
• the first AI IQ and capability tests and what they revealed.
• why AI abilities doubled roughly every 5.7 months.
• the early evidence of AGI and what it truly represents.
• the meaning of the singularity and why we’re already feeling its effects.
• the two eras ahead: augmented intelligence and machine supremacy.
• why humanity’s biggest risk is not AI’s intelligence, but our own stupidity.
• how we can guide this transition wisely instead of fearfully.

ENHANCED HUMANS EXIST: How BioViva Is Quietly Upgrading Humanity

Genetic engineering and human enhancement are no longer science fiction — they’re here right now. In this episode of Longevity Science News, we explore the rise of gene therapy, anti-aging biotechnology, and the first wave of GMO Humans using real genetic enhancements to increase muscle, extend telomeres, boost IQ, and slow biological aging.

If you’re interested in longevity, life extension, biohacking, genetic modification, or cutting-edge anti-aging research, this video breaks down everything you need to know about the future of human evolution — and the people already jumping in.

HUME BODY ANALYZER:
Use Code: LONGEVITY for up to 50% OFF
https://humehealth.com//discount/LONG…

FEATURED: BioViva Keynote by Liz Parrish
Watch the full keynote here:
• The First Person to Take Gene Therapy for…

This talk covers viral vectors, telomere extension, muscle-growth gene therapies, cognitive enhancement, dementia treatment, and the global expansion of experimental genetic clinics.

Chapters:
00:00 – Cold Open — FDA Gene Cures
00:35 – Liz Parrish & BioViva
01:35 – Sebastian A. Brunemeier
02:48 – HUME Body Pod
03:55 – Currently Available Genetic Cures
04:48 – How To Get Access
08:00 – Safety & Pricing
08:22 – Right to Try Debate
09:20 – Follistatin Results
10:50 – Telomere Extension
11:50 – Klotho & IQ Boost
13:26 – IQ & Society
14:35 – Dementia Gene Therapy
15:40 – Custom Therapies
16:20 – Conclusion

Topics covered:
• FDA-approved genetic cures
• BioViva’s gene enhancement results
• Follistatin gene therapy for muscle growth
• Telomerase (TERT) for biological age reversal
• Klotho gene therapy for cognitive enhancement
• Dementia gene therapy case studies
• Medical tourism for experimental gene treatments
• How to access unapproved gene therapies
• AI’s role in designing next-gen genetic interventions
• Personalized & bespoke gene therapies
• Ethical questions about enhancing IQ, strength, and lifespan
• The future of human evolution & GMO humans

👤 EXPERTS & SOURCES FEATURED
Liz Parrish — BioViva Sciences
LinkedIn: / lizlparrish
Sebastian Brunemeier — Cambrian Bio / Long Game Ventures
LinkedIn: / sebastianlongbio
Long Game Ventures: / longgame-vc
Wired Magazine — Medical Tourism & Gene Therapy Pricing
https://www.wired.com/story/bioviva-g…
Extended Interview: Montana Senator Ken Bogner
• Ken Bogner Full Interview

🔗 FULL INTERVIEWS & BONUS CONTENT
Get extended conversations, deep dives, and behind-the-scenes research on Patreon:
👉 / u29506604

💬 JOIN THE DISCUSSION
Would you use gene therapy to slow aging? Would you enhance your muscle, intelligence, or longevity?
Do you think we should expand access to experimental anti-aging treatments? Let me know in the comments.

🧪 Longevity Science News PRODUCTION CREDITS
Executive Producer – Keith Comito ‪@Retromancers‬
Host, Producer, Writer – ‪@emmettshort


Association of blood-based DNA methylation of lncRNAs with Alzheimer’s disease diagnosis

DNA methylation has shown great potential in Alzheimer’s disease (AD) blood diagnosis. However, the ability of long non-coding RNAs (lncRNAs), which can be modified by DNA methylation, to serve as noninvasive biomarkers for AD diagnosis remains unclear.

We performed logistic regression analysis of blood DNA methylation data from patients with AD versus normal controls to identify epigenetically regulated (ER) lncRNAs. Using five machine learning algorithms, we prioritized ER lncRNAs associated with AD diagnosis. An AD blood diagnostic model was constructed from lncRNA methylation in the Australian Imaging, Biomarkers, and Lifestyle (AIBL) cohort and validated in two large blood-based studies: the European collaboration for the discovery of novel biomarkers for Alzheimer’s disease (AddNeuroMed) and the Alzheimer’s Disease Neuroimaging Initiative (ADNI). In addition, the potential biological functions and clinical associations of the lncRNAs were explored, and their neuropathological roles in AD brain tissue were estimated via cross-tissue analysis.

We characterized the ER lncRNA landscape in AD blood, which is strongly related to AD onset and progression. Fifteen ER lncRNAs were prioritized to construct an AD blood diagnostic and nomogram model. The receiver operating characteristic (ROC) curve and the decision and calibration curves show that the model has good predictive performance. We found that the targets and lncRNAs were correlated with AD clinical features. Moreover, cross-tissue analysis revealed that the lncRNA ENSG0000029584 plays both diagnostic and neuropathological roles in AD.
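The core of the pipeline described above, fitting a classifier on methylation features in one cohort and checking discrimination with a ROC analysis, can be sketched on synthetic data. Everything below is invented for illustration (the effect size, sample counts, and 15-locus layout are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for methylation levels at 15 lncRNA loci:
# cases and controls differ slightly in mean methylation (invented effect).
n_cases, n_controls, n_loci = 100, 100, 15
controls = rng.normal(0.50, 0.05, size=(n_controls, n_loci))
cases = rng.normal(0.53, 0.05, size=(n_cases, n_loci))
X = np.vstack([controls, cases])
y = np.array([0] * n_controls + [1] * n_cases)

# Logistic regression trained by full-batch gradient descent (no external ML library)
w, b = np.zeros(n_loci), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# ROC AUC via the rank-sum identity: the probability that a random case
# scores higher than a random control.
scores = X @ w + b
case_scores, control_scores = scores[y == 1], scores[y == 0]
auc = np.mean(case_scores[:, None] > control_scores[None, :])
print(f"ROC AUC on training data: {auc:.2f}")
```

A real study would, as the abstract describes, validate the fitted model in independent cohorts (AddNeuroMed, ADNI) rather than report training-set AUC.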

Humans and artificial neural networks exhibit some similar patterns during learning

Past psychology and behavioral science studies have identified various ways in which people’s acquisition of new knowledge can be disrupted. One of these, known as interference, occurs when humans are learning new information and this makes it harder for them to correctly recall knowledge that they had acquired earlier.

Interestingly, a similar tendency was also observed in artificial neural networks (ANNs), computational models inspired by biological neurons and the connections between them. In ANNs, interference can manifest as so-called catastrophic forgetting, a process via which models “unlearn” specific skills or information after they are trained on a new task.

In other instances, knowledge acquired in the past can instead help humans or ANNs learn to complete a new task. This phenomenon, known as “transfer,” entails the application of existing knowledge or skills to a novel task or problem.
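Catastrophic forgetting is easy to reproduce even without a deep network. The sketch below (pure NumPy, with an invented pair of toy tasks) trains a single logistic unit on task A, then continues training it on task B, and shows task A accuracy collapsing:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_task(n, axis):
    # Toy task: label is 1 when the chosen input coordinate is positive
    X = rng.uniform(-1, 1, size=(n, 2))
    return X, (X[:, axis] > 0).astype(float)

def train(w, b, X, y, lr=0.5, epochs=300):
    # Plain full-batch gradient descent on the logistic loss
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w = w - lr * X.T @ (p - y) / len(y)
        b = b - lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5)))

XA, yA = make_task(500, axis=0)  # task A depends on coordinate 0
XB, yB = make_task(500, axis=1)  # task B depends on coordinate 1

w, b = train(np.zeros(2), 0.0, XA, yA)
acc_before = accuracy(w, b, XA, yA)

w, b = train(w, b, XB, yB)       # continue training on task B only
acc_after = accuracy(w, b, XA, yA)

print(f"task A accuracy: {acc_before:.2f} before vs {acc_after:.2f} after task B")
```

Because nothing anchors the weights learned for task A, optimizing the same parameters for task B overwrites them; the same dynamic, at scale, is what continual-learning methods in ANNs try to counteract.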

Pluribus: The Terrifying 4-Step Plan to Devour the Universe

This video explains the leading theory about the origins of Pluribus and the hive mind’s ultimate purpose. Its terrifying plan unfolds in 4 steps. If you’re fascinated by hard sci-fi, the Dark Forest Hypothesis and alien civilizations, then this deep dive is for you.

This is a commentary video about the Pluribus TV series streaming on Apple TV+.

Chapters:
00:27 Step 1 — The Joining.
01:38 Step 2 — The Megastructure Antenna.
02:50 Step 3 — Interstellar Hive Mind.
04:10 Step 4 — The Universal Mind.

Footage:
Produced in part with SpaceEngine PRO © Cosmographic Software LLC.
Some elements in this video are also made with the help of artificial intelligence.


World’s first fast-neutron nuclear reactor to power AI data centers

French startup Stellaria has secured its first power reservation, from Equinix, for Stellarium, a fast-neutron molten-salt reactor designed to reduce nuclear waste.

The agreement will allow Equinix data centers to leverage the reactor’s energy autonomy, supporting sustainable, decarbonized operations and powering AI capabilities with clean nuclear energy.

The Stellarium reactor, proposed by Stellaria, is a fourth-generation fast-neutron molten-salt design that uses liquid chloride salt fuel and is engineered to operate on a closed fuel cycle.

TACC’s “Horizon” Supercomputer Sets The Pace For Academic Science

As we expected, the “Vista” supercomputer that the Texas Advanced Computing Center installed last year as a bridge between the current “Stampede-3” and “Frontera” production system and its future “Horizon” system coming next year was indeed a precursor of the architecture that TACC would choose for the Horizon machine.

What TACC does – and doesn’t do – matters because, as the flagship datacenter for academic supercomputing at the National Science Foundation, the center sets the pace for HPC organizations that need to embrace AI and that have not only large jobs requiring an entire system to run (so-called capability-class machines) but also a wide diversity of smaller jobs that need to be stacked up and pushed through the system (making it also a capacity-class system). As the prior six major supercomputers installed at TACC aptly demonstrate, you can have the best of both worlds, although you do have to make different architectural choices (based on technology and economics) to accomplish what is arguably a tougher set of goals.

Some details of the Horizon machine were revealed at the SC25 supercomputing conference last week, which we have been mulling over, but there are still a lot of things that we don’t know. The Horizon that will be fired up in the spring of 2026 is a bit different than we expected, with the big change being a downshift from an expected 400 petaflops of peak FP64 floating point performance to 300 petaflops. TACC has not explained the difference, but it might have something to do with the increasing costs of GPU-accelerated systems. As far as we know, the budget for the Horizon system, which was set in July 2024 and which includes facilities rental from Sabey Data Centers as well as other operational costs, is still $457 million. (We are attempting to confirm this as we write, but in the wake of SC25 and ahead of the Thanksgiving vacation, it is hard to reach people.)

Google Quantum AI realizes three dynamic surface code implementations

Quantum computers are computing systems that process information by leveraging quantum mechanical effects. These computers rely on qubits (i.e., the quantum equivalent of bits), which can store information in a superposition of states, as opposed to the strictly binary states (0 or 1) of classical bits.

While quantum computers could tackle some computational and optimization problems faster and more effectively than classical computers, they are also inherently more prone to errors. This is because qubits are easily perturbed by their surrounding environment, an effect referred to as noise.

Over the past decades, quantum engineers and physicists have been developing approaches to correct noise-related errors, known as quantum error correction (QEC) techniques. While some of these codes have achieved promising results in small-scale tests, reliably implementing them on real circuits is often challenging.
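The surface code itself is beyond a short sketch, but the principle behind all QEC, trading many noisy physical qubits for one more reliable logical qubit, can be illustrated with its simplest classical ancestor, the three-bit repetition code (this is an analogy, not Google's implementation):

```python
import random

random.seed(1)

def encode(bit):
    # Repetition code: copy the logical bit into three physical bits
    return [bit] * 3

def apply_noise(bits, p):
    # Each physical bit flips independently with probability p
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    # Majority vote recovers the logical bit if at most one flip occurred
    return int(sum(bits) >= 2)

def logical_error_rate(p, trials=100_000):
    errors = 0
    for _ in range(trials):
        bit = random.randint(0, 1)
        if decode(apply_noise(encode(bit), p)) != bit:
            errors += 1
    return errors / trials

p = 0.05
print(f"physical error rate: {p}, logical error rate: {logical_error_rate(p):.4f}")
```

With a 5% physical error rate, the logical error rate drops to roughly 3p² ≈ 0.7%, because two simultaneous flips are needed to fool the majority vote. Surface codes extend this idea to both bit-flip and phase-flip errors on a 2D grid of qubits, with error rates that fall as the grid grows.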

Tiny reconfigurable robots can help manage carbon dioxide levels in confined spaces

Vehicles and buildings designed to enable survival in extreme environments, such as spacecraft, submarines and sealed shelters, rely heavily on systems for managing carbon dioxide (CO2): technologies that can remove and release CO2, keeping the air breathable for long periods.

Most existing systems for the capture and release of CO2 consume a lot of energy, as they rely on materials that need to be heated to high temperatures to release the gas again after capturing it. Some engineers have thus been trying to devise more energy-efficient methods to manage CO2 in confined spaces.

Researchers at Guangxi University in China have developed new reconfigurable micro/nano-robots that can reversibly capture CO2 at significantly lower temperatures than currently used carbon management systems.
