
High trust in AI leaves individuals vulnerable to “cognitive surrender,” study finds

People are increasingly outsourcing their thinking to artificial intelligence, bypassing critical reflection entirely. New research reveals that this “cognitive surrender” inflates confidence and causes users to blindly adopt algorithm-generated answers, even when the software is wrong.

A new way to understand the evolution of spacetime dynamics

The concept of spacetime, first described in Einstein’s theory of general relativity, has since been widely studied by physicists worldwide. Spacetime is described mathematically as a four-dimensional (4D) continuum in which physical events occur, merging three-dimensional (3D) space with one-dimensional (1D) time.

This 4D continuum is known to continuously evolve following intricate patterns that are governed by Einstein’s field equations: mathematical equations that describe how matter and energy shape spacetime. While various past theoretical studies have explored the evolution of spacetime, identifying patterns that persist during this evolution has so far proved challenging.
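For reference, the field equations described here take the standard textbook form (a general result, not specific to the study discussed):

```latex
G_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

where \(G_{\mu\nu}\) (the Einstein tensor) encodes the curvature of spacetime and \(T_{\mu\nu}\) (the stress-energy tensor) encodes its matter and energy content, making precise the statement that matter and energy shape spacetime.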

Researchers at Adolfo Ibáñez University in Chile and Columbia University set out to explore the evolution of spacetime using ideas rooted in nonlinear electrodynamics, an area of physics that studies the behavior of electric and magnetic fields in complex materials.

Topological Origin of the Cosmological Constant (Dark Energy)

Shape of the universe and the cosmological constant.


🚨 The Biggest Problem in Physics (Cosmological Constant) https://lnkd.in/gt7tEpJw

❓ Problem: Why is the Universe accelerating, and why is the value of Λ so unbelievably small?

Observations (supernovae, CMB, BAO) show:
👉 The expansion is accelerating
👉 This requires a cosmological constant Λ

From Einstein’s equation: Λ = 8πG ρ_Λ

😳 But here’s the crisis: quantum physics predicts a vacuum energy ρ_vac ≈ M_Pl⁴, while observations give ρ_Λ ≈ 10⁻¹²⁰ M_Pl⁴. 💥 That’s a mismatch of 120 orders of magnitude, known as the cosmological constant problem.

🧠 Standard thinking fails because we assume:
👉 Energy fills space uniformly
👉 Λ comes from summing quantum fluctuations: ρ_vac = (1/V) Σ (½ ℏωₖ)
But this sum diverges and comes out far too large. ❌

💡 A different perspective (EWOG insight): instead of asking “What is the energy of empty space?”, ask “What is the geometry of the Universe?”
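The quoted 120-orders-of-magnitude mismatch is easy to check numerically. A minimal sketch in Planck units (M_Pl = 1), using only the two densities quoted in the post:

```python
import math

# Naive quantum-field-theory estimate of the vacuum energy density,
# in Planck units: of order M_Pl^4, i.e. ~1 in these units.
rho_vac_predicted = 1.0

# Observed dark-energy density (supernovae, CMB, BAO), same units.
rho_lambda_observed = 1e-120

# Size of the discrepancy, in orders of magnitude:
mismatch = math.log10(rho_vac_predicted / rho_lambda_observed)
print(mismatch)  # 120.0
```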

Better volcano eruption predictions on Earth—and Venus—thanks to Mauna Loa study

When Mauna Loa erupted in 2022, the largest lava flow headed directly toward Daniel K. Inouye State Highway 200, also known as Saddle Road, a critical route that carries many residents from their homes on one side to their jobs on the other.

No one could accurately predict whether the lava would continue to flow and eventually block the highway, or stop short, sparing the road.

However, when the volcano next erupts, scientists will be better able to monitor it in real time and make more accurate predictions about where the lava will flow and when future eruptions might occur. These advances are thanks to the availability of satellite data from public and private sources, as well as machine learning algorithms developed at Pitt with help from a colleague in Italy, as highlighted in a recent publication in the Journal of Volcanology and Geothermal Research.

Quantum-informed machine learning for predicting spatiotemporal chaos with practical quantum advantage

Ultimately, QIML proves that we don’t need a fully fault-tolerant quantum computer to see results. By using quantum processors to learn the complex “rules” of chaos, we can give classical computers the boost they need to make reliable, long-term predictions about the most turbulent environments in the natural world.


Modeling high-dimensional dynamical systems remains one of the most persistent challenges in computational science. Partial differential equations (PDEs) provide the mathematical backbone for describing a wide range of nonlinear, spatiotemporal processes across scientific and engineering domains (1–3). However, high-dimensional systems are notoriously sensitive to initial conditions and the floating-point numbers used to compute them (4–7), making it highly challenging to extract stable, predictive models from data. Modern machine learning (ML) techniques often struggle in this regime: While they may fit short-term trajectories, they fail to learn the invariant statistical properties that govern long-term system behavior. These challenges are compounded in high-dimensional settings, where data are highly nonlinear and contain complex multiscale spatiotemporal correlations.
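The sensitivity to initial conditions described above can be demonstrated in a few lines. As a deliberately low-dimensional stand-in for the high-dimensional PDE systems discussed here (an illustrative sketch, not the paper's setup), the following integrates the classic Lorenz-63 system with a crude forward-Euler scheme and shows a perturbation of one part in 10^10 growing to macroscopic size:

```python
import math

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 system (a toy
    # chaotic system, not the PDEs from the paper).
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

# Two trajectories differing by one part in 10^10 in x.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-10, 1.0, 1.0)

for _ in range(3000):  # integrate 30 time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)

# The perturbation has grown by many orders of magnitude, which is
# why pointwise long-term forecasts fail and models must instead
# target invariant statistics.
separation = math.dist(a, b)
print(separation)
```

The same divergence, vastly amplified, is what makes stable long-horizon prediction of turbulent PDE systems so difficult for data-driven models.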

ML has seen transformative success in domains such as large language models (8, 9), computer vision (10, 11), and weather forecasting (12–15), and it is increasingly being adopted in scientific disciplines under the umbrella of scientific ML (16). In fluid mechanics, in particular, ML has been used to model complex flow phenomena, including wall modeling (17, 18), subgrid-scale turbulence (19, 20), and direct flow field generation (21, 22). Physics-informed neural networks (23, 24) attempt to inject domain knowledge into the learning process, yet even these models struggle with the long-term stability and generalization issues that high-dimensional dynamical systems demand. To address this, generative models such as generative adversarial networks (25) and operator-learning architectures such as DeepONet (26) and Fourier neural operators (FNO) (27) have been proposed. While neural operators offer discretization invariance and strong representational power for PDE-based systems, they still suffer from error accumulation and prediction divergence over long horizons, particularly in turbulent and other chaotic regimes (28, 29). Recent work, such as DySLIM (30), enhances stability by leveraging invariant statistical measures. However, these methods depend on estimating such measures from trajectory samples, which can be computationally intensive and inaccurate in chaotic systems, especially in high-dimensional cases. These limitations have prompted exploration into alternative computational paradigms. Quantum machine learning (QML) has emerged as a possible candidate due to its ability to represent and manipulate high-dimensional probability distributions in Hilbert space (31).
Quantum circuits can exploit entanglement and interference to express rich, nonlocal statistical dependencies using fewer parameters than their classical counterparts, which makes them well suited for capturing invariant measures in high-dimensional dynamical systems, where long-range correlations and multimodal distributions frequently arise (32). QML and quantum-inspired ML have already demonstrated potential in fields such as quantum chemistry (33, 34), combinatorial optimization (35, 36), and generative modeling (37, 38). However, the field is constrained on two fronts: Fully quantum approaches are limited by noisy intermediate-scale quantum (NISQ) hardware noise and scalability (39), while quantum-inspired algorithms, being classical simulations, cannot natively leverage crucial quantum effects such as entanglement to efficiently represent the complex, nonlocal correlations found in such systems. These challenges limit the standalone utility of QML in scientific applications today. Instead, hybrid quantum-classical models provide a promising compromise, where quantum submodules work together with classical learning pipelines to improve expressivity, data efficiency, and physical fidelity. In quantum chemistry, this hybrid paradigm has proven feasible, notably through quantum mechanical/molecular mechanical coupling (40, 41), where classical force fields are augmented with quantum corrections. Within such frameworks, techniques such as quantum-selected configuration interaction (42) have been used to enhance accuracy while keeping the quantum resource requirements tractable. In the broader landscape of quantum computational fluid dynamics, progress has been made toward developing full quantum solvers for nonlinear PDEs. Recent works by Liu et al. (43) and Sanavio et al. (44, 45) have successfully applied Carleman linearization to the lattice Boltzmann equation, offering a promising pathway for simulating fluid flows at moderate Reynolds numbers.
These approaches, typically using algorithms such as Harrow-Hassidim-Lloyd (HHL) (46), promise exponential speedups but generally necessitate deep circuits and fault-tolerant hardware.

Quantum-enhanced machine learning (QEML) combines the representational richness of quantum models with the scalability of classical learning. By leveraging uniquely quantum properties such as superposition and entanglement, QEML can explore richer feature spaces and capture complex correlations that are challenging for purely classical models. Recent successes in quantum-enhanced drug discovery (37), where hybrid quantum-classical generative models have produced experimentally validated candidates rivaling state-of-the-art classical methods, demonstrate the practical potential of QEML even before full quantum advantage is achieved. Despite these strengths, practical barriers remain. QEML pipelines require repeated quantum-classical communication during training and rely on costly quantum data-embedding and measurement steps, which slow computation and limit accessibility across research institutions.
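As a concrete (and deliberately toy) illustration of the embed-entangle-measure pattern behind such pipelines, the sketch below classically simulates a two-qubit feature map: data are embedded via RY rotations, entangled with a CNOT, and read out as Pauli-Z expectation values that a classical learner could consume. Everything here is a hypothetical minimal example, not the architecture of any system cited above; on real hardware, the embedding and measurement steps are exactly the costly operations the paragraph describes.

```python
import math

def ry(theta):
    # Single-qubit RY rotation matrix.
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def apply_1q(gate, state, q):
    # Apply a 1-qubit gate to qubit q of a 2-qubit state vector
    # (qubit 0 is the most significant bit of the index).
    shift = 1 - q
    new = [0j] * 4
    for i in range(4):
        bi = (i >> shift) & 1
        for b in (0, 1):
            j = (i & ~(1 << shift)) | (b << shift)
            new[i] += gate[bi][b] * state[j]
    return new

def cnot(state):
    # CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes.
    s = state[:]
    s[2], s[3] = s[3], s[2]
    return s

def expect_z(state, q):
    # Pauli-Z expectation value on qubit q.
    shift = 1 - q
    return sum((-1) ** ((i >> shift) & 1) * abs(a) ** 2
               for i, a in enumerate(state))

def features(x):
    # Embed scalar x, entangle, and measure: the resulting
    # expectations are nonlinear features for a classical model.
    state = [1 + 0j, 0j, 0j, 0j]           # |00>
    state = apply_1q(ry(x), state, 0)       # embed x on qubit 0
    state = apply_1q(ry(2 * x), state, 1)   # embed 2x on qubit 1
    state = cnot(state)                     # entangle
    return expect_z(state, 0), expect_z(state, 1)

print(features(0.5))  # equals (cos(0.5), cos(1.0) * cos(0.5)) for this circuit
```

In a full QEML loop, features like these would be re-evaluated on a quantum processor at every training step, which is the repeated quantum-classical communication cost noted above.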

Bridging structure and function: artificial intelligence-based modelling of kidney proteins

Advances in artificial intelligence-driven algorithms and experimental technologies have revolutionized the field of protein modelling. This Review describes how these developments have provided unprecedented insights into the structure of key proteins within the kidney, improved understanding of the relationships between protein structure and stability, and enabled mechanistic interpretation of variants that underlie a variety of kidney pathologies.

New study bridges the worlds of classical and quantum physics

When you throw a ball in the air, the equations of classical physics will tell you exactly what path the ball will take as it falls, and when and where it will land. But if you were to squeeze that same ball down to the size of an atom or smaller, it would behave in ways beyond anything that classical physics can predict.

Or so we’ve thought.

MIT scientists have now shown that certain mathematical ideas from everyday classical physics can be used to describe the often weird and nonintuitive behavior that occurs at the quantum, subatomic scale.

On computing quantum waves exactly from classical action

The fundamental quantum postulates (the existence of a wave function, its propagation via the Schrödinger equation in theorem 3.2, and wave collapse at a measurement in lemma 3.3) are derived from the classical theorem 2.4. Furthermore, analytic computations of the classical action are simpler than solving the Feynman path integral and potentially easier than solving the Schrödinger equation directly. In addition, theorem 3.2 is a multi-particle result.

The J classical multipaths in theorem 3.2 and lemma 3.3 are strictly determined by the initial and final conditions. In the double slit experiment, the probabilistic quantum observation results from the non-Lipschitz constraint force in the slit. For the harmonic oscillator, the Coulomb wave, the particle in the box, or the spinning particle, the initial probabilistic density distribution is classically propagated forward in time. In the EPR experiment [64,65], theorem 2.4 determines a constant angular momentum χo↑,χo↓ over time, and lemma 3.3 in turn allows a classical interpretation in which the decision as to which spin correlation is sensed behind the filters is already made when the particles separate.

The Universe Is Accelerating…and No One Knows Why


CHAPTERS
0:00 The 70% mystery
0:58 How Dark Energy was discovered?
4:26 What could be causing Dark Energy?
6:58 Repulsive Gravity?
10:16 What is the energy made of?
11:56 Evolving Dark Energy? Quintessence
14:18 Could Dark Energy be a particle?
16:43 Could Black Holes cause Dark Energy?

SUMMARY
Dark energy is one of the greatest mysteries in modern physics. It appears to make up nearly 70% of the universe, yet scientists still do not know what it is. Unlike matter, it does not clump together. Unlike radiation, it does not dilute as space expands. Instead, it causes the expansion of the universe to accelerate, pushing galaxies apart faster over time.

The discovery of this acceleration came in the late 1990s when astronomers measured distant Type Ia supernovae, which act as reliable “standard candles.” By comparing their brightness and redshift, researchers could determine how fast the universe expanded at different points in cosmic history. Instead of finding that gravity slowed expansion, as expected, they discovered the opposite: the universe was expanding faster and faster. This unexpected result led to the concept of dark energy, the unknown driver behind cosmic acceleration.

One possible explanation is that dark energy is a cosmological constant, represented by the Greek letter lambda in Einstein’s equations. In this model, empty space itself contains a constant energy density known as vacuum energy. Quantum mechanics predicts that empty space is not truly empty; quantum fields constantly fluctuate, producing short-lived “virtual particles.” These fluctuations create energy even in a vacuum. Experiments like the Casimir effect provide evidence that vacuum energy is real. However, this explanation has a major problem: when physicists calculate vacuum energy using quantum theory, the predicted value is about 10¹²⁰ times larger than what observations of the universe allow. This enormous mismatch is widely considered the worst prediction in physics.

In general relativity, cosmic acceleration can occur if the universe contains energy with negative pressure. In the Friedmann equation, expansion accelerates when pressure is sufficiently negative relative to energy density. Dark energy appears to have exactly this property, effectively producing a form of repulsive gravity that stretches spacetime.

Another possibility is that dark energy is not constant but comes from a dynamic field known as quintessence. In quantum theory, fields can have particle-like excitations, meaning dark energy might correspond to extremely weakly interacting particles. If the strength of this field changes over time, the acceleration of the universe could grow stronger. In extreme scenarios, this could eventually lead to a catastrophic future known as the Big Rip, where galaxies, stars, atoms, and even spacetime itself are torn apart.

A more speculative idea suggests a connection between supermassive black holes and dark energy. Some recent studies have observed that black holes appear to grow more massive over billions of years than expected from normal matter accretion alone. Researchers have proposed that black holes might somehow be linked to dark energy, though current evidence only shows a correlation and not a confirmed causal explanation.

For now, dark energy remains an observed phenomenon with multiple possible explanations. Whether it is a property of empty space, a new field of physics, or something even deeper, it stands as one of the most profound open questions in cosmology.
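The condition "pressure sufficiently negative relative to energy density" can be made concrete with the second Friedmann (acceleration) equation. A minimal sketch, assuming the standard equation-of-state parametrization p = wρ (units with c = 1; the density scale is arbitrary):

```python
# Second Friedmann equation, sign only:  a''/a ∝ -(ρ + 3p).
# With p = w·ρ, the expansion accelerates exactly when w < -1/3.

def accelerates(w, rho=1.0):
    """True if equation-of-state parameter w gives a'' > 0."""
    p = w * rho
    return (rho + 3 * p) < 0

print(accelerates(0.0))    # matter, w = 0: False (decelerates)
print(accelerates(1 / 3))  # radiation: False
print(accelerates(-1.0))   # cosmological constant: True (accelerates)
```

A cosmological constant has w = -1, comfortably inside the accelerating regime, which is why it acts as the "repulsive gravity" described above.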

