
Simulation reveals emergence of jet from binary neutron star merger followed by black hole formation

Binary neutron star mergers, cosmic collisions between two very dense stellar remnants made up predominantly of neutrons, have been the topic of numerous astrophysics studies due to their fascinating underlying physics and their possible cosmological outcomes. Most previous studies aimed at simulating and better understanding these events relied on computational methods designed to solve Einstein’s equations of general relativity under extreme conditions, such as those that would be present during neutron star mergers.

Researchers at the Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Yukawa Institute for Theoretical Physics, Chiba University, and Toho University recently performed the longest simulation of binary neutron star mergers to date, utilizing a framework for modeling the interactions between magnetic fields, high-density matter and neutrinos, known as the neutrino-radiation magnetohydrodynamics (MHD) framework.

Their simulation, outlined in Physical Review Letters, reveals the emergence of a magnetically dominated jet from the merger remnant, followed by the collapse of the binary neutron star system into a black hole.

Information Processing via Human Soft Tissue: Soft Tissue Reservoir Computing

Physical reservoir computing refers to the concept of using nonlinear physical systems as computational resources to achieve complex information processing. This approach exploits the intrinsic properties of physical systems such as their nonlinearity and memory to perform computational tasks. Soft biological tissues possess characteristics such as stress-strain nonlinearity and viscoelasticity that satisfy the requirements of physical reservoir computing. This study evaluates the potential of human soft biological tissues as physical reservoirs for information processing. Particularly, it determines the feasibility of using the inherent dynamics of human soft tissues as a physical reservoir to emulate nonlinear dynamic systems. In this concept, the deformation field within the muscle, which is obtained from ultrasound images, represented the state of the reservoir. The findings indicate that the dynamics of human soft tissue have a positive impact on the computational task of emulating nonlinear dynamic systems. Specifically, our system outperformed the simple LR model for the task. Simple LR models based on raw inputs, which do not account for the dynamics of soft tissue, fail to emulate the target dynamical system (relative error on the order of $10^{-2}$). By contrast, the emulation results obtained using our system closely approximated the target dynamics (relative error on the order of $10^{-3}$). These results suggest that the soft tissue dynamics contribute to the successful emulation of the nonlinear equation. This study suggests that human soft tissues can be used as a potential computational resource. Soft tissues are found throughout the human body. Therefore, if computational processing is delegated to biological tissues, it could lead to a distributed computation system for human-assisted devices.
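To make the idea concrete, the sketch below shows the generic physical-reservoir-computing recipe the abstract describes: a driven system with memory and nonlinearity supplies features, only a linear readout is trained on them, and the result is compared against a plain linear regression on the raw input. The "reservoir" here is a synthetic leaky nonlinear filter bank standing in for the ultrasound-derived deformation features; the target map, gains, and the ridge readout are illustrative assumptions, not the paper's actual data or model.

# Generic physical-reservoir-computing sketch in NumPy (illustrative only).
# The synthetic "states" stand in for the ultrasound-derived deformation
# features described above; the target map and readout are assumptions.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
u = rng.uniform(-1, 1, size=T)          # scalar input sequence

# Stand-in reservoir: a random leaky nonlinear filter bank with memory.
N = 50
W_in = rng.normal(0, 1, size=N)
leak = rng.uniform(0.1, 0.9, size=N)
states = np.zeros((T, N))
for t in range(1, T):
    states[t] = (1 - leak) * states[t - 1] + leak * np.tanh(W_in * u[t])

# Target: a toy nonlinear dynamical system driven by the same input.
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.4 * y[t - 1] + 0.4 * y[t - 1] * u[t - 1] + 0.6 * u[t - 1] ** 3 + 0.1

def ridge_readout(X, target, lam=1e-6):
    """Closed-form ridge-regression readout (in-sample fit, for brevity)."""
    X = np.hstack([X, np.ones((len(X), 1))])           # bias column
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ target)
    return X @ w

pred_reservoir = ridge_readout(states, y)              # readout on reservoir states
pred_baseline = ridge_readout(u.reshape(-1, 1), y)     # plain LR on the raw input

rel_err = lambda p: np.linalg.norm(p - y) / np.linalg.norm(y)
print("reservoir readout relative error:", rel_err(pred_reservoir))
print("raw-input LR relative error:     ", rel_err(pred_baseline))

Because the reservoir states carry memory of past inputs while the raw-input baseline sees only the current value, the readout on reservoir states tracks the driven dynamics much more closely, which is the qualitative effect the study reports for soft tissue.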

Algorithm streamlines vascular system design for 3D printed hearts

There are more than 100,000 people on organ transplant lists in the U.S., some of whom will wait years to receive one—and some may not survive the wait. Even with a good match, there is a chance that a person’s body will reject the organ. To shorten waiting periods and reduce the possibility of rejection, researchers in regenerative medicine are developing methods to use a patient’s own cells to fabricate personalized hearts, kidneys, livers, and other organs on demand.

Ensuring that oxygen and nutrients can reach every part of a newly grown organ is an ongoing challenge. Researchers at Stanford have created new tools to design and 3D print the incredibly complex vascular trees needed to carry blood throughout an organ. Their platform, published June 12 in Science, generates designs that resemble the vasculature actually seen in the human body, does so significantly faster than previous attempts, and can translate those designs into instructions for a 3D printer.

“The ability to scale up bioprinted tissues is currently limited by the ability to generate vasculature for them—you can’t scale up these tissues without providing a blood supply,” said Alison Marsden, the Douglas M. and Nola Leishman Professor of Cardiovascular Diseases, professor of pediatrics and of bioengineering at Stanford in the Schools of Engineering and Medicine and co-senior author on the paper. “We were able to make the algorithm for generating the vasculature run about 200 times faster than prior methods, and we can generate it for complex shapes, like organs.”

Understanding quantum computing’s most troubling problem—the barren plateau

For the past six years, Los Alamos National Laboratory has led the world in trying to understand one of the most frustrating barriers that faces variational quantum computing: the barren plateau.

“Imagine a landscape of peaks and valleys,” said Marco Cerezo, the Los Alamos team’s lead scientist. “When optimizing a variational, or parameterized, quantum circuit, one needs to tune a series of knobs that control the solution quality and move you in the landscape. Here, a peak represents a bad solution and a valley represents a good solution. But when researchers develop algorithms, they sometimes find their model has stalled and can neither climb nor descend. It’s stuck in this space we call a barren plateau.”

For these quantum computing methods, barren plateaus can be mathematical dead ends, preventing their implementation in large-scale realistic problems. Scientists have spent a lot of time and resources developing quantum algorithms only to find that they sometimes inexplicably stall. Understanding when and why barren plateaus arise has been a problem that has taken the community years to solve.
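A standard toy model makes the flattening concrete (this is a textbook illustration, not the Los Alamos team's analysis): for a product ansatz of single-qubit RY rotations measured with a global Z⊗Z⊗…⊗Z observable, the cost is C(θ) = ∏ᵢ cos θᵢ, and the variance of its gradient over random parameters shrinks as 2⁻ⁿ. The short script below checks that numerically, showing how quickly the landscape goes flat as qubits are added.

# Toy barren-plateau check: for C(theta) = prod_i cos(theta_i) (product ansatz,
# global observable), dC/dtheta_1 = -sin(theta_1) * prod_{i>1} cos(theta_i),
# and its variance over uniformly random parameters decays as 2^{-n}.
import numpy as np

rng = np.random.default_rng(1)
samples = 200_000

for n in (2, 4, 8, 12, 16):
    theta = rng.uniform(0, 2 * np.pi, size=(samples, n))
    grad = -np.sin(theta[:, 0]) * np.prod(np.cos(theta[:, 1:]), axis=1)
    print(f"n = {n:2d}  empirical Var[dC/dtheta_1] = {grad.var():.2e}"
          f"  (theory 2^-n = {2.0 ** -n:.2e})")

With the gradient variance collapsing exponentially, a random starting point almost certainly sits somewhere flat, which is why optimizers stall in the way described above.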

Training robots without robots: Smart glasses capture first-person task demos

Over the past few decades, robots have gradually started making their way into various real-world settings, including some malls, airports and hospitals, as well as a few offices and households.

For robots to be deployed on a larger scale, serving as reliable everyday assistants, they should be able to complete a wide range of common manual tasks and chores, such as cleaning, washing the dishes, cooking and doing the laundry.

Training machine learning algorithms that allow robots to successfully complete these tasks can be challenging, as it often requires extensive annotated data and/or demonstration videos showing humans completing the tasks. Devising more effective methods to collect training data for robotics algorithms could thus be highly advantageous, as it could help to further broaden the capabilities of robots.

Animation technique simulates the motion of squishy objects

The technique simulates elastic objects for animation and other applications with improved reliability over existing methods, many of which produce elastic animations that become erratic or sluggish, or can even break down entirely.

To achieve this improvement, the MIT researchers uncovered a hidden mathematical structure in equations that capture how elastic materials deform on a computer. By leveraging this property, known as convexity, they designed a method that consistently produces accurate, physically faithful simulations.
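For context, a common way existing elastic simulators take a time step is to minimize an "incremental potential" combining an inertia term with the scaled elastic energy (a standard formulation in the field, not the MIT authors' specific construction):

\[
x_{n+1} = \arg\min_{x} \; \tfrac{1}{2}\,(x - \hat{x}_n)^{\top} M\, (x - \hat{x}_n) + h^{2} E(x),
\qquad \hat{x}_n = x_n + h\, v_n ,
\]

where M is the mass matrix, h the time step, v_n the current velocity, and E the elastic energy. If the objective is convex in x, it has a single minimum and Newton-type solvers descend to it dependably; if not, solvers can stall or overshoot, producing exactly the erratic or sluggish behavior described above. That is why uncovering and exploiting a hidden convex structure makes the simulation robust.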

Quantum mechanics provides truly random numbers on demand

Randomness is incredibly useful. People often draw straws, throw dice or flip coins to make fair choices. Random numbers can enable auditors to make completely unbiased selections. Randomness is also key in security; if a password or code is an unguessable string of numbers, it’s harder to crack. Many of our cryptographic systems today use random number generators to produce secure keys.

But how do you know that a random number is truly random?

Classical computer algorithms can only create pseudorandom numbers, and someone with enough knowledge of the algorithm or the system could manipulate it or predict the next number. An expert in sleight of hand could rig a coin flip to guarantee a heads or tails result. Even the most careful coin flips can have bias; with enough study, their outcomes could be predicted.
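The point about predictability is easy to demonstrate. In the sketch below (standard-library Python, purely illustrative), two pseudorandom generators seeded identically produce identical "random" sequences, which is exactly what an attacker who learns the algorithm and seed can exploit; a cryptographic source drawing on operating-system entropy is much harder to predict, but even it carries no physical certificate of randomness the way a quantum protocol can.

# A seeded pseudorandom generator is fully determined by its algorithm and
# seed: anyone who knows both can reproduce every "random" draw.
import random
import secrets

a = random.Random(42)
b = random.Random(42)
print([a.randint(0, 9) for _ in range(5)])   # some sequence of digits
print([b.randint(0, 9) for _ in range(5)])   # the identical sequence

# Cryptographic generators pull entropy from the operating system and are far
# harder to predict, but they still offer no proof of true randomness.
print(secrets.token_hex(8))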

‘Optical neural engine’ can solve partial differential equations

Partial differential equations (PDEs) are a class of mathematical problems that represent the interplay of multiple variables, and therefore have predictive power when it comes to complex physical systems. Solving these equations is a perpetual challenge, however, and current computational techniques for doing so are time-consuming and expensive.

Now, research from the University of Utah’s John and Marcia Price College of Engineering is showing a way to speed up this process: encoding those equations in light and feeding them into their newly designed “optical neural engine,” or ONE.

The researchers’ ONE combines diffractive optical neural networks and optical matrix multipliers. Rather than representing PDEs digitally, the researchers represented them optically, with variables represented by the various properties of a light wave, such as its intensity and phase. As a wave passes through the ONE’s series of optical components, those properties gradually shift and change, until they ultimately represent the solution to the given PDE.
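As a rough numerical analogy of that description (not the Utah group's actual optical design or training procedure; all shapes and values below are made up), one can picture the field as a complex array whose magnitude and phase carry the variables, with diffractive layers acting as element-wise phase masks and the optical matrix multiplier as a matrix product:

# Toy analogy of the described pipeline: a complex field (intensity + phase)
# passes through alternating phase masks and a matrix multiplier.  All
# elements here are random placeholders, not a trained optical system.
import numpy as np

rng = np.random.default_rng(2)
n = 64

amplitude = rng.uniform(0, 1, size=n)            # encode the discretized variable
phase = rng.uniform(0, 2 * np.pi, size=n)
field = amplitude * np.exp(1j * phase)

phase_masks = [np.exp(1j * rng.uniform(0, 2 * np.pi, size=n)) for _ in range(3)]
mixer = rng.normal(0, 1 / np.sqrt(n), size=(n, n))   # stands in for the matrix multiplier

for mask in phase_masks:
    field = mask * field        # diffractive layer: element-wise phase shift
    field = mixer @ field       # optical matrix multiplication mixes the field

solution_estimate = np.abs(field) ** 2    # intensity read out at the detector
print(solution_estimate[:5])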

AI-enabled control system helps autonomous drones stay on target in uncertain environments

An autonomous drone carrying water to help extinguish a wildfire in the Sierra Nevada might encounter swirling Santa Ana winds that threaten to push it off course. Rapidly adapting to these unknown disturbances inflight presents an enormous challenge for the drone’s flight control system.

To help such a drone stay on target, MIT researchers developed a new, machine learning-based adaptive control algorithm that could minimize its deviation from its intended trajectory in the face of unpredictable forces like gusty winds.

The study is published on the arXiv preprint server.
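For a sense of what "adaptive control" means here, the toy loop below runs a classical adaptive controller on a one-dimensional system with an unknown constant disturbance: the controller estimates the disturbance online from the tracking error and cancels it. It is a minimal textbook sketch, not the MIT team's learning-based algorithm, and all gains and dynamics are invented for illustration.

# Minimal classical adaptive-control loop on a toy 1D system: estimate the
# unknown disturbance online and cancel it.  Gains and dynamics are invented
# for illustration; this is not the MIT algorithm.
dt = 0.05
k_p, gamma = 2.0, 4.0        # feedback gain and adaptation rate
target = 1.0                 # desired state
d_true = 0.8                 # unknown constant disturbance (e.g. a steady gust)
x, d_hat = 0.0, 0.0          # state and the controller's disturbance estimate

for step in range(401):
    u = k_p * (target - x) - d_hat       # feedback plus learned cancellation
    x += dt * (u + d_true)               # toy first-order dynamics: x_dot = u + d
    d_hat += dt * gamma * (x - target)   # adaptation law driven by tracking error
    if step % 100 == 0:
        print(f"t = {step * dt:5.1f} s   x = {x:.3f}   d_hat = {d_hat:.3f}")

As the loop runs, d_hat converges toward the true disturbance and x settles on the target, illustrating how an adaptive controller compensates for forces it was never told about.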