
Black holes are among the greatest mysteries of the universe: a black hole with the mass of our sun, for example, has a radius of only 3 kilometers. Black holes in orbit around each other emit gravitational radiation—oscillations of space and time predicted by Albert Einstein in 1916. This causes the orbit to become faster and tighter, and eventually the black holes merge in a final burst of radiation. These gravitational waves propagate through the universe at the speed of light and are detected by observatories in the U.S. (LIGO) and Italy (Virgo). Scientists compare the data collected by the observatories against theoretical predictions to estimate the properties of the source, including how large the black holes are and how fast they are spinning. Currently, this procedure takes at least hours, often months.

An interdisciplinary team of researchers from the Max Planck Institute for Intelligent Systems (MPI-IS) in Tübingen and the Max Planck Institute for Gravitational Physics (Albert Einstein Institute/AEI) in Potsdam is using state-of-the-art machine learning methods to speed up this process. They developed an algorithm using a deep neural network, a complex computer code built from a sequence of simpler operations, inspired by the human brain. Within seconds, the system infers all properties of the binary black-hole source. Their research results are published today in Physical Review Letters.

“Our method can make very accurate statements in a few seconds about how big and massive the two black holes were that generated the gravitational waves when they merged. How fast do the black holes rotate, how far away are they from Earth, and from which direction is the gravitational wave coming? We can deduce all this from the observed data and even make statements about the accuracy of this calculation,” explains Maximilian Dax, first author of the study Real-Time Gravitational Wave Science with Neural Posterior Estimation and Ph.D. student in the Empirical Inference Department at MPI-IS.
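The core idea, training a network once on simulated signals and then reading off source properties from new data in a single fast forward pass, can be illustrated with a short sketch. The snippet below is a deliberately simplified stand-in, not the published method: it assumes PyTorch, uses random tensors in place of a waveform simulator and detector noise, and swaps the normalizing-flow posterior of neural posterior estimation for a simple diagonal-Gaussian head; names such as PosteriorNet and the parameter count are hypothetical.

```python
# Minimal sketch of amortized neural posterior estimation (illustrative only).
import torch
import torch.nn as nn

N_PARAMS = 4      # e.g. two masses, distance, inclination (hypothetical subset)
STRAIN_LEN = 512  # length of the whitened strain segment fed to the network

class PosteriorNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(STRAIN_LEN, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.mean = nn.Linear(128, N_PARAMS)
        self.log_std = nn.Linear(128, N_PARAMS)

    def forward(self, strain):
        h = self.encoder(strain)
        return self.mean(h), self.log_std(h)

def nll(mean, log_std, theta):
    # Negative log-likelihood of the true source parameters under the
    # predicted Gaussian posterior q(theta | strain), up to a constant.
    return 0.5 * (((theta - mean) / log_std.exp()) ** 2 + 2 * log_std).sum(-1).mean()

net = PosteriorNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(1000):
    # In a real pipeline these would come from a waveform simulator plus
    # detector noise; random tensors keep the sketch self-contained.
    theta = torch.randn(64, N_PARAMS)
    strain = torch.randn(64, STRAIN_LEN)
    mean, log_std = net(strain)
    loss = nll(mean, log_std, theta)
    opt.zero_grad(); loss.backward(); opt.step()

# After training, inference on new data is one forward pass (seconds, not hours):
# posterior_mean, posterior_log_std = net(observed_strain)
```

The expensive part, simulating waveforms and training, happens once up front; that is what makes the per-event inference so fast compared with conventional sampling.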

Alphabet’s AI research company DeepMind has released the next generation of its language model, which it says comes close to the reading comprehension of a high schooler — a startling claim.

It says the language model, called Gopher, was able to significantly improve its reading comprehension by ingesting massive repositories of texts online.

DeepMind boasts that its algorithm, an “ultra-large language model,” has 280 billion parameters, which are a measure of size and complexity. That means it falls somewhere between OpenAI’s GPT-3 (175 billion parameters) and Microsoft and NVIDIA’s Megatron, which features 530 billion parameters, The Verge points out.

Researchers at Lawrence Berkeley National Laboratory’s Advanced Quantum Testbed (AQT) demonstrated that an experimental method known as randomized compiling (RC) can dramatically reduce error rates in quantum algorithms and lead to more accurate and stable quantum computations. No longer just a theoretical concept for quantum computing, the multidisciplinary team’s breakthrough experimental results are published in Physical Review X.

The experiments at AQT were performed on a four-qubit superconducting quantum processor. The researchers demonstrated that RC can suppress one of the most severe types of errors in quantum computers: coherent errors.

Akel Hashim, an AQT researcher involved in the experimental breakthrough and a graduate student at the University of California, Berkeley, explained: “We can perform quantum computations in this era of noisy intermediate-scale quantum (NISQ) computing, but these are very noisy, prone to errors from many different sources, and don’t last very long due to the decoherence—that is, information loss—of our qubits.”
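To see what suppressing a coherent error looks like, here is a toy numpy sketch of Pauli twirling, the mechanism underlying randomized compiling. It is not the AQT implementation or a model of their superconducting processor: it assumes a hypothetical single-qubit over-rotation error, conjugates it by random Paulis, and shows that the coherent error, which builds up quadratically over repeated cycles, is tailored into stochastic noise that accumulates only linearly.

```python
# Toy sketch of Pauli twirling, the core idea behind randomized compiling.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]

eps = 0.02                                           # hypothetical over-rotation angle
E = np.cos(eps / 2) * I - 1j * np.sin(eps / 2) * X   # error unitary exp(-i*eps*X/2)

def apply(channel, rho):
    # Apply a channel given as a list of (probability, unitary) pairs.
    return sum(p * U @ rho @ U.conj().T for p, U in channel)

bare = [(1.0, E)]                                      # purely coherent error
twirled = [(0.25, P @ E @ P.conj().T) for P in PAULIS] # Pauli-twirled error

rho_bare = np.array([[1, 0], [0, 0]], dtype=complex)   # start in |0><0|
rho_twirled = rho_bare.copy()
n = 50                                                 # repeat the faulty cycle
for _ in range(n):
    rho_bare = apply(bare, rho_bare)
    rho_twirled = apply(twirled, rho_twirled)

survival = lambda r: np.real(r[0, 0])                  # probability of staying in |0>
print(f"coherent error, {n} cycles: infidelity = {1 - survival(rho_bare):.4f}")
print(f"twirled error,  {n} cycles: infidelity = {1 - survival(rho_twirled):.4f}")
# The coherent error grows quadratically (rotation angles add up), while the
# twirled, stochastic error grows only linearly in the number of cycles.
```

In the actual protocol the random Paulis and their compensating gates are compiled into the circuit itself, so the logical operation is unchanged while the noise is randomized; the sketch only twirls the error channel to keep the demonstration short.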

By Stina Andersson and Ellinor Wanzambi

Researchers have been working on quantum algorithms since physicists first proposed using the principles of quantum physics to simulate nature decades ago. One important component of many quantum algorithms is the quantum walk, the quantum equivalent of the classical Markov chain, i.e., a random walk without memory. Quantum walks are used in algorithms in areas such as searching, node ranking in networks, and element distinctness.

Consider the graph in Figure 1 and imagine that we want to move randomly between nodes A, B, C, and D. We can only move between nodes that are connected by an edge, and each edge has an associated probability that determines how likely we are to move to the connected node. This is a random walk. In this article, we work only with Markov chains, also called memory-less random walks, meaning that the probabilities are independent of the previous steps. For example, the probabilities for our next move from node A are the same regardless of whether we arrived there from node B or from node D.
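A few lines of Python make the memory-less walk concrete. The transition probabilities below are hypothetical, not the values from Figure 1; the point is only that each step depends on the current node alone.

```python
# Minimal sketch of a memory-less random walk (Markov chain) on a 4-node graph.
import numpy as np

nodes = ["A", "B", "C", "D"]
P = np.array([
    [0.0, 0.5, 0.0, 0.5],   # from A: move to B or D with equal probability
    [0.5, 0.0, 0.5, 0.0],   # from B: move to A or C
    [0.0, 0.5, 0.0, 0.5],   # from C: move to B or D
    [0.5, 0.0, 0.5, 0.0],   # from D: move to A or C
])

rng = np.random.default_rng(0)
state = 0                   # start at node A
walk = [nodes[state]]
for _ in range(10):
    # The next step depends only on the current node, never on how we got
    # here: this is exactly the Markov (memory-less) property.
    state = rng.choice(4, p=P[state])
    walk.append(nodes[state])

print(" -> ".join(walk))
```

A quantum walk replaces this probability vector over nodes with a vector of complex amplitudes, which is what lets interference speed up tasks such as search and element distinctness.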

Quantum computers have the potential to solve important problems that are beyond reach even for the most powerful supercomputers, but they require an entirely new way of programming and creating algorithms.

Universities and major tech companies are spearheading research on how to develop these new algorithms. In a recent collaboration between the University of Helsinki, Aalto University, the University of Turku, and IBM Research Europe-Zurich, a team of researchers has developed a new method to speed up calculations on quantum computers. The results are published in the journal PRX Quantum of the American Physical Society.

“Unlike classical computers, which use bits to store ones and zeros, information is stored in the qubits of a quantum processor in the form of a quantum state, or a wavefunction,” says postdoctoral researcher Guillermo García-Pérez from the Department of Physics at the University of Helsinki, first author of the paper.
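As a minimal illustration of that quote, the following numpy snippet represents qubits as a vector of complex amplitudes (a wavefunction) and reads out measurement probabilities with the Born rule. It is a generic textbook example, not the method introduced in the paper.

```python
# A qubit register is described by a wavefunction: one complex amplitude per
# classical bit string, rather than definite ones and zeros.
import numpy as np

# A single qubit in an equal superposition of |0> and |1>:
qubit = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Two qubits live in a 4-dimensional space spanned by |00>, |01>, |10>, |11>;
# here the second qubit is prepared in |0>.
state = np.kron(qubit, np.array([1, 0], dtype=complex))

# Born rule: measurement probabilities are the squared amplitude magnitudes.
probs = np.abs(state) ** 2
for bits, p in zip(["00", "01", "10", "11"], probs):
    print(f"P(|{bits}>) = {p:.2f}")   # 0.50 for |00| and |10>, 0.00 otherwise
```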

Recent theoretical breakthroughs have settled two long-standing questions about the viability of simulating quantum systems on future quantum computers, overcoming challenges from complexity analyses to enable more advanced algorithms. Featured in two publications, the work by a quantum team at Los Alamos National Laboratory shows that physical properties of quantum systems allow for faster simulation techniques.

“Algorithms based on this work will be needed for the first full-scale demonstration of quantum simulations on quantum computers,” said Rolando Somma, a quantum theorist at Los Alamos and coauthor on the two papers.

Most physicists and philosophers now agree that time is emergent, and Digital Presentism specifies how: time emerges from complex qualia computing at the level of the observer’s experiential reality. Time emerges from experiential data; it is an epiphenomenon of consciousness. From moment to moment, you are co-writing your own story, co-producing your own “participatory reality”: your stream of consciousness is not subject to some kind of deterministic “script,” and you are entitled to degrees of freedom. If we are to create high-fidelity first-person simulated realities that may also be part of an intersubjectivity-based Metaverse, then the D-Theory of Time gives us a clear-cut guiding principle for doing just that.

Here is Part III: CONSCIOUSNESS & TIME of the Consciousness: Evolution of the Mind (2021) documentary. #consciousness #evolution #mind #time #DTheoryofTime #DigitalPresentism #CyberneticTheoryofMind


Watch the full documentary on Vimeo on demand: https://vimeo.com/ondemand/339083

Summary: A new machine-learning algorithm could help practitioners identify autism in children more effectively.

Source: USC

For children with autism spectrum disorder (ASD), receiving an early diagnosis can make a huge difference in improving behavior, skills and language development. But despite being one of the most common developmental disabilities, impacting 1 in 54 children in the U.S., it’s not that easy to diagnose.

Games have a long history of serving as a benchmark for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect information poker variants. We introduce a general-purpose algorithm that unifies previous approaches, combining guided search, self-play…

