
Searchable tool reveals more than 90,000 known materials with electronic properties that remain unperturbed in the face of disruption.

What will it take for our electronics to become smarter, faster, and more resilient? One idea is to build them out of topological materials.

Topology stems from a branch of mathematics that studies shapes that can be manipulated or deformed without losing certain essential properties. A donut is a common example: If it were made of rubber, a donut could be twisted and squeezed into a completely new shape, such as a coffee mug, while retaining a key trait — namely, its center hole, which takes the form of the cup’s handle. The hole, in this case, is a topological trait, robust against certain deformations.
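As a purely illustrative aside (not part of the article), the donut's robustness can be made quantitative: the Euler characteristic V - E + F of a closed surface is a number that no amount of stretching or squeezing can change, and it distinguishes a sphere (no holes) from a torus (one hole). The short sketch below uses arbitrary toy meshes to compute it for a sphere-like octahedron and for a triangulated torus.

```python
# Illustrative sketch: the Euler characteristic chi = V - E + F of a closed
# triangulated surface is a topological invariant. Deforming a donut into a
# coffee mug never changes chi; only cutting or gluing can.

def euler_characteristic(triangles):
    """V - E + F for a surface given as a list of triangles (vertex index triples)."""
    vertices = {v for tri in triangles for v in tri}
    edges = {frozenset(e) for tri in triangles
             for e in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0]))}
    return len(vertices) - len(edges) + len(triangles)

def torus_triangulation(n=8, m=8):
    """Triangulate an n-by-m grid with opposite sides identified (a torus)."""
    tris = []
    for i in range(n):
        for j in range(m):
            a = i * m + j
            b = ((i + 1) % n) * m + j
            c = i * m + (j + 1) % m
            d = ((i + 1) % n) * m + (j + 1) % m
            tris += [(a, b, c), (b, d, c)]
    return tris

# A crude sphere: the eight faces of an octahedron (vertices 4 and 5 are the poles).
octahedron = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
              (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]

print(euler_characteristic(octahedron))             # 2 -> genus 0, no holes
print(euler_characteristic(torus_triangulation()))  # 0 -> genus 1, one hole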

It’s said that the clock is always ticking, but there’s a chance that it isn’t. The theory of “presentism” states that the current moment is the only thing that’s real, while “eternalism” is the belief that all existence in time is equally real. Find out if the future is really out there and predictable—just don’t tell us who wins the big game next year.

This video is episode two from the series “Mysteries of Modern Physics: Time,” presented by Sean Carroll.
Learn more about the physics of time at https://www.wondrium.com/YouTube.

00:00 Science and Philosophy Combine When Studying Time
02:30 Experiments Prove Continuity of Time
06:47 Time Is Somewhat Predictable
08:10 Why We Think of Time Differently
08:49 Our Perception of Time Leads to Spacetime
11:54 We Dissect Presentism vs Eternalism
15:43 Memories and Items From the Past Make It More Real
17:47 Galileo Discovers Pendulum Speeds Are Identical
25:00 Thought Experiment: “What if Time Stopped?”
29:07 Time Connects Us With the Outside World

The human brain is often described in the language of tipping points: It toes a careful line between high and low activity, between dense and sparse networks, between order and disorder. Now, by analyzing firing patterns from a record number of neurons, researchers have uncovered yet another tipping point — this time, in the neural code, the mathematical relationship between incoming sensory information and the brain’s neural representation of that information. Their findings, published in Nature in June, suggest that the brain strikes a balance between encoding as much information as possible and responding flexibly to noise, which allows it to prioritize the most significant features of a stimulus rather than endlessly cataloging smaller details. The way it accomplishes this feat could offer fresh insights into how artificial intelligence systems might work, too.

A balancing act is not what the scientists initially set out to find. Their work began with a simpler question: Does the visual cortex represent various stimuli with many different response patterns, or does it use similar patterns over and over again? Researchers refer to the neural activity in the latter scenario as low-dimensional: The neural code associated with it would have a very limited vocabulary, but it would also be resilient to small perturbations in sensory inputs. Imagine a one-dimensional code in which a stimulus is simply represented as either good or bad. The amount of firing by individual neurons might vary with the input, but the neurons as a population would be highly correlated, their firing patterns always either increasing or decreasing together in the same overall arrangement. Even if some neurons misfired, a stimulus would most likely still get correctly labeled.

At the other extreme, high-dimensional neural activity is far less correlated. Since information can be graphed or distributed across many dimensions, not just along a few axes like “good-bad,” the system can encode far more detail about a stimulus. The trade-off is that there’s less redundancy in such a system — you can’t deduce the overall state from any individual value — which makes it easier for the system to get thrown off.
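To make the low- versus high-dimensional distinction concrete, here is a minimal sketch (not from the study) that simulates two toy populations, one driven by a single shared "good-bad" axis and one driven by many independent stimulus features, and estimates each population's effective dimensionality with the participation ratio of its covariance eigenvalues. The population size, noise level, and number of features are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_stimuli = 200, 1000
stimulus = rng.standard_normal(n_stimuli)  # a toy one-dimensional stimulus variable

# Low-dimensional population: every neuron follows the same "good-bad" axis,
# up to its own gain, so responses rise and fall together.
gains = rng.uniform(0.5, 1.5, size=n_neurons)
low_dim = np.outer(gains, stimulus) + 0.1 * rng.standard_normal((n_neurons, n_stimuli))

# High-dimensional population: each neuron responds to its own random mixture
# of 50 independent stimulus features, so responses are far less correlated.
features = rng.standard_normal((50, n_stimuli))
mixing = rng.standard_normal((n_neurons, 50))
high_dim = mixing @ features + 0.1 * rng.standard_normal((n_neurons, n_stimuli))

def participation_ratio(responses):
    """Effective dimensionality: (sum of covariance eigenvalues)^2 / sum of their squares."""
    eigvals = np.linalg.eigvalsh(np.cov(responses))
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

print(f"low-dimensional code:  ~{participation_ratio(low_dim):.1f} effective dimensions")
print(f"high-dimensional code: ~{participation_ratio(high_dim):.1f} effective dimensions")
```

On these toy data the first population collapses onto roughly one effective dimension while the second spreads across dozens, which is the sense in which a high-dimensional code carries more detail but gives up the redundancy that protects against misfiring neurons.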

Physicists sometimes come up with crazy stories that sound like science fiction. Some turn out to be true, like how the curvature of space and time described by Einstein was eventually borne out by astronomical measurements. Others linger on as mere possibilities or mathematical curiosities.

In a new paper in Physical Review Research, JQI Fellow Victor Galitski and JQI graduate student Alireza Parhizkar have explored the imaginative possibility that our reality is only one half of a pair of interacting worlds. Their work may provide a new perspective for looking at fundamental features of reality—including why our universe expands the way it does and how that relates to the most minuscule lengths allowed in quantum mechanics. These topics are crucial to understanding our universe and are part of one of the great mysteries of modern physics.

The pair of scientists stumbled upon this new perspective when they were looking into research on sheets of graphene—single atomic layers of carbon in a repeating hexagonal pattern. They realized that experiments on the electrical properties of stacked sheets of graphene produced results that looked like little universes and that the underlying phenomenon might generalize to other areas of physics. In stacks of graphene, new electrical behaviors arise from interactions between the individual sheets, so maybe unique physics could similarly emerge from interacting layers elsewhere—perhaps in cosmological theories about the entire universe.

Circa 2015.


The publication of Green and Schwarz’s paper “was 30 years ago this month,” the string theorist and popular-science author Brian Greene wrote in Smithsonian Magazine in January, “making the moment ripe for taking stock: Is string theory revealing reality’s deep laws? Or, as some detractors have claimed, is it a mathematical mirage that has sidetracked a generation of physicists?” Greene had no answer, expressing doubt that string theory will “confront data” in his lifetime.

Recently, however, some string theorists have started developing a new tactic that gives them hope of someday answering these questions. Lacking traditional tests, they are seeking validation of string theory by a different route. Using a strange mathematical dictionary that translates between laws of gravity and those of quantum mechanics, the researchers have identified properties called “consistency conditions” that they say any theory combining quantum mechanics and gravity must meet. And in certain highly simplified imaginary worlds, they claim to have found evidence that the only consistent theories of “quantum gravity” involve strings.

According to many researchers, the work provides weak but concrete support for the decades-old suspicion that string theory may be the only mathematically consistent theory of quantum gravity capable of reproducing gravity’s known form on the scale of galaxies, stars and planets, as captured by Albert Einstein’s theory of general relativity. And if string theory is the only possible approach, then its proponents say it must be true — with or without physical evidence. String theory, by this account, is “the only game in town.”

The physics of the microrealm involves two famous and bizarre concepts: The first is that prior to observation, it is impossible to know with certainty the outcome of a measurement on a particle; rather the particle exists in a “superposition” encompassing multiple mutually exclusive states. So a particle can be in two or more places at the same time, and you can only calculate the probability of finding it in a certain location when you look. The second involves “entanglement,” the spooky link that can unite two objects, no matter how far they are separated. Both superposition and entanglement are described mathematically by quantum theory. But many physicists believe that the ultimate theory of reality may lie beyond quantum theory. Now, a team of physicists and mathematicians has discovered a new connection between these two weird properties that does not assume that quantum theory is correct. Their study appears in Physical Review Letters.
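As an illustrative aside (not drawn from the new paper), both ideas fit in a few lines of linear algebra within standard quantum theory: the sketch below builds an equal superposition of two locations for one particle and a two-particle Bell state, then checks that the Bell state cannot be factored into independent single-particle states.

```python
import numpy as np

# Superposition: one particle that is "here" (|0>) and "there" (|1>) at once.
here, there = np.array([1.0, 0.0]), np.array([0.0, 1.0])
superposition = (here + there) / np.sqrt(2)
print("P(here), P(there):", superposition[0] ** 2, superposition[1] ** 2)  # 0.5 each

# Entanglement: a two-particle Bell state (|00> + |11>) / sqrt(2).
bell = (np.kron(here, here) + np.kron(there, there)) / np.sqrt(2)

# A two-particle state is unentangled only if it factors as kron(a, b), which
# happens exactly when its 2x2 matrix of amplitudes has rank 1.
print("rank of amplitude matrix:", np.linalg.matrix_rank(bell.reshape(2, 2)))  # 2 -> entangled
```

A rank greater than 1 (the Schmidt rank) is the standard signature of entanglement; a product state such as np.kron(here, there) would give rank 1.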

“We were really excited to find this new connection that goes beyond quantum theory because the connection will be valid even for more exotic theories that are yet to be discovered,” says Ludovico Lami, a member of the physics think tank the Foundational Questions Institute (FQXi) and a physicist at the University of Ulm in Germany. “This is also important because it is independent of the mathematical formalism of quantum theory and uses only notions with an immediate operational interpretation,” he adds. Lami co-authored the study with Guillaume Aubrun of Claude Bernard University Lyon 1 in France, Carlos Palazuelos of the Complutense University of Madrid in Spain, and Martin Plávala of Siegen University in Germany.

While quantum theory has proven to be supremely successful since its development a century ago, physicists have struggled to unify it with gravity to create one overarching “theory of everything.” This suggests that quantum theory may not be the final word on describing reality, inspiring physicists to hunt for a more fundamental framework. But any such ultimate theory must still incorporate superposition, entanglement, and the probabilistic nature of reality, since these features have been confirmed time and again in lab tests. The interpretation of these experiments does not depend on quantum theory being correct, notes Lami.

What explains the dramatic progress from 20th-century to 21st-century AI, and how can the remaining limitations of current AI be overcome? The widely accepted narrative attributes this progress to massive increases in the quantity of computational and data resources available to support statistical learning in deep artificial neural networks. We show that an additional crucial factor is the development of a new type of computation. Neurocompositional computing adopts two principles that must be simultaneously respected to enable human-level cognition: the principles of Compositionality and Continuity. These have seemed irreconcilable until the recent mathematical discovery that compositionality can be realized not only through discrete methods of symbolic computing, but also through novel forms of continuous neural computing.
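One long-studied way to realize compositionality in continuous vector computation is the tensor product representation, in which a discrete symbolic structure is encoded as a superposition of continuous role-filler bindings that ordinary vector operations can process. The sketch below is a minimal illustration with made-up roles and fillers, not necessarily the paper's own architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 64

# Random (approximately orthogonal) vectors standing in for roles and fillers.
role_agent, role_patient = rng.standard_normal(dim), rng.standard_normal(dim)
alice, bob = rng.standard_normal(dim), rng.standard_normal(dim)

# Bind each filler to its role with an outer product and superpose the bindings:
# a continuous encoding of the discrete structure "agent = Alice, patient = Bob".
structure = np.outer(alice, role_agent) + np.outer(bob, role_patient)

def unbind(structure, role):
    """Recover the filler bound to `role` (exact when the roles are orthogonal)."""
    return structure @ role / (role @ role)

recovered = unbind(structure, role_agent)
print("agent is Alice?", np.dot(recovered, alice) > np.dot(recovered, bob))  # True
```

Because the encoding lives in a vector space, it can be manipulated by the same continuous operations neural networks use (Continuity), while still supporting structure-sensitive queries such as "who is the agent?" (Compositionality).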