
Head to https://squarespace.com/artem to save 10% off your first purchase of a website or domain using code ARTEMKIRSANOV

Socials:
X/Twitter: https://twitter.com/ArtemKRSV
Patreon: /artemkirsanov

My name is Artem, I’m a graduate student at the NYU Center for Neural Science and a researcher at the Flatiron Institute.

In this video we explore a fascinating paper which revealed the role of biological constraints in determining which patterns of neural dynamics the brain can and cannot learn.
Link to the paper: https://www.nature.com/articles/s4159…

A small correction: I didn’t mention this in the video, but the dimensionality-reduction process for the BCI was two-stage. First, the source 90D neural activity was projected to 10D using factor analysis, and only after that were 2D projections of this 10D space shown as cursor positions. It doesn’t change the interpretation of the result; I just wanted to be more technically correct about the methods.

Outline:
00:00 Introduction
01:01 Temporal sequences
02:10 The experimental challenge
04:42 Biofeedback and BCIs as a research tool
07:30 Sponsor: Squarespace
08:44 Experimental setup
11:36 Two 2D projections of neural activity
12:53 Switching BCI mapping reveals activity constraints
14:46 Conclusion

Icons by Freepik and Biorender
Music by Artlist.
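For intuition, the two-stage reduction can be sketched as follows. This is a toy reconstruction with synthetic spike counts; the 10D-to-2D projection matrix here is a random stand-in for the actual BCI mapping used in the experiment.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic stand-in for recorded activity: 500 time bins x 90 units
spikes = rng.poisson(2.0, size=(500, 90)).astype(float)

# Stage 1: 90D activity -> 10D latent space via factor analysis
fa = FactorAnalysis(n_components=10, random_state=0)
latent = fa.fit_transform(spikes)

# Stage 2: a 2D projection of the 10D space drives the cursor
# (random matrix here; in the experiment this was the BCI decoder)
P = rng.standard_normal((10, 2))
cursor = latent @ P
print(cursor.shape)  # (500, 2)
```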

Augmented reality (AR) has become a hot topic in the entertainment, fashion, and makeup industries. Though a few different technologies exist in these fields, dynamic facial projection mapping (DFPM) is among the most sophisticated and visually stunning ones. Briefly put, DFPM consists of projecting dynamic visuals onto a person’s face in real-time, using advanced facial tracking to ensure projections adapt seamlessly to movements and expressions.

While imagination should ideally be the only limit on what’s possible with DFPM in AR, this approach is held back by technical challenges. Projecting visuals onto a moving face requires that the DFPM system detect the user’s facial features, such as the eyes, nose, and mouth, within less than a millisecond.

Even slight delays in processing or minuscule misalignments between the camera’s and projector’s image coordinates can result in projection errors—or “misalignment artifacts”—that viewers can notice, ruining the immersion.
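To see why latency matters, here is a back-of-the-envelope sketch (the speed and latency figures are assumptions for illustration, not numbers from the article) of how far a projected feature drifts while a frame is being processed:

```python
def misalignment_mm(head_speed_mm_s: float, latency_ms: float) -> float:
    """Projected-feature drift = head speed x end-to-end latency."""
    return head_speed_mm_s * (latency_ms / 1000.0)

# A briskly turning face can move on the order of 300 mm/s (assumed value):
print(misalignment_mm(300, 10))  # 10 ms latency -> 3.0 mm drift, clearly visible
print(misalignment_mm(300, 1))   # 1 ms latency -> 0.3 mm, far less noticeable
```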


Mapping Thousands of Synaptic Connections

Harvard researchers have successfully mapped and cataloged over 70,000 synaptic connections from approximately 2,000 rat neurons. They achieved this using a silicon chip capable of detecting small but significant synaptic signals from a large number of neurons simultaneously.

Summary: Researchers have developed a geometric deep learning approach to uncover shared brain activity patterns across individuals. The method, called MARBLE, learns dynamic motifs from neural recordings and identifies common strategies used by different brains to solve the same task.

Tested on macaques and rats, MARBLE accurately decoded neural activity linked to movement and navigation, outperforming other machine learning methods. The system works by mapping neural data into high-dimensional geometric spaces, enabling pattern recognition across individuals and conditions.
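As a toy illustration of the general idea (this is not the MARBLE algorithm, just a sketch of comparing recordings through their local dynamics rather than their raw activity), two recordings that trace the same motif in different embeddings can look similar once described by flow features:

```python
import numpy as np

def flow_features(trajectory: np.ndarray) -> np.ndarray:
    """Per-step velocity vectors of a (T, d) latent trajectory."""
    return np.diff(trajectory, axis=0)

def flow_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Distance between mean flow vectors of two recordings."""
    return float(np.linalg.norm(flow_features(a).mean(0) - flow_features(b).mean(0)))

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200)
# Two "individuals" tracing the same rotational motif with different noise
traj_a = np.stack([np.cos(t), np.sin(t), 0.1 * rng.standard_normal(200)], axis=1)
traj_b = np.stack([np.cos(t), np.sin(t), 0.1 * rng.standard_normal(200)], axis=1)

print(flow_distance(traj_a, traj_b) < 0.1)  # shared dynamics -> small distance
```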

MIT researchers developed a new approach for assessing predictions with a spatial dimension, like forecasting weather or mapping air pollution.

Imagine relying on a weather app to predict next week’s temperature. How do you know you can trust its forecast? Scientists use statistical and physical models to make predictions about everything from weather to air pollution. But checking whether these models are truly reliable is trickier than it seems, especially when the locations where we have validation data don’t match the locations where predictions are needed.

Traditional validation methods struggle with this problem, failing to provide consistent accuracy in real-world scenarios. In this work, researchers introduce a new validation approach designed to improve trust in spatial predictions. They define a key requirement: as more validation data becomes available, the accuracy of the validation method should improve indefinitely. They show that existing methods don’t always meet this standard. Instead, they propose an approach inspired by previous work on handling differences in data distributions (known as “covariate shift”) but adapted for spatial prediction. Their method not only meets this strict requirement but also outperforms existing techniques in both simulations and real-world data.

By refining how we validate predictive models, this work helps ensure that critical forecasts—like air pollution levels or extreme weather events—can be trusted with greater confidence.
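A minimal sketch of the underlying covariate-shift idea (illustrative only; the paper’s actual estimator and guarantees differ): reweight held-out errors by how representative each validation location is of the locations where predictions are needed.

```python
import numpy as np

def gaussian_kde_1d(points: np.ndarray, x: np.ndarray, bw: float = 0.5) -> np.ndarray:
    """Simple Gaussian kernel density estimate of `points`, evaluated at `x`."""
    d = (x[:, None] - points[None, :]) / bw
    return np.exp(-0.5 * d**2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
val_x = rng.normal(0.0, 1.0, 200)     # locations where we happen to have labels
target_x = rng.normal(2.0, 1.0, 200)  # locations where predictions are needed

# A slightly biased model (0.9*sin) evaluated against the truth (sin)
sq_err = (0.9 * np.sin(val_x) - np.sin(val_x)) ** 2

# Weight each validation point by target density / validation density
w = gaussian_kde_1d(target_x, val_x) / (gaussian_kde_1d(val_x, val_x) + 1e-12)
weighted_mse = float(np.sum(w * sq_err) / np.sum(w))
naive_mse = float(sq_err.mean())
print(naive_mse, weighted_mse)  # the weighted estimate reflects the target region
```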


A new evaluation method assesses the accuracy of spatial prediction techniques, outperforming traditional methods. This could help scientists make better predictions in areas like weather forecasting, climate research, public health, and ecological management.

Astronomer Calvin Leung was excited last summer to crunch data from a newly commissioned radio telescope to precisely pinpoint the origin of repeated bursts of intense radio waves—so-called fast radio bursts (FRBs)—emanating from somewhere in the northern constellation Ursa Minor.

Leung, a Miller Postdoctoral Fellowship recipient at the University of California, Berkeley, hopes eventually to understand the origins of these mysterious bursts and use them as probes to trace the large-scale structure of the universe, a key to its origin and evolution. He had written most of the computer code that allowed him and his colleagues to combine data from several telescopes to triangulate the position of a burst to within a hair’s width at arm’s length.
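For scale, the “hair’s width at arm’s length” comparison works out to roughly tens of arcseconds (assumed numbers: a ~0.1 mm hair held at ~0.7 m; the small-angle approximation applies):

```python
import math

hair_m, arm_m = 1e-4, 0.7                      # assumed hair width and arm length
angle_rad = hair_m / arm_m                     # small-angle approximation
angle_arcsec = math.degrees(angle_rad) * 3600  # radians -> arcseconds
print(round(angle_arcsec, 1))                  # ~29.5 arcseconds
```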

The excitement turned to perplexity when his collaborators on the Canadian Hydrogen Intensity Mapping Experiment (CHIME) turned optical telescopes on the spot and discovered that the source was in the distant outskirts of a long-dead elliptical galaxy that by all rights should not contain the kind of star thought to produce these bursts.

To see how cognitive maps form in the brain, researchers used a Janelia-designed, high-resolution microscope with a large field of view to image neural activity in thousands of neurons in the hippocampus of a mouse as it learned. Credit: Sun and Winnubst et al.

Our brains build maps of the environment that help us understand the world around us, allowing us to think, recall, and plan. These maps not only help us to, say, find our room on the correct floor of a hotel, but they also help us figure out if we’ve gotten off the elevator on the wrong floor.

Neuroscientists know a lot about the activity of neurons that make up these maps – like which cells fire when we’re in a particular location. But how the brain creates these maps as we learn remains a mystery.

We explore numerically the complex dynamics of multilayer networks (consisting of three and one hundred layers) of cubic maps in the presence of noise-modulated interlayer coupling (multiplexing noise). The coupling strength is defined by independent discrete-time sources of color Gaussian noise. Uncoupled layers can demonstrate different complex structures, such as double-well chimeras, coherent and spatially incoherent regimes. Regions of partial synchronization of these structures are identified in the presence of multiplexing noise. We elucidate how synchronization of a three-layer network depends on the initially observed structures in the layers and construct synchronization regions in the plane of multiplexing noise parameters “noise spectrum width – noise intensity”.
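A minimal two-layer sketch of multiplexing noise (illustrative assumptions throughout: the study used three- and hundred-layer networks driven by color Gaussian noise, while white noise and ad-hoc parameters are used here for brevity). Each layer is a ring of cubic maps, and the interlayer coupling strength is modulated by an independent noise source at every step:

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 50, 500
alpha, beta = 3.0, 10.0            # assumed cubic-map parameters
sigma_intra = 0.05                 # fixed intralayer (ring) coupling
noise_mean, noise_std = 0.1, 0.05  # multiplexing-noise statistics (assumed)

def f(x: np.ndarray) -> np.ndarray:
    """A bounded bistable cubic map (one common form in this literature)."""
    return (alpha * x - x**3) * np.exp(-x**2 / beta)

x = rng.uniform(-1, 1, (2, N))     # two layers of N coupled map oscillators
for _ in range(steps):
    # Diffusive coupling to ring neighbours within each layer
    neigh = np.roll(x, 1, axis=1) + np.roll(x, -1, axis=1) - 2 * x
    # Interlayer coupling strength modulated by noise (the multiplexing noise)
    sigma_inter = noise_mean + noise_std * rng.standard_normal(N)
    x = f(x) + sigma_intra * neigh + sigma_inter * (x[::-1] - x)

sync_error = float(np.mean(np.abs(x[0] - x[1])))  # interlayer synchronization error
print(np.isfinite(sync_error))
```

Sweeping `noise_mean` and `noise_std` in a sketch like this is the toy analogue of the paper’s synchronization regions in the “noise spectrum width – noise intensity” plane.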

The default mode network (DMN) is a set of interconnected brain regions known to be most active when humans are awake but not engaged in physical activities, such as relaxing, resting or daydreaming. This brain network has been found to support a variety of mental functions, including introspection, memories of past experiences and the ability to understand others (i.e., social cognitions).

The DMN includes four main brain regions: the medial prefrontal cortex (mPFC), the posterior cingulate cortex (PCC), the angular gyrus and the hippocampus. While several studies have explored the function of this network, its anatomical structure and contribution to information processing are not fully understood.

Researchers at McGill University, Forschungszentrum Jülich and other institutes recently carried out a study aimed at better understanding the anatomy of the DMN, specifically examining the organization of neurons in the tissue of its connected brain regions, which is known as cytoarchitecture. Their findings, published in Nature Neuroscience, offer new indications that the DMN has a widespread influence on the human brain and its associated cognitive (i.e., mental) functions.

Get a Wonderful Person Tee: https://teespring.com/stores/whatdamath
More cool designs are on Amazon: https://amzn.to/3QFIrFX
Alternatively, PayPal donations can be sent here: http://paypal.me/whatdamath

Hello and welcome! My name is Anton and in this video, we will talk about the discovery of the most massive superstructure in the nearby universe — Quipu.
https://arxiv.org/abs/2501.19236
Bohringer et al., Astronomy and Astrophysics, 2025
https://en.wikipedia.org/wiki/Sachs%E2%80%93Wolfe_effect
Similar videos:
https://youtu.be/wp8zHG1g7bc

#quipu #superstructure #cosmos

0:00 Largest superstructure in the universe — Quipu
0:45 Laniakea discovery of 2014
1:25 Shapley concentration
2:35 Cosmological issues: Hubble Tension and S8 tension
3:45 New study mapping galaxies and the discovery
5:15 Additional findings and implications
6:25 What is this though?
7:20 Confirming predictions and how this was found
8:40 What’s next?

Support this channel on Patreon to help me make this a full time job:
https://www.patreon.com/whatdamath

Bitcoin/Ethereum to spare? Donate them here to help this channel grow!
bc1qnkl3nk0zt7w0xzrgur9pnkcduj7a3xxllcn7d4
or ETH: 0x60f088B10b03115405d313f964BeA93eF0Bd3DbF

Space Engine is available for free here: http://spaceengine.org