Human Brain Project (HBP) researchers from Forschungszentrum Jülich and the University of Cologne (Germany) have uncovered how neuron densities are distributed across and within cortical areas in the mammalian brain. They have unveiled a fundamental organizational principle of cortical cytoarchitecture: the ubiquitous lognormal distribution of neuron densities.

Numbers of neurons and their distribution play a crucial role in shaping the brain’s structure and function. Yet, despite the wealth of available cytoarchitectonic data, the statistical distributions of neuron densities remain largely undescribed. The new HBP study, published in Cerebral Cortex, advances our understanding of the organization of mammalian brains.

The team based their investigations on nine publicly available datasets of seven species: mouse, marmoset, macaque, galago, owl monkey, baboon and human. After analyzing the cortical areas of each, they found that neuron densities within these areas follow a consistent pattern—a lognormal distribution. This suggests a fundamental organizational principle underlying the densities of neurons in the brain.
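To see what that claim means in practice: a variable is lognormally distributed when its logarithm is normally distributed. The sketch below uses synthetic numbers (not the study’s data) to fit a lognormal to simulated neuron densities and check the log-transformed values for normality; the parameters are illustrative assumptions only.

```python
# Minimal sketch with synthetic data: a lognormal variable is one whose
# logarithm is normally distributed. Parameter values are assumptions,
# not figures from the Cerebral Cortex paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
log_mean, log_sd = np.log(9e4), 0.3                 # assumed, for illustration
densities = rng.lognormal(mean=log_mean, sigma=log_sd, size=500)  # "neurons/mm^3"

# Fit a lognormal and test the log-transformed densities for normality.
shape, loc, scale = stats.lognorm.fit(densities, floc=0)
stat, p = stats.shapiro(np.log(densities))

print(f"fitted sigma = {shape:.3f}, median density = {scale:.0f} neurons/mm^3")
print(f"Shapiro-Wilk on log(densities): p = {p:.3f} (large p is consistent with lognormality)")
```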

Get ready for a lot of math…!

We have an intuitive sense of some of the big needs in artificial intelligence and machine learning: making sure that systems converge well, that data is organized the right way, and that we understand what these tools are doing, so we can look under the hood.

A lot of us have already heard of the term “curse of dimensionality,” but Tomaso Armando Poggio invokes this frightening trope with a good bit of mathematics attached… (Poggio is the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences, a researcher at the McGovern Institute for Brain Research, and a member of the MIT Computer Science and Artificial Intelligence Laboratory, CSAIL.)
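To make the trope concrete, here is a generic numerical illustration (not a reconstruction of Poggio’s specific argument): as the number of dimensions grows, pairwise distances between random points concentrate, so notions like “nearest neighbor” lose their discriminating power and the data needed to cover the space explodes.

```python
# Generic illustration of the curse of dimensionality: in high dimensions,
# distances between uniformly random points become nearly indistinguishable.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)

for d in (2, 10, 100, 1000):
    x = rng.uniform(size=(500, d))        # 500 random points in the unit cube [0,1]^d
    dists = pdist(x)                      # all pairwise Euclidean distances
    spread = (dists.max() - dists.min()) / dists.mean()
    print(f"d = {d:5d}  mean distance = {dists.mean():7.2f}  relative spread = {spread:.2f}")
```

The relative spread shrinks as d grows, which is one reason naive distance-based learning degrades in high-dimensional spaces.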

Sean Carroll is a theoretical physicist and philosopher who specializes in quantum mechanics, cosmology, and the philosophy of science. He is the Homewood Professor of Natural Philosophy at Johns Hopkins University and an external professor at the Santa Fe Institute. Sean has contributed prolifically to the public understanding of science through a variety of mediums: as an author of several physics books including Something Deeply Hidden and The Biggest Ideas in the Universe, as a public speaker and debater on a wide variety of scientific and philosophical subjects, and also as a host of his podcast Mindscape which covers topics spanning science, society, philosophy, culture, and the arts.


In this episode, we take a deep dive into the Many Worlds (Everettian) Interpretation of quantum mechanics. While there are many philosophical discussions of the Many Worlds Interpretation available, ours marries philosophy with the technical, mathematical details. As a bonus, a whole gamut of topics from philosophy and physics arises, including the nature of reality, emergence, Bohmian mechanics, Bell’s Theorem, and more. We conclude with some analysis of Sean’s speculative work on the concept of emergent spacetime, a viewpoint which naturally arises from Many Worlds.
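For readers who want the technical core in one line, the schematic below gives the standard textbook form of Everettian branching (our summary, not an excerpt from the episode): measurement is ordinary unitary evolution that entangles system, apparatus, and environment, and decoherence keeps the resulting branches from interfering.

```latex
% Standard schematic of Everettian branching (requires amsmath for \xrightarrow).
\[
  \Big(\sum_i c_i \,\lvert s_i\rangle\Big) \otimes \lvert A_0\rangle \otimes \lvert E_0\rangle
  \;\xrightarrow{\ \hat{U}\ }\;
  \sum_i c_i \,\lvert s_i\rangle \otimes \lvert A_i\rangle \otimes \lvert E_i\rangle,
  \qquad
  \langle E_i \vert E_j \rangle \approx \delta_{ij}.
\]
% Each term in the sum is a "branch" or world; the near-orthogonal environment
% states suppress interference between branches, and |c_i|^2 supplies the
% Born-rule weight attached to each branch.
```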

UC San Diego’s Q-MEEN-C is developing brain-like computers by mimicking neurons and synapses in quantum materials. Recent discoveries of non-local interactions represent a critical step toward more efficient AI hardware that could revolutionize artificial intelligence technology.

We often believe that computers are more efficient than humans. After all, computers can solve complex math equations in an instant and recall names that we might forget. However, human brains can process intricate layers of information rapidly, accurately, and with almost no energy input. Recognizing a face after seeing it only once or distinguishing a mountain from an ocean are examples of such tasks. These seemingly simple human functions require considerable processing and energy from computers, and even then, the results may vary in accuracy.

In science, the simplest explanations often hold the most truth, a concept known as “Occam’s Razor.” This principle has shaped scientific thought for centuries, but when dealing with abstract ideas, how do we evaluate them?

In a new paper, philosophers from UC Santa Barbara and UC Irvine discuss how to weigh the complexity of scientific theories by comparing their underlying mathematics. They aim to characterize the amount of structure a theory has using symmetry — or the aspects of an object that remain the same when other changes are made.
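As a toy illustration of that idea (ours, not the paper’s formal account), the snippet below counts the symmetries of a three-element set before and after a linear order is imposed: adding structure shrinks the symmetry group, which is the intuition behind using symmetry as a gauge of how much structure a theory posits.

```python
# Toy version of the "more symmetry, less structure" heuristic: count the
# permutations of a 3-element set that preserve a given relation.
from itertools import permutations

points = (0, 1, 2)
bare_set = set()                              # no relations at all
linear_order = {(0, 1), (0, 2), (1, 2)}       # the order 0 < 1 < 2

def symmetries(relation):
    """Permutations of `points` that map the relation onto itself."""
    def preserves(p):
        return {(p[a], p[b]) for a, b in relation} == relation
    return [p for p in permutations(points) if preserves(p)]

print(len(symmetries(bare_set)))      # 6 -> maximally symmetric, least structure
print(len(symmetries(linear_order)))  # 1 -> only the identity survives
```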

After much discussion, the authors ultimately doubt that symmetry will provide the framework they need. However, they do uncover why it’s such an excellent guide for understanding structure. Their paper appears in the journal Synthese.

A comprehensive new study provides evidence that various personality traits and cognitive abilities are connected. This means that if someone is good at a certain cognitive task, it can give hints about their personality traits, and vice versa.

For example, being skilled in math could indicate having a more open-minded approach to new ideas, but might also be associated with lower levels of politeness. These connections can help us understand why people are different in how they think and act.

The research has been published in the Proceedings of the National Academy of Sciences.

Though almost every cell in your body contains a copy of each of your genes, only a small fraction of these genes will be expressed, or turned on. These activations are controlled by specialized snippets of DNA called enhancers, which act like skillful on-off switches. This selective activation allows cells to adopt specific functions in the body, determining whether they become—for example—heart cells, muscle cells, or brain cells.

However, these enhancers don’t always turn on the right genes at the right time, contributing to the development of genetic diseases like cancer and diabetes. A team of Johns Hopkins biomedical engineers has developed a mathematical model that can predict which enhancers play a role in normal development and disease—an innovation that could someday power the development of enhancer-targeted therapies to treat diseases by turning genes on and off at will. The study results appeared in Nature Genetics.
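The article does not spell out the model’s internals, but a common way to predict regulatory activity from DNA sequence is a classifier over k-mer features. The sketch below is a generic, hypothetical example of that approach with made-up sequences and labels; it is not the model described in the Nature Genetics study.

```python
# Minimal sketch of sequence-based enhancer prediction using k-mer features.
# Sequences, labels, and the choice of classifier are illustrative assumptions.
from itertools import product
from collections import Counter
from sklearn.linear_model import LogisticRegression

K = 3
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

def kmer_features(seq):
    """Normalized k-mer counts as a fixed-length feature vector."""
    counts = Counter(seq[i:i + K] for i in range(len(seq) - K + 1))
    total = max(sum(counts.values()), 1)
    return [counts[k] / total for k in KMERS]

# Hypothetical training data: 1 = active enhancer, 0 = inactive region.
seqs = ["ACGTACGTGGGCCCA", "TTTTAAAATTTTAAA", "GGGCCCACGTACGTA", "AAAATTTTAAAATTT"]
labels = [1, 0, 1, 0]

model = LogisticRegression().fit([kmer_features(s) for s in seqs], labels)
print(model.predict_proba([kmer_features("ACGTGGGCCCACGTA")])[0, 1])  # P(enhancer)
```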

“We’ve known that enhancers control transitions between cell states for a long time, but what is exciting about this work is that mathematical modeling is showing us how they might be controlled,” said study leader Michael Beer, a professor of biomedical engineering and genetic medicine at Johns Hopkins University.