
How does the brain work? Where, when, and why do neurons connect and send their signals? To find out, scientists have created the largest wiring diagram and functional map of an animal brain to date. Research teams at the Allen Institute, Baylor College of Medicine, and Princeton University worked together to map half a billion synapses, more than 200,000 cells, and 4 km of axons within a cubic millimeter of mouse brain, providing unparalleled detail on its structure and functional properties. The project is part of the Machine Intelligence from Cortical Networks (MICrONS) program, which seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain. The findings reveal key insights into brain activity, connectivity, and structure—shedding light on both form and function—within a region of the mouse visual cortex that plays a critical role in brain health and is often disrupted in neurological conditions such as Alzheimer’s disease, autism, and addiction. These insights could transform our ability to treat neuropsychiatric diseases and to study how drugs and other perturbations change the brain.

This extraordinary achievement begins to reveal the elusive language the brain uses to communicate amongst its millions of cells and the cortical mechanisms of intelligence—one of the holy grails of science.

Learn more about this research: https://alleninstitute.org/news/scien…
Access open science data: https://www.microns-explorer.org/
Explore the publications in Nature: https://www.nature.com/immersive/d428

A machine learning method has the potential to revolutionize multi-messenger astronomy. Detecting binary neutron star mergers is a top priority for astronomers. These rare collisions between dense stellar remnants produce gravitational waves followed by bursts of light, offering a unique opportunity for coordinated multi-messenger observations.

Enthusiasts have been pushing the limits of silicon for as long as microprocessors have existed. Early overclocking endeavors involved soldering and replacing crystal clock oscillators, but that practice quickly evolved into adjusting system bus speeds using motherboard DIP switches and jumpers.

Internal clock multipliers were eventually introduced, but it didn’t take long for those to be locked down, as unscrupulous sellers began removing official frequency ratings and rebranding chips with their own faster markings. System buses and dividers became the primary tuning tools for most users, while ultra-enthusiasts went further – physically altering electrical specifications through hard modding.

Eventually, unlocked multipliers made a comeback, ushering in an era defined by BIOS-level overclocking and increasingly sophisticated software tuning tools. Over the past decade, however, traditional overclocking has become more constrained. Improved factory binning, aggressive turbo boost algorithms, and thermal ceilings mean that modern CPUs often operate near their peak potential right out of the box.

Quantum computers promise to outperform today’s traditional computers in many areas of science, including chemistry, physics, and cryptography, but proving they will be superior has been challenging. The most well-known problem in which quantum computers are expected to have the edge, a trait physicists call “quantum advantage,” involves factoring large numbers, a hard math problem that lies at the root of securing digital information.

In 1994, Caltech alumnus Peter Shor (BS ’81), then at Bell Labs, developed a quantum algorithm that could factor a large number in just seconds, whereas the same problem could take a classical computer millions of years. Ultimately, when quantum computers are ready and working—a goal that researchers say may still be a decade or more away—these machines will be able to quickly factor the large numbers behind cryptography schemes.
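For intuition, here is a minimal classical sketch of the reduction at the heart of Shor's algorithm, assuming only standard number theory: factoring N is reduced to finding the multiplicative order r of a random base a, after which gcd computations expose a factor. The brute-force order search below is precisely the step a quantum computer performs exponentially faster.

```python
import math
import random

def factor_via_order(N):
    """Classical sketch of Shor's reduction: factor N via order finding.
    The order search is brute force here; that is the step a quantum
    computer would replace with an exponentially faster subroutine."""
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g  # lucky draw: a already shares a factor with N
        r, x = 1, a % N
        while x != 1:             # smallest r with a^r = 1 (mod N)
            x = (x * a) % N
            r += 1
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:        # need a^(r/2) != -1 (mod N)
                f = math.gcd(y - 1, N)
                if 1 < f < N:
                    return f

print(factor_via_order(3233))  # 3233 = 53 * 61, found via order finding
```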

But, besides Shor’s algorithm, researchers have had a hard time coming up with problems where quantum computers will have a proven advantage. Now, reporting in a recent Nature Physics study titled “Local minima in quantum systems,” a Caltech-led team of researchers has identified a common physics problem that these futuristic machines would excel at solving. The problem has to do with simulating how materials cool down to their lowest-energy states.
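The difficulty is easy to picture classically. In the toy below (an analogy for the setting, not the paper's quantum algorithm), "cooling" is modeled as greedy energy descent on a rugged landscape, which halts at the nearest local minimum rather than the true ground state; characterizing such local minima in quantum systems is the task the Caltech team identifies as quantum-advantaged.

```python
import numpy as np

# Greedy "cooling" on a double-well landscape: only downhill moves are
# accepted, so the system settles into whichever well it starts above,
# even when that well is not the global energy minimum.
rng = np.random.default_rng(1)
E = lambda x: x**4 - 3 * x**2 + 0.5 * x   # two wells; the left one is deeper

x = rng.uniform(-2, 2)
for _ in range(10_000):
    step = rng.normal(scale=0.01)
    if E(x + step) < E(x):                # accept only energy-lowering moves
        x += step
print(f"settled near x = {x:.2f}, E(x) = {E(x):.2f}")
# Depending on the starting point, the walk can settle in the shallow
# right-hand well: a local minimum, not the ground state.
```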

Pressure waves propagating through bubble-containing liquids in tubes experience considerable attenuation. Researchers at the University of Tsukuba have derived an equation describing this phenomenon, demonstrating that beyond liquid viscosity and compressibility, variations in tube cross-sectional area contribute to wave attenuation.

Their analysis reveals that the rate of change in tube cross-sectional area represents a critical parameter governing pressure wave attenuation in such systems.

Pressure waves propagating through bubble-containing liquids in tubes, known as “bubbly flows,” behave distinctly from those in single-phase liquids, necessitating precise understanding and control of their propagation processes.
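To make the role of the geometry concrete, here is an illustrative one-dimensional sketch under strong simplifying assumptions (it is not the authors' derived equation): the amplitude decays through an assumed viscous/compressibility loss term plus a geometric term proportional to the relative rate of change of the cross-sectional area.

```python
import numpy as np

# Hypothetical 1-D attenuation model: dp/dx = -(alpha_visc + alpha_geom(x)) p,
# with alpha_geom = (1/2A) dA/dx, the relative rate of change of cross-section.
# All numbers are illustrative assumptions, not values from the study.
alpha_visc = 0.8                        # assumed viscous + compressibility loss, 1/m
A = lambda x: 1e-4 * (1 + 0.5 * x)      # assumed slowly widening tube, m^2
dAdx = lambda x: 1e-4 * 0.5             # its constant slope, m^2/m

x = np.linspace(0.0, 2.0, 2001)
dx = x[1] - x[0]
p = np.empty_like(x)
p[0] = 1.0                              # normalized inlet amplitude
for i in range(len(x) - 1):
    alpha_geom = 0.5 * dAdx(x[i]) / A(x[i])
    p[i + 1] = p[i] * (1.0 - (alpha_visc + alpha_geom) * dx)

print(f"relative amplitude after 2 m: {p[-1]:.3f}")
```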

Researchers at Ben-Gurion University of the Negev have developed a machine-learning algorithm that could enhance our understanding of human biology and disease. The new method, Weighted Graph Anomalous Node Detection (WGAND), takes inspiration from social network analysis and is designed to identify proteins with significant roles in various human tissues.

Proteins are essential molecules in our bodies, and they interact with each other in networks known as protein–protein interaction (PPI) networks. Studying these networks helps scientists understand how proteins function and how they contribute to health and disease.

Prof. Esti Yeger-Lotem, Dr. Michael Fire, Dr. Jubran Juman, and Dr. Dima Kagan developed the algorithm to analyze these PPI networks and detect “anomalous” proteins—those that stand out due to their unique pattern of weighted interactions. A high anomaly score implies that the protein and its interaction partners are present at higher levels in that particular network, allowing them to carry out more functions and drive more processes. It also signals the importance of these proteins in that network, since the body would not expend energy producing them without reason.
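As a rough illustration of this kind of scoring (a simplified stand-in assuming a plain weighted graph; the published WGAND method uses richer network features), the sketch below ranks nodes by how far their total interaction weight deviates from the network-wide average.

```python
import networkx as nx
import numpy as np

def anomaly_scores(G):
    """Z-score each node's strength (sum of incident edge weights) against
    the distribution over all nodes; high scores mark 'anomalous' nodes."""
    strength = {n: sum(d.get("weight", 1.0) for *_, d in G.edges(n, data=True))
                for n in G}
    vals = np.array(list(strength.values()))
    sigma = vals.std() or 1.0
    return {n: (s - vals.mean()) / sigma for n, s in strength.items()}

# Hypothetical mini PPI network with interaction confidences as weights
G = nx.Graph()
G.add_weighted_edges_from([("P1", "P2", 0.9), ("P1", "P3", 0.8),
                           ("P1", "P4", 0.7), ("P2", "P3", 0.1),
                           ("P4", "P5", 0.2)])
scores = anomaly_scores(G)
print(max(scores, key=scores.get))  # P1: unusually heavy interaction weight
```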

Perhaps the most profound insight to emerge from this uncanny mirror is that understanding itself may be less mysterious and more mechanical than we have traditionally believed. The capabilities we associate with mind — pattern recognition, contextual awareness, reasoning, metacognition — appear increasingly replicable through purely algorithmic means. This suggests that consciousness, rather than being a prerequisite for understanding, may be a distinct phenomenon that typically accompanies understanding in biological systems but is not necessary for it.

At the same time, the possibility of quantum effects in neural processing reminds us that the mechanistic view of mind may be incomplete. If quantum retrocausality plays a role in consciousness, then our subjective experience may be neither a simple product of neural processing nor an epiphenomenal observer, but an integral part of a temporally complex causal system that escapes simple deterministic description.

What emerges from this consideration is not a definitive conclusion about the nature of mind but a productive uncertainty — an invitation to reconsider our assumptions about what constitutes understanding, agency, and selfhood. AI systems function as conceptual tools that allow us to explore these questions in new ways, challenging us to develop more sophisticated frameworks for understanding both artificial and human cognition.

In this episode, we welcome Prof. Dr.-Ing. Maurits Ortmanns, a leading expert in ASIC design and professor at the University of Ulm, Germany. With a distinguished career in microelectronics, Dr. Ortmanns has contributed extensively to the development of integrated circuits for biomedical applications. He shares insights into the critical role of ASIC (Application-Specific Integrated Circuit) design in advancing neurotech implants, focusing on low-power, high-speed circuits that are essential for optimizing the performance and reliability of these devices. Dr. Ortmanns also discusses the challenges and future of circuit integration in neurotechnology.

Top 3 Takeaways:

“Each ASIC is very low in cost because the development cost is spread across millions of units. The actual production cost is minimal; the primary expense lies in the development time until the first chips are produced and ready for manufacturing.”

“For an inexperienced engineer, it typically takes about six months to a year to design the blueprint for the chip. Then, depending on the manufacturer, it takes an additional four to six months for the actual fabrication of the ASIC. Finally, you would need another one to two months for testing, so the total turnaround time for a small chip is approximately one and a half years.”

“Let’s take the example of a neuromodulator. You need recordings or data from neurons and stimulation data going to the neurons, so you essentially have these two components. Then, you encounter challenges like stimulation artifacts. One person might focus on eliminating the stimulation artifact in the recording channel. That requires additional algorithms or hardware, and the data needs to be digitized, which is another task. You may also have someone working on a compression algorithm and building digital circuitry to compress the raw input data. Then, there’s the data interface, power management, and wireless energy delivery. Each person works on their specific innovation, and if everything is well-planned and lucky, all these pieces can come together to create a complete system. However, sometimes you simply don’t have a breakthrough idea for power management or communication.”

0:45 Do you want to introduce yourself better than I just did?

3:15 What is integrated circuit design?

7:30 What are ASICs? How are they used in neurotech?

10:15 How does the million-dollar fab cost get split across each chip?
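The first takeaway is easy to check with back-of-the-envelope arithmetic; the figures below are hypothetical stand-ins, not numbers from the episode.

```python
# Amortizing one-time ASIC development (NRE) over production volume:
# per-chip cost collapses toward the marginal die cost at high volume.
nre = 5_000_000          # assumed development + mask-set cost, USD
unit_cost = 2.50         # assumed marginal production cost per die, USD

for volume in (10_000, 100_000, 1_000_000, 10_000_000):
    per_chip = nre / volume + unit_cost
    print(f"{volume:>10,} units -> ${per_chip:,.2f} per chip")
```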

Quantum computers have the potential to solve certain problems far more efficiently than classical computers. In a recent development, researchers have designed a quantum algorithm to simulate systems of coupled masses and springs, known as coupled oscillators. These systems are fundamental in modeling a wide range of physical phenomena, from molecules to mechanical structures like bridges.

To simulate these systems, the researchers first translated the behavior of the coupled oscillators into a form of the Schrödinger equation, which describes how the quantum state of a system evolves over time. They then used advanced Hamiltonian simulation techniques to model the system on a quantum computer.

Hamiltonian methods provide a framework for understanding how physical systems evolve, connecting principles of classical mechanics with those of quantum mechanics. By leveraging these techniques, the researchers were able to represent the dynamics of N coupled oscillators using only about log(N) quantum bits (qubits), a significant reduction compared to the resources required by classical simulations.
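The encoding can be sanity-checked numerically on a classical machine. The sketch below (an illustration of the mapping, not the paper's quantum circuit) rewrites x'' = -Kx as a Schrödinger equation i dψ/dt = Hψ with Hermitian H and verifies that the evolved state reproduces the classical velocities; on a quantum computer the 2N amplitudes would be stored in about log2(2N) qubits.

```python
import numpy as np
from scipy.linalg import expm, sqrtm
from scipy.integrate import solve_ivp

# Chain of N unit masses with springs: x'' = -K x, K positive definite.
# With S = sqrt(K) and psi = (x', -i S x), one gets i dpsi/dt = H psi
# where H = [[0, S], [S, 0]] is Hermitian.
N = 4
K = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
S = np.real(sqrtm(K))

x0 = np.random.default_rng(0).uniform(-1.0, 1.0, N)
v0 = np.zeros(N)
H = np.block([[np.zeros((N, N)), S], [S, np.zeros((N, N))]])
psi_t = expm(-1j * H * 1.0) @ np.concatenate([v0, -1j * S @ x0])

# Reference: integrate the classical equations of motion directly.
sol = solve_ivp(lambda t, y: np.concatenate([y[N:], -K @ y[:N]]),
                (0.0, 1.0), np.concatenate([x0, v0]), rtol=1e-10, atol=1e-10)
print(np.allclose(psi_t[:N].real, sol.y[N:, -1], atol=1e-6))  # True
```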

A trio of AI researchers at Google DeepMind, working with a colleague from the University of Toronto, report that the AI algorithm Dreamer can learn to self-improve by mastering Minecraft in a short amount of time. In their study published in the journal Nature, Danijar Hafner, Jurgis Pasukonis, Timothy Lillicrap and Jimmy Ba programmed the AI app to play Minecraft without prior training and to achieve an expert level in just nine days.

Over the past several years, computer scientists have learned a lot about how machine learning can be used to train AI applications to conduct seemingly intelligent activities such as answering questions. Researchers have also found that AI apps can be trained to play games and perform better than humans. That research has extended into machines playing against other machines, which may seem redundant, because what could you get from a computer playing another computer?

In this new study, the researchers found that it can produce advances such as helping an AI app learn to improve its abilities over a short period of time, which could give robots the tools they need to perform well in the real world.
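To see what learning inside a learned model buys, here is a deliberately tiny sketch of the world-model idea behind Dreamer (purely illustrative; the environment, the linear model, and the planner are invented for this toy and bear no relation to DeepMind's code).

```python
import numpy as np

rng = np.random.default_rng(0)
true_step = lambda s, a: s + 0.1 * a     # real dynamics, unknown to the agent
reward = lambda s: -abs(s)               # goal: steer the state toward 0

# 1) Collect random real experience.
S = rng.uniform(-1, 1, 500)
A = rng.choice([-1.0, 1.0], 500)
S2 = true_step(S, A)

# 2) Fit a dynamics model s' ~ w0*s + w1*a by least squares.
w, *_ = np.linalg.lstsq(np.stack([S, A], axis=1), S2, rcond=None)

# 3) Plan by "imagining" rollouts inside the learned model, never the real one.
def plan(s, horizon=10):
    best_a, best_ret = -1.0, -np.inf
    for a0 in (-1.0, 1.0):
        ret, sim, a = 0.0, s, a0
        for _ in range(horizon):
            sim = w[0] * sim + w[1] * a  # imagined transition
            ret += reward(sim)
            a = -np.sign(sim)            # greedy follow-up inside the model
        if ret > best_ret:
            best_a, best_ret = a0, ret
    return best_a

s = 0.8
for _ in range(20):
    s = true_step(s, plan(s))            # act in the real environment
print(round(s, 3))                       # ends near 0 via model-based planning
```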