
Quantum computers differ fundamentally from classical ones. Instead of using bits (0s and 1s), they employ “qubits,” which can exist in multiple states simultaneously due to quantum phenomena like superposition and entanglement.

For a quantum computer to simulate dynamic processes or process data, among other essential tasks, it must translate complex input data into “quantum data” that it can understand. This process is known as quantum compilation.

Essentially, quantum compilation “programs” the quantum computer by converting a particular goal into an executable sequence. Just as a GPS app converts your desired destination into a sequence of actionable steps you can follow, quantum compilation translates a high-level goal into a precise sequence of quantum operations that the quantum computer can execute.
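As a concrete illustration, here is a minimal sketch using Qiskit’s transpile function, one real-world quantum compiler; the basis gate set below is an assumption chosen for illustration, since actual hardware backends define their own:

```python
# A minimal sketch of quantum compilation with Qiskit's transpiler.
from qiskit import QuantumCircuit, transpile

# High-level goal: prepare an entangled Bell pair.
circuit = QuantumCircuit(2)
circuit.h(0)
circuit.cx(0, 1)

# "Compile" the abstract circuit into a sequence of primitive
# operations drawn from a hardware-native gate set (assumed here).
compiled = transpile(
    circuit,
    basis_gates=["rz", "sx", "x", "cx"],
    optimization_level=2,
)
print(compiled)  # the executable gate sequence
```

The Hadamard gate, for instance, is not native to most superconducting devices, so the compiler rewrites it as a sequence of rz and sx rotations the hardware can actually perform.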

For centuries, gravity has been one of the most captivating and puzzling forces in the universe. Thanks to the groundbreaking work of Isaac Newton and Albert Einstein, we have a robust understanding of how gravity governs the behavior of planets, stars, and even galaxies. Yet, when we look at extreme scenarios, such as the intense gravitational fields near black holes or the mysterious quantum world, our understanding starts to break down. New research and theories, however, suggest that the key to solving these mysteries may finally be within reach.

In our daily lives, gravity is a constant presence. It’s what keeps us grounded to the Earth, dictates the orbits of planets, and ensures that satellites stay in orbit around our planet. Thanks to Einstein’s general theory of relativity, scientists have been able to make highly accurate predictions about the movement of celestial bodies, calculate tides, and even send probes to the farthest reaches of the solar system.

Yet, when gravity’s effects become more extreme—such as inside black holes or during the birth of the universe—it becomes much more difficult to model. Similarly, when we turn our attention to the quantum realm of subatomic particles, Einstein’s theory breaks down. To understand phenomena like the Big Bang or the inner workings of black holes, physicists have long known that we need a new, unified theory of gravity.

The semiconductor industry’s long-held imperative—Moore’s Law, which dictates that transistor densities on a chip should double roughly every two years—is getting more and more difficult to maintain. The ability to shrink down transistors, and the interconnects between them, is hitting some basic physical limitations. In particular, when copper interconnects are scaled down, their resistivity skyrockets, which decreases how much information they can carry and increases their energy draw.
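To see why shrinking hurts, here is a back-of-the-envelope sketch using a crude surface-scattering correction; the numbers are illustrative, not measured data. The key fact is that once a wire’s width approaches copper’s electron mean free path of roughly 39 nm, its effective resistivity climbs well above the bulk value, so resistance grows faster than geometry alone would predict:

```python
# Toy model of copper interconnect resistance vs. linewidth.
# Effective resistivity grows as the wire width approaches the
# electron mean free path. Numbers are illustrative only.
RHO_BULK = 1.7e-8   # bulk copper resistivity, ohm*m
MFP = 39e-9         # electron mean free path in copper, ~39 nm

def effective_resistivity(width_m: float) -> float:
    """Crude size-effect correction: rho grows ~ (1 + mfp/width)."""
    return RHO_BULK * (1.0 + MFP / width_m)

def wire_resistance(width_m: float, length_m: float = 1e-6) -> float:
    """Resistance R = rho * L / A for a square cross-section wire."""
    area = width_m * width_m
    return effective_resistivity(width_m) * length_m / area

for width_nm in (100, 40, 20, 10):
    r = wire_resistance(width_nm * 1e-9)
    print(f"{width_nm:>4} nm wide, 1 um long: {r:,.1f} ohms")
```

Halving the width quadruples the resistance from geometry alone, and the size-effect term multiplies the damage on top of that.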

The industry has been looking for alternative interconnect materials to prolong the march of Moore’s Law a bit longer. Graphene is a very attractive option in many ways: The atomically thin carbon material offers excellent electrical and thermal conductivity, and is stronger than diamond.

However, researchers have struggled to incorporate graphene into mainstream computing applications for two main reasons. First, depositing graphene requires high temperatures that are incompatible with traditional CMOS manufacturing. And second, the charge carrier density of undoped, macroscopic graphene sheets is relatively low.


Making smaller transistors, and the interconnects between them, is getting near impossible. Copper interconnects become more resistive as they are scaled down, making them slower and less efficient at carrying information. Startup Destination 2D thinks graphene is the solution: it has developed a CMOS-compatible technique for growing graphene that promises a 100x improvement in current density over copper.

Researchers have developed a device that can simultaneously measure six markers of brain health. The sensor, which is inserted through the skull into the brain, can pull off this feat thanks to an artificial intelligence (AI) system that pieces apart the six signals in real time.

Being able to continuously monitor biomarkers in patients with traumatic brain injury could improve outcomes by catching swelling or bleeding early enough for doctors to intervene. But most existing devices measure just one marker at a time. They also tend to be made with metal, so they can’t easily be used in combination with magnetic resonance imaging.
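The article does not spell out how the AI pieces the signals apart, so as a loose analogy only, here is a sketch using independent component analysis (scikit-learn’s FastICA) to unmix several synthetic signals from combined readings; the signals and the mixing below are invented for illustration:

```python
# Illustrative sketch: unmixing overlapping signals from combined
# measurements, in the spirit of recovering multiple biomarker
# readings from one sensor stream. FastICA is an assumption for
# illustration; the device's actual AI is not described in detail.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)

# Three synthetic "biomarker" source signals (stand-ins).
sources = np.column_stack([
    np.sin(2 * t),                  # slow oscillation
    np.sign(np.sin(7 * t)),         # square wave
    rng.normal(size=t.size) * 0.5,  # noisy component
])

# The sensor observes mixtures of the underlying sources.
mixing = rng.normal(size=(3, 3))
observed = sources @ mixing.T

# Recover statistically independent components from the mixtures.
ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(observed)
print(recovered.shape)  # (2000, 3): one unmixed trace per component
```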


Simultaneous access to these measurements could improve outcomes for brain injuries.

Black holes have long fascinated scientists, known for their ability to trap anything that crosses their event horizon. But what if there were a counterpart to black holes? Enter the white hole—a theoretical singularity where nothing can enter, but energy and matter are expelled with immense force.

First proposed in the 1970s, white holes are essentially black holes in reverse. They rely on the same equations of general relativity but with time flowing in the opposite direction. While a black hole pulls matter in and lets nothing escape, a white hole would repel matter, releasing high-energy radiation and light.
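In equation form (a standard textbook statement, included here for illustration): the Schwarzschild geometry describing a non-rotating black hole depends on time only through dt², so reversing time leaves it unchanged, and the same solution read with time reversed is the white hole.

```latex
% Schwarzschild line element in units with G = c = 1.
% The metric depends on dt only through dt^2, so the substitution
% t -> -t leaves it invariant: the same geometry describes a black
% hole and, read with time reversed, a white hole.
ds^2 = -\left(1 - \frac{2M}{r}\right) dt^2
       + \left(1 - \frac{2M}{r}\right)^{-1} dr^2
       + r^2 \left( d\theta^2 + \sin^2\theta \, d\varphi^2 \right)
```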

Despite their intriguing properties, white holes face significant scientific challenges. The laws of thermodynamics, particularly entropy, make it improbable for matter to move backward in time, as white holes would require. Additionally, introducing a singularity into the Universe without a preceding collapse defies current understanding of cosmic evolution.

Originally published on Towards AI.

AI hallucinations are a strange and sometimes worrying phenomenon. They happen when an AI, like ChatGPT, generates responses that sound real but are actually wrong or misleading. This issue is especially common in large language models (LLMs), the neural networks that drive these AI tools. They produce sentences that flow well and seem human, but without truly “understanding” the information they’re presenting. So, sometimes, they drift into fiction. For people or companies who rely on AI for correct information, these hallucinations can be a big problem — they break trust and sometimes lead to serious mistakes.

So, why do these models, which seem so advanced, get things so wrong? The reason isn’t only about bad data or training limitations; it goes deeper, into the way these systems are built. AI models operate on probabilities, not concrete understanding, so they occasionally guess — and guess wrong. Interestingly, there’s a historical parallel that helps explain this limitation. Back in 1931, a mathematician named Kurt Gödel made a groundbreaking discovery. He showed that every consistent mathematical system has boundaries — some truths can’t be proven within that system. His findings revealed that even the most rigorous systems have limits, things they just can’t handle.
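To make the “operate on probabilities” point concrete, here is a toy sketch of next-token selection; the candidate words and scores are invented for illustration, and a real LLM does the same thing over a vocabulary of tens of thousands of tokens:

```python
# Toy illustration of next-token sampling: a language model assigns
# a probability to each candidate continuation and samples one.
# Nothing here checks whether the continuation is *true*, which is
# the root of hallucination. Vocabulary and scores are invented.
import math
import random

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical model scores for continuations of
# "The capital of Australia is ..."
candidates = ["Canberra", "Sydney", "Melbourne"]
scores = [2.1, 1.8, 0.9]  # plausible-sounding, not fact-checked

probs = softmax(scores)
choice = random.choices(candidates, weights=probs, k=1)[0]

for token, p in zip(candidates, probs):
    print(f"{token:<10} p = {p:.2f}")
print("sampled:", choice)  # sometimes the fluent-but-wrong answer
```

With these invented scores the wrong answers still carry roughly half the probability mass, so a fluent falsehood gets sampled a meaningful fraction of the time, and no step in the pipeline ever consults a fact.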

Researchers have developed a new, fast, and rewritable method for DNA computing that promises smaller, more powerful computers.

This method mimics the sequential and simultaneous gene expression in living organisms and incorporates programmable DNA circuits with logic gates. The improved process places DNA on a solid glass surface, enhancing efficiency and reducing the need for manual transfers, culminating in a 90-minute reaction time in a single tube.
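To give a rough sense of what “programmable DNA circuits with logic gates” means, here is a toy simulation in ordinary code; in the actual chemistry, the presence of a DNA strand plays the role of a logical 1 and the gates are implemented as reactions, and the specific circuit below is invented for illustration, not taken from the study:

```python
# Toy model of a DNA logic circuit. Real DNA computing encodes these
# gates chemically (e.g., via strand displacement), where a strand's
# presence represents logical 1. This wiring is an invented example.
def AND(a: bool, b: bool) -> bool:
    # Output strand released only when both input strands are present.
    return a and b

def OR(a: bool, b: bool) -> bool:
    # Either input strand triggers release of the output strand.
    return a or b

def NOT(a: bool) -> bool:
    # Inhibitor strand suppresses the output when the input is present.
    return not a

# A small two-level circuit: out = (x AND y) OR (NOT z)
def circuit(x: bool, y: bool, z: bool) -> bool:
    return OR(AND(x, y), NOT(z))

for x in (False, True):
    for y in (False, True):
        for z in (False, True):
            print(f"x={x:d} y={y:d} z={z:d} -> {circuit(x, y, z):d}")
```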

Advancements in DNA-Based Computation.