
APLab: On average, a human being is capable of reading between 200 and 300 words per minute (wpm), while speed readers can achieve 400–700 wpm or higher.

This pales in comparison to ChatGPT, which can effectively read and analyze tens of thousands of words per second, since it processes text computationally rather than linearly.

What if there were an invention, or at least the concept of one, that could enhance a human being's capacity to read just as quickly as ChatGPT? I tasked ChatGPT with outlining a step-by-step process to create that invention:

Here’s a concrete, neuroscience-grounded invention plan to push human reading toward “machine-speed”—while keeping comprehension and recall intact.

## 0) Core idea (one sentence)

Exploit the brain’s natural reading pipeline—VWFA → Wernicke (lexico-semantic) ↔ Broca (phonological sequencing) with eye-movement–driven coarse-to-fine vision—by timing text delivery to your saccade/fixation cycle, suppressing unnecessary subvocalization, and entraining semantic parsing rhythms. ([PMC][1])
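
As a concrete (and purely illustrative) stand-in for the delivery-timing idea, here is a minimal rapid-serial-visual-presentation (RSVP) pacer in Python: it budgets each word's on-screen time from a target wpm, stretches long words, and pauses at clause boundaries, roughly mimicking fixation durations. The wpm default, per-character penalty, and pause multiplier are invented parameters, not values from the plan or the cited literature.

```python
import re
import time

def rsvp_schedule(text, wpm=600, char_penalty=0.015, pause_factor=2.0):
    """Yield (word, display_seconds) pairs for RSVP-style presentation.

    wpm          -- target words per minute (illustrative default)
    char_penalty -- extra seconds per character beyond 5 (longer words get longer dwell times)
    pause_factor -- multiplier at clause/sentence boundaries, mimicking natural fixation pauses
    """
    base = 60.0 / wpm
    for word in text.split():
        dwell = base + char_penalty * max(0, len(word) - 5)
        if re.search(r"[.,;:!?]$", word):     # end of clause or sentence
            dwell *= pause_factor
        yield word, dwell

def present(text, **kwargs):
    """Print one word at a time at the scheduled rate (console stand-in for a display)."""
    for word, dwell in rsvp_schedule(text, **kwargs):
        print(f"\r{word:<20}", end="", flush=True)
        time.sleep(dwell)
    print()

if __name__ == "__main__":
    present("Exploit the brain's natural reading pipeline by timing text delivery.", wpm=450)
```

An eye-tracker-driven version would replace this fixed schedule with timings derived from the reader's measured saccade/fixation cycle, which is the point of the hardware described next.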

## 1) Hardware & sensing

New AI framework can uncover space physics equations in raw data

Artificial intelligence (AI) systems, particularly artificial neural networks, have proved to be highly promising tools for uncovering patterns in large amounts of data that would otherwise be difficult to detect. Over the past decade, AI tools have been applied in a wide range of settings and fields.

Among their many possible applications, AI systems could be used to discover physical relationships and the symbolic expressions (i.e., mathematical formulas) that describe these relationships.

To uncover these formulas, physicists currently need to extensively analyze raw data by hand, so automating this process could be highly advantageous.
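
In broad strokes (and independent of the specific framework the article describes), this kind of equation discovery is often framed as symbolic regression: propose candidate terms, fit them to the data, and keep only the ones that matter. The sketch below shows that idea in its simplest linear-library form on invented toy data; the candidate terms, threshold, and variable names are all assumptions for illustration, not the authors' method.

```python
import numpy as np

# Toy data: pretend measurements obey F = q*E + q*v*B for scalar stand-ins (hypothetical).
rng = np.random.default_rng(0)
q, E, v, B = rng.normal(size=(4, 200))
F = q * E + q * v * B + 0.01 * rng.normal(size=200)   # noisy "observations"

# Candidate term library: an assumed, hand-chosen set of symbolic building blocks.
library = {
    "q*E": q * E,
    "q*v*B": q * v * B,
    "E*B": E * B,
    "v": v,
    "q**2": q ** 2,
}
names = list(library)
X = np.column_stack([library[n] for n in names])

# Least-squares fit, then prune small coefficients to leave a compact symbolic expression.
coef, *_ = np.linalg.lstsq(X, F, rcond=None)
keep = np.abs(coef) > 0.1
expression = " + ".join(f"{c:.2f}*{n}" for c, n in zip(coef[keep], np.array(names)[keep]))
print("Recovered formula: F ≈", expression)
```

Real systems search over far richer expression spaces (e.g., with genetic programming or neural networks), but the fit-then-prune loop above is the core of the idea.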

Physicists Take the Imaginary Numbers Out of Quantum Mechanics

A century ago, the strange behavior of atoms and elementary particles led physicists to formulate a new theory of nature. That theory, quantum mechanics, found immediate success, proving its worth with accurate calculations of hydrogen’s emission and absorption of light. There was, however, a snag. The central equation of quantum mechanics featured the imaginary number i, the square root of −1.

Physicists knew i was a mathematical fiction. Real physical quantities like mass and momentum never yield a negative amount when squared. Yet this unreal number, which behaves as i² = −1, seemed to sit at the heart of the quantum world.

After deriving the i-riddled equation — essentially the law of motion for quantum entities — Erwin Schrödinger expressed the hope that it would be replaced by an entirely real version. (“There is undoubtedly a certain crudeness at the moment” in the equation’s form, he wrote in 1926.) Schrödinger’s distaste notwithstanding, i stuck around, and new generations of physicists took up his equation without much concern.
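
For reference, the equation in question is the time-dependent Schrödinger equation, with the imaginary unit i sitting out front. Writing the wavefunction as real and imaginary parts, ψ = u + iv, gives the standard textbook way a single complex equation becomes a coupled pair of real ones (assuming a Hamiltonian with real coefficients); this is the conventional splitting, not the specific real-valued reformulation the article goes on to discuss.

```latex
% Time-dependent Schrödinger equation, with the imaginary unit i explicit:
\[
  i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi
\]
% Splitting \psi = u + i v into real functions u, v (for a real-coefficient \hat{H},
% e.g. -\frac{\hbar^2}{2m}\nabla^2 + V) gives two coupled, purely real equations:
\[
  \hbar\,\frac{\partial u}{\partial t} = \hat{H}\,v,
  \qquad
  \hbar\,\frac{\partial v}{\partial t} = -\hat{H}\,u .
\]
```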

Introducing Nested Learning: A new ML paradigm for continual learning

The last decade has seen incredible progress in machine learning (ML), primarily driven by powerful neural network architectures and the algorithms used to train them. However, despite the success of large language models (LLMs), a few fundamental challenges persist, especially around continual learning, the ability for a model to actively acquire new knowledge and skills over time without forgetting old ones.

When it comes to continual learning and self-improvement, the human brain is the gold standard. It adapts through neuroplasticity — the remarkable capacity to change its structure in response to new experiences, memories, and learning. Without this ability, a person is limited to immediate context (like anterograde amnesia). We see a similar limitation in current LLMs: their knowledge is confined to either the immediate context of their input window or the static information that they learn during pre-training.

The simple approach, continually updating a model’s parameters with new data, often leads to “catastrophic forgetting” (CF), where learning new tasks sacrifices proficiency on old tasks. Researchers traditionally combat CF through architectural tweaks or better optimization rules. However, for too long, we have treated the model’s architecture (the network structure) and the optimization algorithm (the training rule) as two separate things, which prevents us from achieving a truly unified, efficient learning system.
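
To make catastrophic forgetting concrete, here is a small self-contained sketch (plain NumPy, not Nested Learning or any Google code): a linear model trained sequentially on two conflicting regression tasks with ordinary gradient descent loses its fit to the first task, while an optional quadratic penalty that anchors the parameters near their task-1 values (a heavily simplified, EWC-style regularizer) limits the damage. All data, dimensions, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_task(true_w):
    """Generate a noisy linear-regression task with the given ground-truth weights."""
    X = rng.normal(size=(500, 2))
    y = X @ true_w + 0.05 * rng.normal(size=500)
    return X, y

def train(w, X, y, anchor=None, lam=0.0, steps=500, lr=0.05):
    """Gradient descent on squared error, optionally penalizing drift from `anchor`."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        if anchor is not None:
            grad += 2 * lam * (w - anchor)     # simplified EWC-style quadratic anchor
        w = w - lr * grad
    return w

def loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

X1, y1 = make_task(np.array([2.0, -1.0]))   # task 1
X2, y2 = make_task(np.array([-1.0, 3.0]))   # task 2 (conflicting solution)

w1 = train(np.zeros(2), X1, y1)

w_naive = train(w1, X2, y2)                          # plain sequential update
w_anchored = train(w1, X2, y2, anchor=w1, lam=5.0)   # anchored update

print("task-1 loss after task-1 training :", round(loss(w1, X1, y1), 4))
print("task-1 loss after naive task-2    :", round(loss(w_naive, X1, y1), 4))   # forgetting
print("task-1 loss after anchored task-2 :", round(loss(w_anchored, X1, y1), 4))
```

The naive run shows task-1 error blowing up after task-2 training; the anchored run trades some task-2 fit to retain task-1 performance, which is the tension continual-learning methods try to resolve.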

Coordinating health AI to prevent defensive escalation

Artificial intelligence (AI) systems that can analyse medical images, records, and claims are becoming accessible to everyone. Although these systems outperform physicians at specific tasks, such as detecting cancer on CT scans, they are still imperfect. But as AI performance progresses from occasionally correct to reliably superior, there will be increasing pressure to conform to algorithmic outputs.

Physicists Just Ruled Out The Universe Being a Simulation

A question that has vexed physicists for the past century may finally have a solution – but perhaps not the one everyone was hoping for.

In a new, detailed breakdown of current theory, a team of physicists led by Mir Faizal of the University of British Columbia has shown that there is no universal “Theory of Everything” that neatly reconciles general relativity with quantum mechanics – at least, not an algorithmic one.

A natural consequence of this is that the Universe can’t be a simulation, since any such simulations would have to operate algorithmically.

Startup provides a nontechnical gateway to coding on quantum computers

Quantum computers have the potential to model new molecules and weather patterns better than any computer today. They may also one day accelerate artificial intelligence algorithms at a much lower energy footprint. But anyone interested in using quantum computers faces a steep learning curve that starts with getting access to quantum devices and then figuring out one of the many quantum software programs on the market.

Now qBraid, founded by Kanav Setia and Jason Necaise ‘20, is providing a gateway to quantum computing with a platform that gives users access to the leading quantum hardware and software. Users can log on to qBraid’s cloud-based interface and connect with quantum devices and other computing resources from leading companies like Nvidia, Microsoft, and IBM. In a few clicks, they can start coding or deploy cutting-edge software that works across devices.

“The mission is to take you from not knowing anything about quantum computing to running your first program on these amazing machines in less than 10 minutes,” Setia says. “We’re a one-stop platform that gives access to everything the quantum ecosystem has to offer. Our goal is to enable anyone—whether they’re enterprise customers, academics, or individual users—to build and ultimately deploy applications.”
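
As an illustration of the kind of "first program" such platforms aim to get newcomers running, here is a from-scratch Bell-state simulation in plain NumPy. It deliberately avoids any particular vendor SDK, since qBraid's own API is not described here; it only shows the two-gate circuit that is the usual starting point.

```python
import numpy as np

# Gates as matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                        # control = qubit 0, target = qubit 1

# Start in |00>, apply H to qubit 0, then CNOT: the canonical Bell state.
state = np.zeros(4)
state[0] = 1.0
state = np.kron(H, I2) @ state
state = CNOT @ state

# Measurement probabilities: 50% |00>, 50% |11>, nothing else.
probs = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probs):
    print(f"P(|{basis}>) = {p:.2f}")
```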

Scientist Solves 100-Year-Old Physics Puzzle To Track Airborne Killers

Researchers at the University of Warwick have created a straightforward new way to predict how irregularly shaped nanoparticles, a harmful type of airborne pollutant, move through the air.

Each day, people inhale countless microscopic particles such as soot, dust, pollen, microplastics, viruses, and engineered nanoparticles. Many of these particles are so small that they can reach deep into the lungs and even pass into the bloodstream, where they may contribute to serious health problems including heart disease, stroke, and cancer.

While most airborne particles have uneven shapes, existing mathematical models often treat them as perfect spheres because that makes the equations easier to handle. This simplification limits scientists’ ability to accurately describe or track how real, non-spherical particles move, especially those that are more dangerous.
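
To see why the spherical idealization is so convenient, here is the classical spherical-particle calculation (Stokes settling with the Cunningham slip correction), with a crude dynamic shape factor standing in for non-sphericity. The particle properties and the shape-factor value are illustrative assumptions, and this is the textbook model, not the Warwick team's new approach.

```python
import math

def settling_velocity(d_p, rho_p, shape_factor=1.0,
                      rho_air=1.2, mu=1.81e-5, mfp=68e-9, g=9.81):
    """Terminal settling velocity of a small airborne particle (Stokes regime).

    d_p          -- particle diameter in metres
    rho_p        -- particle density in kg/m^3
    shape_factor -- dynamic shape factor chi (1.0 = sphere; >1 for irregular shapes,
                    used here only as an illustrative stand-in for non-sphericity)
    mfp          -- mean free path of air (~68 nm at room conditions)
    """
    # Cunningham slip correction: matters when d_p is comparable to the mean free path.
    kn = 2 * mfp / d_p
    cc = 1 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))
    # Stokes settling velocity, divided by the shape factor for irregular particles.
    return (rho_p - rho_air) * g * d_p**2 * cc / (18 * mu * shape_factor)

# 100 nm soot-like particle (invented properties): sphere vs a crude "irregular" correction.
for chi in (1.0, 1.4):
    v = settling_velocity(100e-9, rho_p=1800, shape_factor=chi)
    print(f"shape factor {chi}: settling velocity ≈ {v:.2e} m/s")
```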

Rare side effects of antipsychotic medications provide new evidence for safer global prescribing

Patients with severe mental illnesses, such as schizophrenia and bipolar disorder, often require long-term use of antipsychotic medications. Some of these drugs, however, can pose potential risks, such as elevated prolactin levels and compromised immune function. Previous studies have relied mostly on small or single-center data, making it difficult to accurately assess the true incidence of rare adverse effects.

Researchers from the LKS Faculty of Medicine at the University of Hong Kong (HKUMed), through multidisciplinary collaboration and rigorous epidemiological methods, leveraged territory-wide data from the Hospital Authority to conduct two internationally impactful studies. The findings were published in the journals World Psychiatry and The Lancet Psychiatry. These discoveries provide solid evidence for drug regulation and establish Hong Kong as a global leader in big data research on psychiatric safety.

Functionally dominant hotspot mutations of mitochondrial ribosomal RNA genes in cancer

To study selection for somatic single nucleotide variants (SNVs) in tumor mtDNA, we identified somatic mtDNA variants across primary tumors from the GEL cohort (n = 14,106). The sheer magnitude of the sample size in this dataset, in conjunction with the high coverage depth of mtDNA reads (mean = 15,919×), enabled high-confidence identification of mtDNA variants down to tumor heteroplasmies of 5%. In total, we identified 18,104 SNVs and 2,222 indels (Supplementary Table 1), consistent with previously reported estimates of approximately one somatic mutation in every two tumors (refs. 1–3). The identified mutations exhibited a strand-specific mutation signature, with a predominant occurrence of C>T mutations on the heavy strand and T>C on the light strand in the non-control region that was reversed in the control region (ref. 2) (Extended Data Fig. 1a, b). These mutations occur largely independently of known nuclear driver mutations, with the exception of a co-occurrence of TP53 mutation and mtDNA mutations in breast cancer (Q = 0.031, odds ratio (OR) = 1.43, chi-squared test) (Extended Data Fig. 2a and Supplementary Table 4).

Although the landscape of hotspot mutations in nuclear-DNA-encoded genes is relatively well described, a lack of statistical power has impeded an analogous, comprehensive analysis in mtDNA (refs. 16,17). To address this, we applied a hotspot detection algorithm that identified mtDNA loci demonstrating a mutation burden in excess of the expected background mutational processes in mtDNA (Methods). In total, we recovered 138 unique statistically significant SNV hotspots (Q < 0.05) across 21 tumor lineages (Fig. 1a, b and Supplementary Table 2) and seven indel hotspots occurring at homopolymeric sites in complex I genes, as previously described by our group (Extended Data Fig. 2b and Supplementary Table 3). SNV hotspots affected diverse genetic elements, including protein-coding genes (n = 96 hotspots, 12 of 13 distinct genes), tRNA genes (n = 8 hotspots, 6 of 22 distinct genes) and rRNA genes (n = 34 hotspots, 2 of 2 genes) (Fig. 1b, c, e).
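
The excerpt reports the hotspot calls but not the algorithm itself. As a generic illustration of the underlying statistical question (is the mutation count at a locus higher than a background per-locus rate would predict?), the sketch below runs a one-sided binomial test per locus with Benjamini–Hochberg correction on invented counts; it is not the authors' hotspot detection method, which models mtDNA-specific background mutational processes.

```python
from scipy.stats import binomtest

def hotspot_pvalues(counts, n_samples, background_rate):
    """One-sided binomial test per locus: observed mutation count vs background expectation."""
    return [binomtest(k, n_samples, background_rate, alternative="greater").pvalue
            for k in counts]

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean 'significant' flag per p-value (BH false-discovery-rate control)."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    max_k = -1
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            max_k = rank
    significant = [False] * m
    for rank, i in enumerate(order, start=1):
        significant[i] = rank <= max_k
    return significant

# Invented example: 6 loci, 14,106 tumors (cohort size from the text),
# and an assumed background per-locus mutation rate of 1e-4.
counts = [2, 3, 1, 25, 4, 40]
pvals = hotspot_pvalues(counts, n_samples=14106, background_rate=1e-4)
flags = benjamini_hochberg(pvals, alpha=0.05)
for locus, (k, p, sig) in enumerate(zip(counts, pvals, flags)):
    print(f"locus {locus}: count={k:3d}  p={p:.2e}  hotspot={sig}")
```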
