
Topology helps build more robust photonic networks

Penn-led researchers have shown for the first time that multiple, information-carrying light signals can be safely guided through chip-based, reconfigurable networks using topology, the esoteric branch of mathematics that says donuts and mugs are identical. Because topological properties remain stable even when objects are deformed—hence the field equating mugs and donuts, since both have one opening—the advance could help make light-based technologies for computing and communications more powerful and reliable.

“We already knew how to guide light using topology,” says Liang Feng, Professor in Materials Science and Engineering (MSE) with a secondary appointment in Electrical and Systems Engineering (ESE) within Penn Engineering and senior author of a study in Nature Physics describing the result. “But we had never been able to guide multiple, concurrent signals before.”

That opens the door to building networks of chips that communicate using light while taking advantage of the robustness topology provides. “Signals guided by these principles can be extremely reliable,” says Feng. “It’s like building a highway for light where even large potholes have no effect on traffic—it’s as if the defects simply aren’t there.”

Pareto optimality reveals an atlas of cellular archetypes

This pattern is the signature of Pareto optimality, a mathematical concept describing how competing objectives create a “frontier” of optimal solutions. Just as you can’t make a car both maximally fast and maximally fuel-efficient without compromise, cells can’t simultaneously optimize all biological functions. A cell might specialize in energy production, defense, or growth—but rarely all three equally.


We hypothesized that the phenotypic variation within cell types is explained by multiobjective optimization and used Tabula Sapiens to test this hypothesis. The Tabula Sapiens Atlas v1 is a single-cell RNA sequencing dataset containing 456,101 high-quality single-cell transcriptomes processed via droplet microfluidic emulsion, covering 58,870 genes across 174 cell types, 25 tissues, and 15 donors (16). We applied quality control filters to remove outlier cells on several metrics, yielding 309,193 cells across 173 cell types, 24 tissues, and 14 donors (SI Appendix, Fig. S1 and Table S1). Cell type abundance filters left 110 cell types across the same number of tissues and donors, yielding 440 distinct donor-tissue-cell type strata for analysis (15, 17).
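As a minimal sketch of this kind of quality-control step (the thresholds and metric names below are illustrative assumptions, not the study's actual filters), outlier cells can be dropped by combining per-cell metric cutoffs:

```python
import pandas as pd

# Hypothetical per-cell QC metrics; thresholds are illustrative only.
cells = pd.DataFrame({
    "n_genes": [200, 4500, 50, 3000],        # genes detected per cell
    "total_counts": [1000, 30000, 150, 12000],  # total UMIs per cell
    "pct_mito": [2.0, 4.5, 35.0, 8.0],       # % mitochondrial reads
})

# Keep cells that pass every filter; high pct_mito often flags
# stressed or dying cells, very low n_genes flags empty droplets.
keep = (
    cells["n_genes"].between(300, 6000)
    & (cells["total_counts"] >= 500)
    & (cells["pct_mito"] < 20)
)
filtered = cells[keep]
```

Each filter is applied per cell, and only cells passing all of them enter the downstream Pareto analysis.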

The only assumption we make in this analysis is that fitness is an increasing function of performance (14). Then, if there is a trade-off in performing multiple tasks, optimal phenotypes (i.e., those that maximize fitness) must lie in a region described by convex combinations of points that each maximize a single task’s performance (14). This region is called the Pareto front. Any pruning mechanism that removes nonoptimal phenotypes would restrict observed phenotypes to the Pareto front; pruning is a pervasive strategy across biology, and there could be a host of pruning mechanisms in multicellular organisms.
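The Pareto front can be made concrete with a small sketch (not the study's actual method): given hypothetical scores for each cell on two competing tasks, the front is the set of non-dominated phenotypes, and the task specialists sit at its vertices.

```python
import numpy as np

# Hypothetical two-task performance scores for 200 cells
# (e.g. energy production vs. defense); higher is better.
rng = np.random.default_rng(0)
perf = rng.random((200, 2))

def pareto_front(points):
    """Boolean mask of non-dominated points (the Pareto front).

    A point is dominated if some other point scores >= on every
    task and strictly > on at least one.
    """
    n = len(points)
    optimal = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(points, i, axis=0)
        dominated = np.any(
            np.all(others >= points[i], axis=1)
            & np.any(others > points[i], axis=1)
        )
        optimal[i] = not dominated
    return optimal

mask = pareto_front(perf)
front = perf[mask]  # phenotypes no other phenotype beats on all tasks
```

Any pruning of dominated phenotypes leaves only `front`, whose extreme points are the single-task specialists described in the text.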

This approach does not require any assumptions about underlying regulatory dynamics or interactions among units. The Pareto front simply describes the region of optimal phenotypes, and its vertices are phenotypes each optimal at some task. Etiology and underlying regulatory dynamics can shape the Pareto front, but do not contradict that optimal phenotypes must lie on it (18). The elegance and power of Pareto optimality are that no specific selection mechanism or regulatory dynamics are required to arrive at its conclusions.

How an acid found in grapes could help recycle battery metals

Cobalt and nickel are vital components for batteries, superalloys and catalysts, used in technologies ranging from smartphones to jet engines. But when it comes to recycling, they are notoriously difficult to separate because they are chemically nearly identical. To solve this, a team led by scientists at Johns Hopkins University in the United States has developed a cleaner and cheaper way to extract these elements. And it is thanks in part to grapes.

The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness

The core issue: computation isn’t an intrinsic physical process; it’s an extrinsic, descriptive map. It logically requires an active, experiencing cognitive agent, a “mapmaker”, to alphabetize continuous physics into meaningful, discrete symbols.


Computational functionalism dominates current debates on AI consciousness. This is the hypothesis that subjective experience emerges entirely from abstract causal topology, regardless of the underlying physical substrate. We argue this view fundamentally mischaracterizes how physics relates to information. We call this mistake the Abstraction Fallacy. Tracing the causal origins of abstraction reveals that symbolic computation is not an intrinsic physical process. Instead, it is a mapmaker-dependent description. It requires an active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states. Consequently, we do not need a complete, finalized theory of consciousness to assess AI sentience—a demand that simply pushes the question beyond near-term resolution and deepens the AI welfare trap. What we actually need is a rigorous ontology of computation. The framework proposed here explicitly separates simulation (behavioral mimicry driven by vehicle causality) from instantiation (intrinsic physical constitution driven by content causality). Establishing this ontological boundary shows why algorithmic symbol manipulation is structurally incapable of instantiating experience. Crucially, this argument does not rely on biological exclusivity. If an artificial system were ever conscious, it would be because of its specific physical constitution, never its syntactic architecture. Ultimately, this framework offers a physically grounded refutation of computational functionalism to resolve the current uncertainty surrounding AI consciousness.


Behavioral scientists found that people without children develop a relationship to mortality that is psychologically distinct.

I’ve been thinking about death differently lately. Not in a morbid way, not in a crisis way. More like the way you start noticing a sound you’d been filtering out for years. A few months ago, I was having dinner near Tanjong Pagar with a woman I’ve known for about eight years, a 56-year-old consultant who runs a small but well-regarded advisory firm. She has no children. Never wanted them, she told me once, years ago, with the kind of calm clarity that made the topic feel settled. But that night, she said something that hasn’t settled at all. She said, “The hardest part of not having kids isn’t the loneliness people assume. It’s figuring out what your life means when there’s no one who carries it forward.”

She said it the way you’d describe a delayed train. Factual. Slightly inconvenient. Already accommodated.

That sentence has stayed with me. Because over nearly two decades of building companies across multiple countries, I’ve watched the question of legacy come up again and again in people’s lives, usually somewhere around their late forties or early fifties, and I’ve noticed something: the people who face it most directly, most honestly, are often the ones without children.


Without biological continuation, people who never have children are forced to build their own relationship with mortality from scratch, and the psychological architecture this requires turns out to be both more fragile and more deliberate than most of us assume.

Liquid-metal pupil helps an artificial eye adapt to sudden light changes

Computer vision technologies are artificial intelligence (AI)-powered systems that can capture, analyze, and interpret visual data from real-world environments. While these systems are now widely used, many of them perform poorly under some lighting conditions and when the light in captured scenes changes abruptly.

Researchers at the University of North Carolina at Chapel Hill, Westlake University and other institutes have developed a new artificial eye that draws inspiration from the eyes of humans, cats and other animals. This artificial eye, introduced in a paper published in Science Robotics, could be used to advance the sensing capabilities of robots, advanced security systems and autonomous vehicles.

“Our project grew from a simple problem: traditional machine vision systems (like the cameras deployed in self-driving cars or robots) struggle with extreme light changes, such as changes from pitch black to bright sunlight,” Dr. Kun Liang, first author of the paper, told Tech Xplore.

Will self-driving ‘robot labs’ replace biologists? Paper sparks debate

I’d certainly like to see more experiments automated, yet I wonder if widespread automation would result in fewer resources being directed to novel experimental designs (or new tools) that fall outside of automated workflows. Hopefully a balance can be attained!


AI-driven autonomous robots are coming to biology laboratories, but researchers insist that human skills remain essential.

Alibaba’s Qwen 3.5

Qwen 3.5 running on iPhone Pro in airplane mode. Full large language model running on an edge device with no network connectivity.


Qwen 3.5 is now running fully on device on an iPhone 17 Pro, and that’s a big deal.

Despite its compact size, Qwen 3.5 reportedly outperforms models up to four times larger. It shows strong multimodal capability, meaning it can interpret and reason over images as well as text. It also includes a reasoning toggle, letting users switch between faster responses and deeper step-by-step thinking depending on the task.

The demo uses a 2B-parameter model quantized to 6-bit precision, optimized with MLX for Apple Silicon. That combination allows advanced AI to run locally, without relying on cloud servers.
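To give a feel for what "6-bit quantization" buys, here is a simplified sketch of uniform symmetric quantization (an assumption for illustration; MLX's actual scheme uses group-wise affine quantization with per-group scales):

```python
import numpy as np

def quantize(w, bits=6):
    """Uniform symmetric quantization of a weight tensor to `bits` bits.

    Simplified illustration: one scale for the whole tensor, signed
    integer levels in [-levels, +levels].
    """
    levels = 2 ** (bits - 1) - 1              # 31 levels for 6-bit signed
    scale = np.abs(w).max() / levels          # map largest weight to +/-31
    q = np.clip(np.round(w / scale), -levels, levels).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=1024).astype(np.float32)
q, s = quantize(w, bits=6)
w_hat = dequantize(q, s)
# 6-bit storage cuts weight memory roughly 5x vs float32, at the cost
# of a bounded rounding error of at most scale/2 per weight.
```

Shrinking each weight from 32 bits to 6 is what lets a 2B-parameter model fit in a phone's memory budget.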

If this scales, it signals a shift toward powerful, private, on device AI that doesn’t need a data center to compete.

How an overlooked electrostatic force could drive the motor of the future

When we hear about moving objects with electricity, most of us imagine a “pulling force.” Positive and negative charges attract each other, drawing objects together. It is natural to think that this attractive force—known as electrostatic force—is what makes things move.

However, this force is not very strong, and it has not been suitable for driving large machines in our daily lives. For that reason, most practical motors rely on a different mechanism. For example, the motors in electric fans and electric vehicles do not use electricity directly to create motion. Instead, they use electricity to generate a magnetic field, and then use that magnetic force to rotate.
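A back-of-envelope Coulomb's-law calculation gives a feel for the magnitudes (the charge and distance values below are illustrative assumptions, not from the article):

```python
# Coulomb's law: F = k * q1 * q2 / r**2
k = 8.988e9        # Coulomb constant, N*m^2/C^2
q1 = q2 = 1e-6     # two 1-microcoulomb charges (illustrative)
r = 0.01           # separated by 1 cm
F = k * q1 * q2 / r**2   # ~90 N
```

The catch is that holding a full microcoulomb on a small electrode at this distance implies potentials on the order of hundreds of kilovolts, so practical devices operate with far smaller charges, and hence far smaller forces, which is why electrostatic drives have historically lost out to magnetic motors.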
