
You’re running late at the airport and urgently need to access your account, only to be greeted by one of those frustrating tests: “Select all images with traffic lights” or “Type the letters you see in this box.” You squint, you guess, but somehow you’re wrong. You complete another test, but the site still isn’t satisfied.

“Your flight is boarding now,” the tannoy announces as the website gives you yet another puzzle. You swear at the screen, close your laptop and rush towards the gate.

Now, here’s a thought to cheer you up: Bots are now solving these puzzles in milliseconds using artificial intelligence (AI). How ironic. The tools designed to prove we’re human are now obstructing us more than the machines they’re supposed to be keeping at bay.

This dynamic regulation capability of the device conductance creates favorable conditions for emulating synaptic plasticity. As shown in Fig. 2h, the consistent trend of device conductance evolution over 20 cycles indicates the good applicability of the ion migration working mode in emulating synaptic behavior. Furthermore, with a more elaborate design of the excitation pulses for the ion migration working mode, the device exhibits exceptional stability over more than 120 cycles of LTP-LTD conductance changes (Supplementary Fig. 6b). Unlike the stable macroscopic polarization, when the external electric field is smaller than the built-in electric field resulting from the ion migration, migrated Cu+ ions tend to spontaneously return to their original lattice sites, causing the conductance to relax to its initial state [22]. As shown in Fig. 2i, a series of pulses was initially applied to induce Cu+ ion migration and set the device to various low-resistance states. Subsequently, a series of pulses with a period of 1.5 s, in which 0 V is applied for 1.4 s and −0.6 V for 0.1 s, was used to track the evolution of the device conductance. As anticipated, the influence of the migrated ions on the barrier decayed noticeably when no bias voltage was applied and eventually disappeared after 45 s. Notably, the effect of interface defects on the volatile state can be considered negligible (Supplementary Fig. 11).
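For illustration, the read protocol described above can be sketched as a short simulation. Only the 1.5 s pulse period (1.4 s at 0 V, 0.1 s at −0.6 V) and the roughly 45 s decay window come from the text; the exponential relaxation model, the time constant, and the conductance values below are assumptions made for the sketch, not measured data.

```python
import numpy as np

# Hypothetical illustration of the volatile (ion-migration) read protocol described
# above: the device is read with a -0.6 V pulse for 0.1 s in every 1.5 s period while
# its conductance relaxes back toward the initial state. The exponential relaxation
# model and all parameter values are assumptions, not the paper's data.

G_INITIAL = 1.0      # baseline conductance (arbitrary units)
G_PROGRAMMED = 3.0   # conductance right after the ion-migration write pulses
TAU = 15.0           # assumed time constant (s); decay is largely gone by ~45 s
PERIOD = 1.5         # read-pulse period (s): 1.4 s at 0 V, then 0.1 s at -0.6 V

def conductance(t):
    """Assumed exponential relaxation of the migrated-ion contribution."""
    return G_INITIAL + (G_PROGRAMMED - G_INITIAL) * np.exp(-t / TAU)

# Sample the conductance at each read pulse (end of every period) up to ~45 s.
read_times = np.arange(PERIOD, 46.0, PERIOD)
for t in read_times[:5]:
    print(f"t = {t:4.1f} s  read at -0.6 V  ->  G = {conductance(t):.3f}")
```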

Thus far, selective control of the device’s working mode has been achieved through precise pulse engineering. Short, high-amplitude pulses predominantly influence ferroelectric polarization, while longer, lower-amplitude pulses primarily drive ion migration. Beyond differences in on/off ratio and retention capabilities, the opposite shifts in the transfer curves provide further experimental evidence distinguishing the two mechanisms (Supplementary Fig. 12). In addition, the tunability of the dual memristive mechanisms has also been verified in devices with varying CIPS thicknesses, as shown in Supplementary Fig. 13. These results indicate that our device can operate adaptably in either the ferroelectric polarization mode or the ion migration mode, and that both memristive mechanisms, despite their different retention characteristics, are well suited to emulating LTP-LTD behavior. Overall, our device shows considerable promise for integrating neural reuse with memristor-based neural networks.

A refreshable memristor should possess two key properties: refresh and restoration. To confirm the refresh capability of the device, Fig. 3a illustrates the emulation of synaptic potentiation and depression behaviors achieved by cyclically configuring the device to the ion migration mode after every two induced ferroelectric polarizations. Across at least four redeployment cycles, the device exhibits consistent conductance changes in each working mode and well-defined independence between the modes. This suggests that the extent of ferro-ionic motion can be precisely regulated solely by adjusting the amplitude and period of the excitation pulse, enabling both volatile and non-volatile working modes within a single device. Crucially, the experimental results demonstrate that redeployment does not compromise the device’s performance in either working mode.
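A toy sketch may make the redeployment cycle concrete. The pulse amplitudes, widths, and conductance steps below are invented placeholders, not the device’s actual programming conditions; the point is only that switching the pulse recipe switches the working mode, while each mode’s LTP-LTD trace remains self-consistent.

```python
# Toy sketch of the "refreshable" redeployment described above. The pulse recipes
# and the conductance update rule are illustrative assumptions, not the device's
# real programming conditions.

PULSE_RECIPES = {
    # mode: assumed amplitude (V), width (s), and conductance step per pulse
    "ferroelectric": {"amplitude_v": 4.0, "width_s": 50e-6, "step": 0.05},
    "ion_migration": {"amplitude_v": 0.8, "width_s": 0.10,  "step": 0.20},
}

def ltp_ltd_cycle(g, mode, n_pulses=20):
    """Apply n potentiation pulses then n depression pulses in the given mode."""
    recipe = PULSE_RECIPES[mode]
    for _ in range(n_pulses):        # potentiation (LTP)
        g += recipe["step"]
    for _ in range(n_pulses):        # depression (LTD)
        g -= recipe["step"]
    return g, recipe

g = 1.0
for cycle in range(1, 5):            # four redeployment cycles
    # two ferroelectric configurations, then one ion-migration configuration
    for mode in ("ferroelectric", "ferroelectric", "ion_migration"):
        g, recipe = ltp_ltd_cycle(g, mode)
        print(f"cycle {cycle}: {mode:>14} pulses "
              f"({recipe['amplitude_v']} V, {recipe['width_s']:g} s) -> G = {g:.2f}")
```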

AI-powered data analysis tools have the potential to significantly improve the quality of scientific publications. A new study by Mathias Christmann, a chemistry professor at Freie Universität Berlin, has uncovered shortcomings in chemical publications.

Using a Python script developed with the help of modern AI language models, Christmann analyzed more than 3,000 papers published in Organic Letters over the past two years. The analysis revealed that only 40% of the chemical research papers contained error-free mass measurements. The AI-based data analysis tool used for this purpose could be created without any prior programming knowledge.
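As a rough illustration of the kind of consistency check such a script might run, the sketch below recomputes the monoisotopic mass of a reported molecular formula and compares it with the published HRMS value. The formula parser, the element table, and the 3 mDa tolerance are assumptions for the example; Christmann’s actual script is not reproduced here.

```python
import re

# Hypothetical sketch of an HRMS consistency check: recompute the monoisotopic
# mass of a reported molecular formula and compare it with the published value.
# The parser, tolerance, and example numbers are illustrative assumptions.

MONOISOTOPIC = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}
PROTON = 1.007276  # mass added for an [M+H]+ ion

def monoisotopic_mass(formula):
    """Sum exact isotope masses for a simple formula such as 'C9H11NO2'."""
    mass = 0.0
    for element, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        mass += MONOISOTOPIC[element] * int(count or 1)
    return mass

def check_hrms(formula, reported_mz, tolerance=0.003):
    """Flag reported [M+H]+ values that deviate from the calculated mass."""
    calculated = monoisotopic_mass(formula) + PROTON
    return abs(calculated - reported_mz) <= tolerance, calculated

ok, calc = check_hrms("C9H11NO2", 166.0863)  # phenylalanine, [M+H]+
print(f"calculated {calc:.4f}, reported 166.0863, consistent: {ok}")
```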

“The results demonstrate how powerful AI-powered tools can be in everyday research. They not only make complex analyses accessible but also improve the reliability of scientific data,” explains Christmann.

Gould’s thesis has sparked widespread debate ever since, with some advocating for determinism and others supporting contingency. In his 1952 short story A Sound of Thunder, science fiction author Ray Bradbury recounted how a time traveler’s simple act of stepping on a butterfly in the age of the dinosaurs changed the course of the future. Gould made a similar point: “Alter any early event, ever so slightly and without apparent importance at the time, and evolution cascades into a radically different channel.”

Scientists have been exploring this problem through experiments designed to recreate evolution in the lab or in nature, or by comparing species that have emerged under similar conditions. Today, a new avenue has opened up: AI. In New York, a group of former researchers from Meta — the parent company of social networks Facebook, Instagram, and WhatsApp — founded EvolutionaryScale, an AI startup focused on biology. The EvolutionaryScale Model 3 (ESM3) system created by the company is a generative language model — the same kind of platform that powers ChatGPT. However, while ChatGPT generates text, ESM3 generates proteins, the fundamental building blocks of life.

ESM3 feeds on sequence, structure, and function data from existing proteins to learn the biological language of these molecules and create new ones. Its creators have trained it with 771 billion data packets derived from 3.15 billion sequences, 236 million structures, and 539 million functional traits. This adds up to more than one trillion teraflops (a measure of computational performance) — the most computing power ever used in biology, according to the company.

On a broader level, by pushing AI toward more human-like processing, Titans could mean AI that thinks more deeply than humans — challenging our understanding of human uniqueness and our role in an AI-augmented world.

At the heart of Titans’ design is a concerted effort to more closely emulate the functioning of the human brain. While previous models like Transformers introduced the concept of attention—allowing AI to focus on specific, relevant information—Titans takes this several steps further. The new architecture incorporates analogs to human cognitive processes, including short-term memory, long-term memory, and even the ability to “forget” less relevant information. Perhaps most intriguingly, Titans introduces a concept that’s surprisingly human: the ability to prioritize surprising or unexpected information. This mimics the human tendency to more easily remember events that violate our expectations, a feature that could lead to more nuanced and context-aware AI systems.

The key technical innovation in Titans is the introduction of a neural long-term memory module. This component learns to memorize historical context and works in tandem with the attention mechanisms that have become standard in modern AI models. The result is a system that can effectively utilize both immediate context (akin to short-term memory) and broader historical information (long-term memory) when processing data or generating responses.
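As a minimal, illustrative sketch of this idea (not Google’s actual Titans implementation), the snippet below implements a tiny “surprise-gated” memory: a linear map from keys to values that is written more strongly when its prediction error is large and gradually forgets stale content. The linear memory, the surprise measure, and the forgetting factor are simplifying assumptions made for the example.

```python
import numpy as np

# Sketch of a surprise-gated long-term memory in the spirit described above.
# Assumptions: the memory is a simple linear map, "surprise" is the prediction
# error, and forgetting is a constant decay. None of this is the Titans code.

rng = np.random.default_rng(0)
DIM = 8
memory = np.zeros((DIM, DIM))   # long-term memory: learns to map keys -> values

def update_memory(memory, key, value, lr=0.5, forget=0.02):
    """Write to memory in proportion to how 'surprising' the association is."""
    prediction = memory @ key
    error = value - prediction            # surprise: what memory failed to predict
    surprise = np.linalg.norm(error)
    gate = surprise / (1.0 + surprise)    # larger surprise -> stronger write
    memory = (1.0 - forget) * memory      # gradual forgetting of stale content
    memory += lr * gate * np.outer(error, key)
    return memory, surprise

def recall(memory, key):
    """Read the long-term memory; in Titans this complements attention."""
    return memory @ key

key = rng.normal(size=DIM)
key /= np.linalg.norm(key)                # unit-norm key keeps the toy update stable
value = rng.normal(size=DIM)
for step in range(5):
    memory, surprise = update_memory(memory, key, value)
    print(f"step {step}: surprise = {surprise:.3f}")
print(f"recall error: {np.linalg.norm(recall(memory, key) - value):.3f}")
```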