
In today’s AI news, believe it or not, AI is alive and well, and it’s clearly going to change a lot of things forever. My personal epiphany happened just the other day, while I was “vibe coding” a personal software project. “Vibe coders” are those of us who have never written a line of code in our lives but create software programs and applications using AI tools like Bolt or Lovable.

Then, Anthropic’s CEO Dario Amodei is worried that spies, likely from China, are getting their hands on costly “algorithmic secrets” from the top U.S. AI companies — and he wants the U.S. government to step in. Speaking at a Council on Foreign Relations event on Monday, Amodei said that China is known for its “large-scale industrial espionage” and that AI companies like Anthropic are almost certainly being targeted.

Meanwhile, despite all the hype, very few people have had a chance to use Manus. Currently, under 1% of the users on the wait list have received an invite code. It’s unclear how many people are on this list, but for a sense of how much interest there is, Manus’s Discord channel has more than 186,000 members. MIT Technology Review was able to obtain access to Manus, and they gave it a test-drive.

In videos, Palantir CEO Alexander Karp joins New York Times DealBook creator Andrew Ross Sorkin to discuss the promise and peril of Silicon Valley, tech’s changing relationship with Washington, what it means for our future, and his new book, The Technological Republic. Named “Best CEO of 2024” by The Economist, Karp is a vital player in Silicon Valley as the CEO of Palantir.

Then, Piers Linney, Co-founder of Implement AI, discusses how artificial intelligence and automation can be maximized across businesses on CNBC International Live. Linney says AI poses a threat to the highest income knowledge workers around the world.

Meanwhile, Nate B. Jones is back with some commentary on how OpenAI has launched a new API aimed at helping developers build AI agents, but its strategic impact remains unclear. While enterprises with strong LLM expertise are already using tools like LangChain effectively, smaller teams struggle with agent complexity. Nate says, despite being a high-quality API, it lacks a distinct differentiator beyond OpenAI’s own ecosystem.

We close out with Celestial AI CEO Dave Lazovsky, who outlines how the company’s “Photonic Fabric” technology helps scale AI as Celestial AI raises $250 million in its latest funding round, valuing the company at $2.5 billion. That’s all for today, but AI is moving fast — subscribe.

The article presents an equation of state (EoS) for fluid and solid phases using artificial neural networks. This EoS accurately models thermophysical properties and predicts phase transitions, including the critical and triple points. This approach offers a unified way to understand different states of matter.

A team from Princeton University has successfully used artificial intelligence (AI) to solve equations that control the quantum behavior of individual atoms and molecules to replicate the early stages of ice formation. The simulation shows how water molecules transition into solid ice with quantum accuracy.

Roberto Car, Princeton’s Ralph W. *31 Dornte Professor in Chemistry, who co-pioneered the approach of simulating molecular behaviors based on the underlying quantum laws more than 35 years ago, said, “In a sense, this is like a dream come true. Our hope then was that eventually, we would be able to study systems like this one. Still, it was impossible without further conceptual development, and that development came via a completely different field, that of artificial intelligence and data science.”

Modeling the early stages of freezing water, the ice nucleation process, could improve the precision of climate and weather modeling, as well as of other processes like flash-freezing food. The new approach can track the activity of hundreds of thousands of atoms over periods thousands of times longer, albeit still just fractions of a second, than earlier studies could.

A new study has been published in Nature Communications, presenting the first comprehensive atlas of allele-specific DNA methylation across 39 primary human cell types. The study was led by Ph.D. student Jonathan Rosenski under the guidance of Prof. Tommy Kaplan from the School of Computer Science and Engineering and Prof. Yuval Dor from the Faculty of Medicine at the Hebrew University of Jerusalem and Hadassah Medical Center.

Using machine learning algorithms and deep whole-genome bisulfite sequencing on freshly isolated and purified cell populations, the study unveils a detailed landscape of genetic and epigenetic regulation that could reshape our understanding of gene expression and disease.

A key focus of the research is its success in identifying differences between the two alleles and, in some cases, demonstrating that these differences result from genomic imprinting — meaning that it is not the sequence (genetics) that matters, but rather whether the allele is inherited from the mother or the father.

However, as with much of quantum physics, this “language”—the interaction between spins—is extraordinarily complex. While it can be described mathematically, solving the equations exactly is nearly impossible, even for relatively simple chains of just a few spins. Not exactly ideal conditions for turning theory into reality…

A model becomes reality

Researchers at Empa’s nanotech@surfaces laboratory have now developed a method that allows many spins to “talk” to each other in a controlled manner – and that also enables the researchers to “listen” to them, i.e. to understand their interactions. Together with scientists from the International Iberian Nanotechnology Laboratory and the Technical University of Dresden, they were able to precisely create an archetypal chain of electron spins and measure its properties in detail. Their results have now been published in the renowned journal Nature Nanotechnology.

Physicists have long attempted to find a single theory that unites quantum mechanics and general relativity.

This has been very tricky because quantum mechanics focuses on the unpredictable nature of particles at microscopic scales, whereas general relativity explains gravity as the curvature of spacetime caused by massive objects.

The two theories describe forces operating at very different scales. Bianconi takes an interesting approach to this challenge: she proposes an entropic action in which spacetime, instead of being a fixed background, works like a quantum operator — acting on quantum states and determining how they change over time.

NVIDIA may have just revolutionized computing forever with the launch of DIGITS, the world’s first personal AI supercomputer. By harnessing the power of GPU-accelerated deep learning—the same technology that drives top-tier high-performance computing (HPC) clusters—DIGITS shrinks massive supercomputing capabilities into a desktop-friendly system.

This compact yet powerful platform enables data scientists, researchers, and developers to rapidly train, test, and refine complex neural networks using NVIDIA’s state-of-the-art GPUs and software ecosystem. Built for deep learning, machine learning, and big data analytics, DIGITS seamlessly integrates tensor cores, parallel processing, and accelerated computing into a single, plug-and-play solution.

Researchers from the Department of Physics have managed to experimentally develop a new magnetic state: a magneto-ionic vortex or “vortion.” The research, published in Nature Communications, allows for an unprecedented level of control of magnetic properties at the nanoscale and at room temperature, and opens new horizons for the development of advanced magnetic devices.

The use of Big Data has multiplied the energy demand in information technologies. Generally, to store information, systems use electric currents to write data, which dissipates power by heating the devices. Controlling magnetic memories with voltage, instead of electric currents, can minimize this energy expenditure.

One way to achieve this is by using magneto-ionic materials, which allow for the manipulation of their magnetic properties by adding or removing ions through changes in the polarity of the applied voltage. So far, most studies in this area have focused on continuous films, rather than on controlling properties at the nanometric scale in discrete “bits,” essential for high-density data storage.

We introduce PokéChamp, a minimax agent powered by Large Language Models (LLMs) for Pokémon battles. Built on a general framework for two-player competitive games, PokéChamp leverages the generalist capabilities of LLMs to enhance minimax tree search. Specifically, LLMs replace three key modules: player action sampling, opponent modeling, and value function estimation, enabling the agent to effectively utilize gameplay history and human knowledge to reduce the search space and address partial observability. Notably, our framework requires no additional LLM training. We evaluate PokéChamp in the popular Gen 9 OU format. When powered by GPT-4o, it achieves a win rate of 76% against the best existing LLM-based bot and 84% against the strongest rule-based bot, demonstrating its superior performance. Even with an open-source 8-billion-parameter Llama 3.1 model, PokéChamp consistently outperforms the previous best LLM-based bot, Pokéllmon powered by GPT-4o, with a 64% win rate. PokéChamp attains a projected Elo of 1300–1500 on the Pokémon Showdown online ladder, placing it among the top 10%–30% of human players. In addition, this work compiles the largest real-player Pokémon battle dataset, featuring over 3 million games, including more than 500k high-Elo matches. Based on this dataset, we establish a series of battle benchmarks and puzzles to evaluate specific battling skills. We further provide key updates to the local game engine. We hope this work fosters further research that leverages Pokémon battles as a benchmark for integrating LLM technologies with game-theoretic algorithms addressing general multi-agent problems. Videos, code, and dataset available at this https URL.
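The architecture described above, a standard depth-limited minimax search whose action sampling, opponent modeling, and leaf evaluation are supplied by LLM-backed modules, can be sketched with the three modules injected as plain callables. Everything below (the integer "game", the lambdas, the function names) is a hypothetical toy illustration of the idea, not the paper's code; in the real system each callable would be an LLM prompt over battle state and gameplay history.

```python
from typing import Callable, List

def minimax(state: int, depth: int, maximizing: bool,
            sample_actions: Callable[[int, bool], List[int]],
            value_fn: Callable[[int], float],
            apply_action: Callable[[int, int], int]) -> float:
    """Depth-limited minimax with pluggable modules.

    sample_actions prunes the branching factor (stand-in for LLM action
    sampling / opponent modeling); value_fn scores leaves (stand-in for
    LLM value estimation).
    """
    if depth == 0:
        return value_fn(state)
    actions = sample_actions(state, maximizing)  # pruned action set
    results = [minimax(apply_action(state, a), depth - 1, not maximizing,
                       sample_actions, value_fn, apply_action)
               for a in actions]
    return max(results) if maximizing else min(results)

# Toy instantiation: the "game" is an integer; the maximizer adds to it,
# the modeled opponent subtracts from it.
best = minimax(
    0, 2, True,
    sample_actions=lambda s, mx: [1, 2] if mx else [-1, -2],
    value_fn=lambda s: float(s),   # stand-in for LLM value estimation
    apply_action=lambda s, a: s + a,
)
```

The point of the design is that the search skeleton never changes: swapping a heuristic for an LLM (or an 8B model for GPT-4o) only replaces the injected callables.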

Nowadays, if you have a microscope, you probably have a camera of some sort attached. [Applied Science] shows how you can add an array of tiny LEDs and some compute power to produce high-resolution images — higher than you can get with the microscope on its own. The idea is to illuminate each LED in the array individually and take a picture. Then, an algorithm constructs a higher-resolution image from the collected images. You can see the results and an explanation in the video below.
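The reconstruction technique in the video is known as Fourier ptychography: each LED angle shifts the sample's spectrum relative to the objective's low-pass "pupil", so every capture samples a different sub-aperture of Fourier space, and an iterative algorithm stitches those sub-apertures into one wider (higher-resolution) spectrum. Here is a minimal toy simulation of that loop; the grid size, pupil radius, and LED shifts are made-up illustrative values, not parameters from [Applied Science]'s project.

```python
import numpy as np

N = 64          # high-res grid size (illustrative)
pupil_r = 10    # pupil radius in Fourier pixels, i.e. the low-pass cutoff

rng = np.random.default_rng(0)
obj = np.exp(1j * 0.5 * rng.random((N, N)))   # toy complex-valued sample

coords = np.arange(N) - N // 2
fy, fx = np.meshgrid(coords, coords, indexing="ij")

def pupil_mask(ky: int, kx: int) -> np.ndarray:
    """Circular pupil centered at the spectral shift (ky, kx)."""
    return (fy - ky) ** 2 + (fx - kx) ** 2 <= pupil_r ** 2

# A 3x3 LED grid: each LED tilts the illumination, shifting the spectrum.
shifts = [(ky, kx) for ky in (-8, 0, 8) for kx in (-8, 0, 8)]

# Forward model: each capture is the amplitude of one low-passed sub-aperture.
F_obj = np.fft.fftshift(np.fft.fft2(obj))
captures = [np.abs(np.fft.ifft2(np.fft.ifftshift(F_obj * pupil_mask(ky, kx))))
            for ky, kx in shifts]

# Recovery: repeatedly enforce each measured amplitude while keeping the
# current phase estimate, writing the result back into its sub-aperture.
F_est = np.zeros((N, N), dtype=complex)
for _ in range(20):
    for (ky, kx), amp in zip(shifts, captures):
        m = pupil_mask(ky, kx)
        low = np.fft.ifft2(np.fft.ifftshift(F_est * m))
        low = amp * np.exp(1j * np.angle(low))    # replace amplitude only
        F_new = np.fft.fftshift(np.fft.fft2(low))
        F_est = np.where(m, F_new, F_est)         # update this sub-aperture

recon = np.fft.ifft2(np.fft.ifftshift(F_est))
```

The resolution gain comes from the union of the shifted pupils covering a larger region of Fourier space than any single capture, which is why the LEDs must approximate point sources at well-known angles.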

You’d think you could use this to enhance a cheap microscope, but the truth is you need a high-quality microscope to start with. In addition, color cameras may not be usable, so you may have to find or create a monochrome camera.

The code for the project is on GitHub. The LEDs need to be close to a point source, so smaller is better, and that determines what kind of LEDs are usable. Of course, the LEDs go through the sample, so this is suitable for transmissive microscopes, not metallurgical ones, at least in the current incarnation.