
Anthropic, an artificial intelligence company founded by exiles from OpenAI, has introduced the first AI model that can produce either conventional output or engage in a controllable amount of reasoning to solve more grueling problems.

Anthropic says the new hybrid model, called Claude 3.7, will make it easier for users and developers to tackle problems that require a mix of instinctive output and step-by-step cogitation. The user “has a lot of control over the behavior—how long it thinks, and can trade reasoning and intelligence with time and budget,” says Michael Gerstenhaber, product lead for AI platform at Anthropic.

Claude 3.7 also features a new scratchpad that reveals the model’s reasoning process. A similar feature proved popular with the Chinese AI model DeepSeek. It can help a user understand how a model is working through a problem in order to modify or refine prompts.

Dianne Penn, product lead of research at Anthropic, says the scratchpad is even more helpful when combined with the ability to ratchet a model’s reasoning up and down. If, for example, the model struggles to break down a problem correctly, a user can ask it to spend more time working on it.

Frontier AI companies are increasingly focused on getting the models to reason over problems as a way to increase their capabilities and broaden their usefulness. OpenAI, the company that kicked off the current AI boom with ChatGPT, was the first to offer a reasoning AI model, called o1, in September 2024.

OpenAI has since introduced a more powerful version called o3, while rival Google has released a similar offering for its model Gemini, called Flash Thinking. In both cases, users have to switch between models to access the reasoning abilities—a key difference compared to Claude 3.7.

A robotic arm that moves with nothing but the power of thought—a concept that once seemed like pure science fiction is now at the heart of Neuralink’s latest breakthrough. The brain-chip company, founded by Elon Musk, has unveiled an ambitious project that aims to connect its neural implant, the N1, to an experimental robotic limb, potentially transforming the lives of people with paralysis.

Identifying and delineating cell structures in microscopy images is crucial for understanding the complex processes of life. This task is called “segmentation” and it enables a range of applications, such as analyzing the reaction of cells to drug treatments, or comparing cell structures in different genotypes.

It was already possible to carry out automatic segmentation of those biological structures, but the dedicated methods only worked in specific conditions and adapting them to new conditions was costly. An international research team led by Göttingen University has now developed a method for retraining the existing AI-based software Segment Anything on over 17,000 microscopy images with over 2 million structures annotated by hand.

The new model is called Segment Anything for Microscopy, and it can precisely segment images of tissues, cells and similar structures in a wide range of settings. To make it available to researchers and medical doctors, the team has also created μSAM, user-friendly software to “segment anything” in microscopy images. The work is published in Nature Methods.

I wondered when this would start!

“This means that should everything go according to plan, the humanoid robot will eventually be put to work building itself.” 🤖 🤖


Apptronik, an Austin-based maker of humanoid robots, on Tuesday announced a new pilot partnership with American supply chain and manufacturing stalwart Jabil. The deal arrives two weeks after Apptronik announced a $350 million Series A financing round aimed at scaling up production of its Apollo robot.

The Jabil deal is the second major pilot announced by Apptronik. It follows a March 2024 partnership that put Apollo to work on the Mercedes-Benz manufacturing floor. While the company tells TechCrunch that its partnership with the automaker is ongoing, it has yet to graduate beyond the pilot stage.

In addition to test running the humanoid robot on its factory floor, this new deal also finds Florida-based Jabil and Apptronik becoming manufacturing partners. Once Apollo is determined to be commercially viable, Jabil will begin producing the robot in its own factories. This means that should everything go according to plan, the humanoid robot will eventually be put to work building itself.

The prospect of consciousness in artificial systems is closely tied to the viability of functionalism about consciousness. Even if consciousness arises from the abstract functional relationships between the parts of a system, it does not follow that any digital system that implements the right functional organization would be conscious. Functionalism requires constraints on what it takes to properly implement an organization. Existing proposals for constraints on implementation relate to the integrity of the parts and states of the realizers of roles in a functional organization. This paper presents and motivates three novel integrity constraints on proper implementation not satisfied by current neural network models.

Join cognitive scientist and AI researcher Joscha Bach for an in-depth interview on the nature of consciousness, in which he argues that the brain is hardware, consciousness its software and that, in order to understand our reality, we must unlock the algorithms of consciousness.

Designing high performance, scalable, and energy efficient spiking neural networks remains a challenge. Here, the authors utilize mixed-dimensional dual-gated Gaussian heterojunction transistors from single-walled carbon nanotubes and monolayer MoS2 to realize simplified spiking neuron circuits.
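For readers unfamiliar with what a “spiking neuron circuit” computes, the standard software abstraction of that behavior is the leaky integrate-and-fire (LIF) model: the neuron integrates input, leaks toward rest, and emits a spike when it crosses a threshold. The sketch below is a generic LIF simulation with illustrative parameters; it is not based on the paper’s Gaussian heterojunction devices.

```python
# A leaky integrate-and-fire (LIF) neuron in plain Python: the common
# software abstraction of the spiking behavior such circuits realize.
# All parameters are illustrative, not taken from the paper's devices.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return spike times for an LIF neuron driven by a list of input
    currents, one per timestep of length dt."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates input.
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:       # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset         # reset after firing
    return spikes

spikes = simulate_lif([0.08] * 100)   # constant drive for 100 steps
```

With this drive the neuron fires periodically; halve the input and the membrane potential settles below threshold, so no spikes occur. Energy-efficient hardware like that in the paper implements this event-driven dynamic physically instead of simulating it step by step.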

So, to put it in a very straightforward way – the term “AI agents” refers to a specific application of agentic AI, and “agentic” refers to the AI models, algorithms and methods that make them work.
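The distinction can be made concrete with a toy perceive-decide-act loop: the “agentic” part is the decision policy, while the “AI agent” is the whole loop plus its tools. Everything below (the tool, the `decide` function, the loop shape) is a hypothetical illustration, not a real framework.

```python
# Minimal sketch of an agent loop. The "agentic" ingredient is the
# decision policy; the "agent" is the loop wired to tools.
# All names here are hypothetical illustrations.

def run_agent(goal, tools, decide, max_steps=10):
    """Repeatedly pick and call a tool until decide() says we're done."""
    history = []
    for _ in range(max_steps):
        action, arg = decide(goal, history)   # the agentic policy
        if action == "finish":
            return arg
        result = tools[action](arg)           # act on the environment
        history.append((action, arg, result))
    return None

# Toy example: a "tool" that doubles a number until it exceeds the goal.
tools = {"double": lambda x: x * 2}

def decide(goal, history):
    current = history[-1][2] if history else 1
    return ("finish", current) if current >= goal else ("double", current)

print(run_agent(100, tools, decide))  # doubles 1 -> 2 -> 4 ... past 100
```

In a real system the policy would be an LLM choosing among tools, but the structure is the same: the methods inside `decide` are what “agentic” refers to, and the running loop is the agent.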

Why Is This Important?

AI agents and agentic AI are two closely related concepts that everyone needs to understand if they’re planning on using technology to make a difference in the coming years.

An AI-powered tool called MELD Graph is revolutionizing epilepsy care by detecting subtle brain abnormalities that radiologists often miss.

By analyzing global MRI data, the tool improves diagnosis speed, increases access to surgical treatment, and cuts healthcare costs. Though not yet in clinical use, it is already helping doctors identify operable lesions, offering hope to epilepsy patients worldwide.

AI Breakthrough in Epilepsy Detection

A team of Carnegie Mellon University researchers set out to see how accurately large language models (LLMs) can match the style of text written by humans. Their findings were recently published in the Proceedings of the National Academy of Sciences.

“We humans, we adapt how we write and how we speak to the situation. Sometimes we’re formal or informal, or there are different styles for different contexts,” said Alex Reinhart, lead author and associate teaching professor in the Department of Statistics & Data Science.

“What we learned is that LLMs, like ChatGPT and Llama, write a certain way, and they don’t necessarily adapt to the context. Their style is actually very distinctive from how humans normally write or speak in different contexts. Nobody has measured or quantified this in the way we were able to do.”