Identifying and delineating cell structures in microscopy images is crucial for understanding the complex processes of life. This task is called “segmentation” and it enables a range of applications, such as analyzing the reaction of cells to drug treatments, or comparing cell structures in different genotypes.

It was already possible to carry out automatic segmentation of these biological structures, but the dedicated methods only worked under specific conditions, and adapting them to new conditions was costly. An international research team led by Göttingen University has now retrained the existing AI-based software Segment Anything on over 17,000 microscopy images containing more than 2 million structures annotated by hand.

The new model, called Segment Anything for Microscopy, can precisely segment images of tissues, cells, and similar structures across a wide range of settings. To make it available to researchers and medical doctors, the team has also created μSAM, user-friendly software to “segment anything” in microscopy images. The work is published in Nature Methods.

I wondered when this would start!

“This means that should everything go according to plan, the humanoid robot will eventually be put to work building itself.” 🤖 🤖


Apptronik, an Austin-based maker of humanoid robots, on Tuesday announced a new pilot partnership with American supply chain/manufacturing stalwart, Jabil. The deal arrives two weeks after Apptronik announced a $350 million Series A financing round aimed at scaling up production of its Apollo robot.

The Jabil deal is the second major pilot announced by Apptronik. It follows a March 2024 partnership that put Apollo to work on the Mercedes-Benz manufacturing floor. While the company tells TechCrunch that its partnership with the automaker is ongoing, it has yet to graduate beyond the pilot stage.

The prospect of consciousness in artificial systems is closely tied to the viability of functionalism about consciousness. Even if consciousness arises from the abstract functional relationships between the parts of a system, it does not follow that any digital system that implements the right functional organization would be conscious. Functionalism requires constraints on what it takes to properly implement an organization. Existing proposals for constraints on implementation relate to the integrity of the parts and states of the realizers of roles in a functional organization. This paper presents and motivates three novel integrity constraints on proper implementation not satisfied by current neural network models.

Join cognitive scientist and AI researcher Joscha Bach for an in-depth interview on the nature of consciousness, in which he argues that the brain is hardware, consciousness its software and that, in order to understand our reality, we must unlock the algorithms of consciousness.

Designing high performance, scalable, and energy efficient spiking neural networks remains a challenge. Here, the authors utilize mixed-dimensional dual-gated Gaussian heterojunction transistors from single-walled carbon nanotubes and monolayer MoS2 to realize simplified spiking neuron circuits.

So, to put it in a very straightforward way: the term “AI agents” refers to a specific application of agentic AI, while “agentic” refers to the AI models, algorithms, and methods that make those agents work.

Why Is This Important?

AI agents and agentic AI are two closely related concepts that everyone needs to understand if they’re planning on using technology to make a difference in the coming years.

An AI-powered tool called MELD Graph is revolutionizing epilepsy care by detecting subtle brain abnormalities that radiologists often miss.

By analyzing global MRI data, the tool improves diagnosis speed, increases access to surgical treatment, and cuts healthcare costs. Though not yet in clinical use, it is already helping doctors identify operable lesions, offering hope to epilepsy patients worldwide.

A team of Carnegie Mellon University researchers set out to see how accurately large language models (LLMs) can match the style of text written by humans. Their findings were recently published in the Proceedings of the National Academy of Sciences.

“We humans, we adapt how we write and how we speak to the situation. Sometimes we’re formal or informal, or there are different styles for different contexts,” said Alex Reinhart, lead author and associate teaching professor in the Department of Statistics & Data Science.

“What we learned is that LLMs, like ChatGPT and Llama, write a certain way, and they don’t necessarily adapt to the context. Their style is actually very distinctive from how humans normally write or speak in different contexts. Nobody has measured or quantified this in the way we were able to do.”