
“Big machine learning models have to consume lots of power to crunch data and come out with the right parameters, whereas our model and training is so extremely simple that you could have systems learning on the fly,” said Robert Kent.


How can machine learning be made more efficient in the future? This is what a recent study published in Nature Communications hopes to address, as a team of researchers from The Ohio State University investigated how digital twins (digital copies of a physical system) could be used to improve the machine learning-based controllers currently used in self-driving cars. These controllers require large amounts of computing power and are often challenging to use. The study could help researchers design future machine learning algorithms that offer better control and efficiency, improving the products that rely on them.

“The problem with most machine learning-based controllers is that they use a lot of energy or power, and they take a long time to evaluate,” said Robert Kent, who is a graduate student in the Department of Physics at The Ohio State University and lead author of the study. “Developing traditional controllers for them has also been difficult because chaotic systems are extremely sensitive to small changes.”

For the study, the researchers created a fingertip-sized digital twin that can function without an internet connection, with the goal of improving the productivity and capabilities of a machine learning-based controller. In the end, the researchers found that the controller's power needs dropped thanks to a machine learning method known as reservoir computing, which involves reading in data and mapping it to a target output. According to the researchers, this new method can be used to simplify the control of complex systems, including self-driving cars, while decreasing the amount of power and energy required to run them.
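As a rough illustration of why reservoir computing is cheap to train, the sketch below builds a minimal echo state network in Python with NumPy: the input and recurrent weights stay fixed and random, and only a linear readout is fitted. The network sizes, the tanh nonlinearity, the ridge-regression readout, and the toy sine-prediction task are all assumptions for demonstration, not details of the Ohio State study.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 200

# Fixed random input and recurrent weights: only the readout is trained,
# which is what keeps training cheap compared with full backpropagation.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave one step ahead.
t = np.linspace(0, 20 * np.pi, 2000)
u_seq, y_seq = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u_seq)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y_seq)

pred = X @ W_out
print("train MSE:", np.mean((pred - y_seq) ** 2))

Because training reduces to a single linear solve for the readout weights, the approach sidesteps the long, power-hungry optimization loops of large models, which is the efficiency advantage the researchers highlight.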

Scientists have published the most detailed data set to date on the neural connections of the brain, obtained from a cubic-millimeter sample of tissue.


A cubic millimeter of brain tissue may not sound like much. But considering that this tiny cube contains 57,000 cells, 230 millimeters of blood vessels, and 150 million synapses, all amounting to 1,400 terabytes of data, Harvard and Google researchers have just accomplished something stupendous.

Led by Jeff Lichtman, the Jeremy R. Knowles Professor of Molecular and Cellular Biology and newly appointed dean of science, the Harvard team helped create the largest 3D brain reconstruction to date, showing in vivid detail each cell and its web of connections in a piece of temporal cortex about half the size of a rice grain.

Published in Science, the study is the latest development in a nearly 10-year collaboration with scientists at Google Research, combining Lichtman’s electron microscopy imaging with AI algorithms to color-code and reconstruct the extremely complex wiring of mammal brains. The paper’s three first co-authors are former Harvard postdoc Alexander Shapson-Coe, Michał Januszewski of Google Research, and Harvard postdoc Daniel Berger.

A team led by Prof. Frank Glorius from the Institute of Organic Chemistry at the University of Münster has developed an evolutionary algorithm that identifies the structures in a molecule that are particularly relevant to a given question and uses them to encode the properties of the molecules for various machine-learning models.
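To make the idea concrete, here is a small, hedged sketch of an evolutionary (genetic) algorithm that searches for which substructure features of a molecule to keep when encoding it for a machine-learning model. The synthetic fingerprint data, the least-squares fitness function, and all GA settings are illustrative assumptions and do not reproduce the Münster group's implementation.

import numpy as np

rng = np.random.default_rng(1)

# Placeholder "molecules": binary substructure fingerprints plus a property
# that depends only on a few informative bits.
n_mols, n_bits = 300, 64
X = rng.integers(0, 2, (n_mols, n_bits)).astype(float)
true_bits = [3, 17, 42, 55]
y = X[:, true_bits].sum(axis=1) + 0.1 * rng.normal(size=n_mols)

def fitness(mask):
    """Negative validation error of a least-squares model on the kept bits."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    train, val = slice(0, 200), slice(200, None)
    coef, *_ = np.linalg.lstsq(Xs[train], y[train], rcond=None)
    err = np.mean((Xs[val] @ coef - y[val]) ** 2)
    return -err - 0.01 * mask.sum()  # small penalty for larger encodings

# Standard GA loop: selection, one-point crossover, mutation over binary masks.
pop = rng.integers(0, 2, (40, n_bits))
for gen in range(60):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]          # keep the best half
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        cut = rng.integers(1, n_bits)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(n_bits) < 0.02             # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected bits:", np.flatnonzero(best))

The payoff of such a search is a compact, task-specific encoding: only the substructure bits that actually help predict the property of interest are passed on to the downstream machine-learning model.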

Summary: Researchers created the largest 3D reconstruction of human brain tissue at synaptic resolution, capturing detailed images of a cubic millimeter of human temporal cortex. This tiny piece of brain contains 57,000 cells, 230 millimeters of blood vessels, and 150 million synapses, which amounts to 1,400 terabytes of data.

This research is part of a broader effort to map an entire mouse brain’s neural wiring, with hopes of advancing our understanding of brain function and disease. The technology combines high-resolution electron microscopy and AI-powered algorithms to meticulously color-code and map out the complex neural connections.

Inspired by the tetromino shapes in the classic video game Tetris, researchers in the US have designed a simple radiation detector that can monitor radioactive sources both safely and efficiently. Created by Mingda Li and colleagues at the Massachusetts Institute of Technology, the device employs a machine learning algorithm to process data, allowing it to build up accurate maps of sources using just four detector pixels.
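The sketch below captures the general idea in toy form: four detector pixels with direction-dependent responses are simulated, and a simple nearest-neighbour lookup over simulated count patterns recovers the direction of a source. The angular response model and the lookup step are assumptions for illustration only; the MIT device applies its own machine learning algorithm to data from the real detector geometry.

import numpy as np

rng = np.random.default_rng(2)

# Each pixel is assigned a preferred viewing direction (radians); its counts
# fall off with the angular offset between the source and that direction.
pixel_dirs = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

def expected_counts(source_angle, strength=1000.0):
    response = 1.0 + np.cos(source_angle - pixel_dirs)  # toy angular response
    return strength * response / response.sum()

def measure(source_angle):
    return rng.poisson(expected_counts(source_angle))   # counting noise

# "Training" data: simulated measurements at known source angles.
train_angles = rng.uniform(0, 2 * np.pi, 2000)
train_counts = np.array([measure(a) for a in train_angles])
train_counts = train_counts / train_counts.sum(axis=1, keepdims=True)

def estimate_angle(counts):
    """Nearest-neighbour lookup in the table of simulated count patterns."""
    pattern = counts / counts.sum()
    i = np.argmin(np.sum((train_counts - pattern) ** 2, axis=1))
    return train_angles[i]

true_angle = 1.2
est = estimate_angle(measure(true_angle))
print(f"true angle {true_angle:.2f} rad, estimated {est:.2f} rad")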


Wherever there is a risk of radioactive materials leaking into the environment, it is critical for site managers to map out radiation sources as accurately as possible.