
As demand grows for more powerful and efficient microelectronics systems, industry is turning to 3D integration—stacking chips on top of each other. This vertically layered architecture could allow high-performance processors, like those used for artificial intelligence, to be packaged closely with other highly specialized chips for communication or imaging. But technologists everywhere face a major challenge: how to prevent these stacks from overheating.

Now, MIT Lincoln Laboratory has developed a specialized chip to test and validate cooling solutions for packaged chip stacks. The chip dissipates extremely high power, mimicking high-performance logic chips, to generate heat through the silicon layer and in localized hot spots. Then, as cooling technologies are applied to the packaged stack, the chip measures temperature changes. When sandwiched in a stack, the chip will allow researchers to study how heat moves through stack layers and benchmark progress in keeping them cool.

“If you have just a single chip, you can cool it from above or below. But if you start stacking several chips on top of each other, the heat has nowhere to escape. No cooling methods exist today that allow industry to stack multiples of these really high-performance chips,” says Chenson Chen, who led the development of the chip with Ryan Keech, both of the laboratory’s Advanced Materials and Microsystems Group.
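To make the heat problem concrete, here is a rough one-dimensional series thermal-resistance estimate of why a stack runs hotter than a single die cooled from one side. This is a back-of-the-envelope sketch, not Lincoln Laboratory's test chip or data; the layer thicknesses, conductivities, and power level are assumed for illustration only.

```python
# Rough sketch (illustrative assumptions, not the Lincoln Laboratory design):
# a 1-D series thermal-resistance estimate for a die stack.
# R = thickness / (conductivity * area); temperature rise dT = power * R_total.

K_SI = 130.0    # W/(m*K), bulk silicon
K_BOND = 0.5    # W/(m*K), hypothetical die-bond/underfill layer
AREA = 1e-4     # m^2, a 10 mm x 10 mm die

def layer_r(thickness_m, conductivity):
    """Thermal resistance of one planar layer, in K/W."""
    return thickness_m / (conductivity * AREA)

# Each level of the stack adds one silicon die plus one bond layer in series.
per_level = layer_r(500e-6, K_SI) + layer_r(50e-6, K_BOND)

for n_dies in (1, 2, 4):
    r_total = n_dies * per_level
    dt = 50.0 * r_total  # assume 50 W dissipated at the die farthest from the sink
    print(f"{n_dies} stacked dies: ~{dt:.1f} K rise to the heat sink")
```

Even with these simplified numbers, the estimated temperature rise grows roughly in proportion to the number of stacked dies, which is the effect the test chip is built to measure directly.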


Ray Kurzweil, one of the world’s leading futurists, has made hundreds of predictions about technology’s future. From portable devices and wireless internet to brain-computer interfaces and nanobots in our bloodstream, Kurzweil has envisioned a future that sometimes feels like science fiction—but much of it is becoming reality.

In this video, we explore 7 of Ray Kurzweil’s boldest predictions:

00:00 — 01:44 Intro.

01:44 — 02:42 Prediction 1: Portable Devices and Wireless Internet.

02:42 — 03:34 Prediction 2: Self-Driving Cars by Early 2020s.

Background/Objectives: Accurately predicting protein–ligand binding affinity is essential in drug discovery for identifying effective compounds. While existing sequence-based machine learning models for binding affinity prediction have shown potential, they lack accuracy and robustness in pattern recognition, which limits their generalizability across diverse and novel binding complexes. To overcome these limitations, we developed GNNSeq, a novel hybrid machine learning model that integrates a Graph Neural Network (GNN) with Random Forest (RF) and XGBoost.

Methods: GNNSeq predicts ligand binding affinity by extracting molecular characteristics and sequence patterns from protein and ligand sequences. The fully optimized GNNSeq model was trained and tested on subsets of the PDBbind dataset. The novelty of GNNSeq lies in its exclusive reliance on sequence features, a hybrid GNN framework, and an optimized kernel-based context-switching design. By relying exclusively on sequence features, GNNSeq eliminates the need for pre-docked complexes or high-quality structural data, allowing for accurate binding affinity predictions even when interaction-based or structural information is unavailable. The integration of GNN, XGBoost, and RF improves GNNSeq's performance through hierarchical sequence learning, handling of complex feature interactions, and variance reduction, forming a robust ensemble that improves predictions and mitigates overfitting. GNNSeq's unique kernel-based context-switching scheme optimizes model efficiency and runtime, dynamically adjusts feature weighting between sequence and basic structural information, and improves predictive accuracy and model generalization.

Results: In benchmarking, GNNSeq performed comparably to several existing sequence-based models and achieved a Pearson correlation coefficient (PCC) of 0.784 on the PDBbind v.2020 refined set and 0.84 on the PDBbind v.2016 core set. During external validation with the DUDE-Z v.2023.06.20 dataset, GNNSeq attained an average area under the curve (AUC) of 0.74, demonstrating its ability to distinguish active ligands from decoys across diverse ligand–receptor pairs. To further evaluate its performance, we combined GNNSeq with two additional specialized models that integrate structural and protein–ligand interaction features. When tested on a curated set of well-characterized drug–target complexes, the hybrid models achieved an average PCC of 0.89, with the top-performing model reaching a PCC of 0.97. GNNSeq was designed with a strong emphasis on computational efficiency, training on 5000+ complexes in 1 h and 32 min, with real-time affinity predictions for test complexes.

Conclusions: GNNSeq provides an efficient and scalable approach for binding affinity prediction, offering improved accuracy and generalizability while enabling large-scale virtual screening and cost-effective hit identification. GNNSeq is publicly available in a server-based graphical user interface (GUI) format.
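Since the abstract describes GNNSeq as a GNN combined with Random Forest and XGBoost over sequence-derived features, the sketch below shows that ensemble idea in miniature. It is not the published GNNSeq code: the GNN sequence embeddings are replaced by random placeholder features, and the dimensions, hyperparameters, and labels are assumptions made purely for illustration.

```python
# Minimal sketch of a GNN-embedding + RF + XGBoost ensemble (NOT GNNSeq itself).
# Placeholder features stand in for the GNN-derived protein/ligand embeddings.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))                  # hypothetical 64-dim sequence embeddings
y = rng.normal(loc=6.0, scale=1.5, size=500)    # hypothetical binding affinities (e.g., pKd)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
xgb = XGBRegressor(n_estimators=200, max_depth=4, random_state=0).fit(X, y)

# Average the two tree ensembles, echoing the variance-reduction rationale in
# the abstract; a real pipeline would weight or stack them on held-out data.
pred = 0.5 * rf.predict(X) + 0.5 * xgb.predict(X)
pcc = np.corrcoef(pred, y)[0, 1]                # Pearson correlation, the metric the paper reports
```

The point of the sketch is the division of labor the abstract describes: a learned embedding captures sequence patterns, while the tree ensembles handle feature interactions and reduce variance in the final affinity estimate.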

Researchers have created a light-powered soft robot that can carry loads through the air along established tracks, similar to cable cars or aerial trams. The soft robot operates autonomously, can climb slopes at angles of up to 80 degrees, and can carry loads up to 12 times its weight.

“We’ve previously created soft robots that can move quickly through the water and across solid ground, but wanted to explore a design that can carry objects through the air across open space,” says Jie Yin, associate professor of mechanical and aerospace engineering at North Carolina State University and corresponding author of a paper on the work published in Advanced Science.

“The simplest way to do this is to follow an established track—similar to the aerial trams you see in the mountains. And we’ve now demonstrated that this is possible.”

A research team from the Skoltech AI Center proposed a new neural network architecture for generating structured curved coordinate grids, an important tool for calculations in physics, biology, and even finance. The study is published in the journal Scientific Reports.

“Building a coordinate grid is a key task for modeling. Breaking down a complex space into manageable pieces is necessary, as it allows you to accurately determine the changes in different quantities—temperature, speed, pressure, and so on,” commented the lead author of the paper, Bari Khairullin, a Ph.D. student from the Computational and Data Science and Engineering program at Skoltech.

“Without a good grid, calculations become either inaccurate or impossible. In physics, they help model the movement of liquids and gases, in biology, tissue growth and drug distribution, and in finance, they predict market fluctuations. The proposed approach opens up new possibilities in building grids using artificial intelligence.”
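For readers unfamiliar with what a structured curved coordinate grid is, the sketch below builds one with classical transfinite interpolation rather than the Skoltech neural architecture; the boundary curves and resolution are arbitrary assumptions, and the example only shows the kind of object the neural network is being trained to generate.

```python
# Sketch: a structured curvilinear grid via classical transfinite interpolation
# (not the Skoltech neural approach). Boundary curves are arbitrary examples.
import numpy as np

n = 21
u, v = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")

# Hypothetical curved boundaries of the domain; each maps a parameter in [0, 1] to (x, y).
bottom = lambda s: (s, 0.2 * np.sin(np.pi * s))
top    = lambda s: (s, 1.0 + 0.1 * np.sin(2 * np.pi * s))
left   = lambda s: (np.zeros_like(s), s)
right  = lambda s: (np.ones_like(s), s)

def transfinite(u, v):
    """Blend the four boundary curves into interior grid node coordinates."""
    bx, by = bottom(u); tx, ty = top(u)
    lx, ly = left(v);   rx, ry = right(v)
    # Corner coordinates shared by adjacent boundary curves.
    c00, c10, c01, c11 = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)
    x = ((1 - v) * bx + v * tx + (1 - u) * lx + u * rx
         - ((1 - u) * (1 - v) * c00[0] + u * (1 - v) * c10[0]
            + (1 - u) * v * c01[0] + u * v * c11[0]))
    y = ((1 - v) * by + v * ty + (1 - u) * ly + u * ry
         - ((1 - u) * (1 - v) * c00[1] + u * (1 - v) * c10[1]
            + (1 - u) * v * c01[1] + u * v * c11[1]))
    return x, y

X, Y = transfinite(u, v)  # node coordinates of a 21 x 21 structured grid over the curved domain
```

Classical constructions like this require hand-specified boundary mappings and can produce poorly shaped cells on complicated domains, which is the gap a learned grid generator aims to close.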

At a time when we run ourselves ragged to meet society’s expectations of productivity, performance and time optimization, is it right that our robot vacuum cleaners and other smart appliances should sit idle for most of the day?

Computer scientists at the University of Bath in the UK think not. In a new paper, they propose over 100 ways to tap into the latent potential of our robotic devices. The researchers say these devices could be reprogrammed to perform helpful tasks around the home beyond their primary functions, keeping them physically active during their regular downtime.

New functions could include playing with the cat, watering plants, carrying groceries from car to kitchen, delivering breakfast in bed and closing windows when it rains.