
Using machine learning, a team of researchers in Canada has created ultrahigh-strength carbon nanolattices, resulting in a material that’s as strong as carbon steel, but only as dense as Styrofoam.

The team noted last month that it was the first time this branch of AI had been used to optimize nano-architected materials. University of Toronto’s Peter Serles, one of the authors of the paper describing this work in Advanced Materials, praised the approach, saying, “It didn’t just replicate successful geometries from the training data; it learned from what changes to the shapes worked and what didn’t, enabling it to predict entirely new lattice geometries.”
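The paper's actual pipeline isn't reproduced here, but the loop Serles describes (test geometries, learn from what worked and what didn't, propose new ones) is the general shape of surrogate-model optimization. Below is a minimal, purely illustrative sketch of that kind of loop, assuming a made-up strength-to-density score and a Gaussian process surrogate from scikit-learn; none of the names or numbers come from the Advanced Materials paper.

```python
# Illustrative sketch only: a surrogate-model optimization loop in the
# spirit of "learn from what changes worked" -- NOT the authors' pipeline.
# The objective below is a made-up stand-in for a strength-to-density score.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def simulated_strength_to_density(x):
    """Hypothetical score for a lattice described by two shape parameters
    (say, strut curvature and relative thickness), both scaled to [0, 1]."""
    curvature, thickness = x
    return np.sin(3 * curvature) * (1 - (thickness - 0.4) ** 2)

# Start from a few randomly "tested" geometries.
X = rng.random((5, 2))
y = np.array([simulated_strength_to_density(x) for x in X])

gp = GaussianProcessRegressor(normalize_y=True)
for _ in range(20):
    gp.fit(X, y)
    # Propose candidate geometries and pick the most promising one
    # (predicted mean plus uncertainty, a simple acquisition rule).
    candidates = rng.random((200, 2))
    mean, std = gp.predict(candidates, return_std=True)
    best = candidates[np.argmax(mean + std)]
    X = np.vstack([X, best])
    y = np.append(y, simulated_strength_to_density(best))

print("Best geometry found:", X[np.argmax(y)], "score:", y.max())
```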

To quickly recap, nanomaterials are engineered by arranging atoms or molecules in precise patterns, much like constructing structures with extremely tiny LEGO blocks. These materials often exhibit unique properties due to their nanoscale dimensions.

Molecular dynamics (MD) simulation is a crucial technique across disciplines including biology, chemistry, and materials science [1,2,3,4]. MD simulations are typically based on interatomic potential functions that characterize the potential energy surface of the system, with atomic forces derived as the negative gradients of the potential energies. Newton's laws of motion are then applied to simulate the dynamic trajectories of the atoms. In ab initio MD simulations [5], the energies and forces are determined accurately by solving the equations of quantum mechanics, but the computational demands of ab initio MD limit its practicality in many scenarios. By learning from ab initio calculations, machine learning interatomic potentials (MLIPs) have been developed to achieve far more efficient MD simulations with ab initio-level accuracy [6,7,8].
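The basic loop the passage describes (potential energy, forces as its negative gradient, Newton's laws of motion) can be written down compactly. Here is a minimal sketch for a single particle of unit mass in a one-dimensional Lennard-Jones well, integrated with velocity Verlet; the units and numbers are illustrative and not taken from any of the cited works.

```python
# Minimal MD sketch: one particle of unit mass in a 1D Lennard-Jones
# potential, integrated with velocity Verlet. The force is the negative
# gradient of the potential, as described above. Reduced units
# (epsilon = sigma = mass = 1); all values are illustrative only.
import numpy as np

def lj_force(r):
    """Force at separation r from the potential minimum's origin,
    F = -dU/dr with U(r) = 4 * (r**-12 - r**-6)."""
    return 24 * (2 * r**-13 - r**-7)

dt = 0.005
r = 1.5          # initial position
v = 0.0          # initial velocity
f = lj_force(r)

for step in range(1000):
    # Velocity Verlet: update position, recompute force, update velocity.
    r += v * dt + 0.5 * f * dt**2
    f_new = lj_force(r)
    v += 0.5 * (f + f_new) * dt
    f = f_new

print(f"final position {r:.3f}, velocity {v:.3f}")
```

The particle simply oscillates around the potential minimum near r = 1.12; real MD codes do the same bookkeeping for thousands to millions of atoms, with the expensive part being the force evaluation that MLIPs aim to speed up.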

Despite these successes, a crucial challenge in deploying MLIPs is the distribution shift between training and test data. When MLIPs are used for MD simulations, the data seen at inference time are atomic structures continuously generated during the simulation from the predicted forces, so the training set must cover a wide range of atomic structures to guarantee accurate predictions. However, in fields such as phase transitions [9,10], catalysis [11,12], and crystal growth [13,14], the configurational space that needs to be explored is highly complex. This complexity makes it hard to sample enough training data and easy to end up with a potential that is not smooth enough to extrapolate to every relevant configuration. The result is a distribution shift between training and test data that degrades test performance, produces unrealistic atomic structures, and ultimately causes the MD simulation to collapse [15].
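One common way to catch this kind of drift (a general practice in the MLIP literature, not a claim about any particular paper cited here) is to train a small ensemble of potentials and watch how much their force predictions disagree during the run; large disagreement flags structures that have left the training distribution and should be sent back for ab initio labeling. A minimal sketch, with random numbers standing in for real model outputs:

```python
# Sketch of an ensemble-disagreement ("model deviation") check for
# spotting distribution shift during an MLIP-driven MD run.
# `force_predictions` stands in for the per-model force predictions of
# an ensemble of independently trained potentials on one structure.
import numpy as np

def max_force_deviation(force_predictions):
    """force_predictions: array of shape (n_models, n_atoms, 3).
    Returns the largest per-atom spread of the predicted force vectors
    across the ensemble."""
    mean_f = force_predictions.mean(axis=0)                # (n_atoms, 3)
    sq_dev = ((force_predictions - mean_f) ** 2).sum(-1)   # (n_models, n_atoms)
    per_atom_std = np.sqrt(sq_dev.mean(axis=0))            # (n_atoms,)
    return per_atom_std.max()

# Toy example: 4 models, 10 atoms, forces in eV/Angstrom.
rng = np.random.default_rng(1)
preds = rng.normal(0.0, 0.05, size=(4, 10, 3))  # models mostly agree
threshold = 0.2  # illustrative cutoff
if max_force_deviation(preds) > threshold:
    print("structure likely out of distribution -> label with ab initio")
else:
    print("structure within the trusted region")
```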

Imagine being able to see quantum objects with your own eyes — no microscopes needed. That’s exactly what researchers at TU Wien and ISTA have achieved with superconducting circuits, artificial atoms that are massive by quantum standards.

Unlike natural atoms, these structures can be engineered to have customizable properties, allowing scientists to control energy levels and interactions in ways never before possible. By coupling them, they’ve developed a method to store and retrieve light, laying the groundwork for revolutionary quantum technologies. These engineered systems also enable precise quantum pulses and act as a kind of quantum memory, offering an unprecedented level of control over light at the quantum level.

Gigantic Quantum Objects – Visible to the Naked Eye

A strontium optical clock produces about 50,000 times more oscillations per second than a cesium clock, the basis for the current definition of a second.
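The "about 50,000 times" figure follows directly from the two transition frequencies: the cesium microwave hyperfine transition that defines the SI second sits near 9.19 GHz, while the strontium optical clock transition sits near 429 THz.

$$
\frac{f_{\mathrm{Sr}}}{f_{\mathrm{Cs}}}
  \approx \frac{4.29 \times 10^{14}\ \mathrm{Hz}}{9.19 \times 10^{9}\ \mathrm{Hz}}
  \approx 4.7 \times 10^{4}
$$

More oscillations per second means the clock slices time more finely, which is a large part of why optical clocks outperform the cesium standard.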

Advances in atomic clocks may lead to a redefinition of the second, replacing the cesium standard (recent work on thorium nuclear transitions is still a long way from taking that role).

Also, NIST uses egg incubators(!) to control temperature & humidity.


New atomic clocks are more accurate than those used to define the second, suggesting the definition might need to change.

For most of their history, traditional computers have relied on semiconductor chips that use binary "bits" of information represented as strings of 1s and 0s. While these chips have become increasingly powerful and simultaneously smaller, there is a physical limit to the amount of information that can be stored on this hardware. Quantum computers, by comparison, use "qubits" (quantum bits) to exploit the strange properties exhibited by subatomic particles, often at extremely cold temperatures.

Two qubits can represent four states at once, and each additional qubit doubles that number, translating to an exponential increase in calculating capability. This allows a quantum computer to process information at speeds and scales that make today's supercomputers seem almost antiquated. Last December, for example, Google unveiled an experimental quantum computer system that researchers say takes just five minutes to finish a calculation that would take one of today's fastest supercomputers over 10 septillion years to complete, longer than the age of the universe as we understand it.
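A quick way to see the exponential scaling: an n-qubit state is described by 2^n complex amplitudes, so simulating it classically means tracking a vector that doubles in size with every added qubit. The sketch below is plain NumPy bookkeeping, not tied to Google's or Microsoft's hardware.

```python
# Sketch of why qubit counts scale capability exponentially: an n-qubit
# state vector has 2**n complex amplitudes. Two qubits -> 4 amplitudes.
import numpy as np

def uniform_superposition(n_qubits):
    """State vector with equal amplitude on all 2**n basis states
    (what applying a Hadamard to every qubit of |0...0> produces)."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

for n in (2, 10, 30):
    print(f"{n} qubits -> {2**n:,} amplitudes to track classically")

state = uniform_superposition(2)
print("2-qubit uniform superposition:", state)   # 4 amplitudes
print("probabilities:", np.abs(state) ** 2)      # each 0.25
```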

But Google's quantum processing unit (QPU) is based on different technology from Microsoft's Majorana 1 design, detailed in a paper published on February 19 in the journal Nature. The result of over 17 years of design and research, Majorana 1 relies on what the company calls "topological qubits," created by inducing topological superconductivity, a state of matter that had been theorized but never directly observed before.

An interesting glimpse into the adventurous world of neutrino research in Antarctica!


At McMurdo, Karle must wait for the weather to permit the final leg of the trip. “It is not uncommon to spend several days in McMurdo,” he says. (Karle’s record is 10.) When it’s time, he takes a 3.5-hour flight on a ski-equipped LC-130 aircraft to reach the South Pole. Anyone or anything else that goes to the South Pole must take a similarly tedious route.

There’s a reason scientists have endured the challenges of the climate, the commute and the cost for over half a century—since members of the US Navy completed the original Amundsen–Scott South Pole Station in 1957. Despite all the trouble it takes to get there, the South Pole is an unparalleled environment for scientific research, from climate science and glaciology to particle physics and astrophysics.

This sentiment was echoed by the Particle Physics Project Prioritization Panel in its 2023 report, a decadal plan for the future of particle physics research in the United States. Under its recommendation to “Construct a portfolio of major projects that collectively study nearly all fundamental constituents of our universe and their interactions,” the report prioritized support for five specific projects—two of which are located at the South Pole: cosmic microwave background experiment CMB-S4, the top priority, and neutrino experiment IceCube-Gen2, recommended fifth. Because of the high scientific priority of these projects, the report also urged maintenance of the South Pole site.

Scientists have now mapped the forces acting inside a proton, showing in unprecedented detail how quarks—the tiny particles within—respond when hit by high-energy photons.

The international team includes experts from the University of Adelaide who are exploring the structure of sub-atomic matter to provide further insight into the forces that underpin it.

“We have used a powerful computational technique called lattice quantum chromodynamics to map the forces acting inside a proton,” said Associate Professor Ross Young, Associate Head of Learning and Teaching, School of Physics, Chemistry and Earth Sciences, who is part of the team.