Sailing through the smooth waters of vacuum, a photon of light moves at around 300 thousand kilometers (186 thousand miles) a second. This sets a firm limit on how quickly a whisper of information can travel anywhere in the Universe.
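To put that speed limit in concrete terms, here is a minimal sketch of the one-way light delay over familiar distances. The helper name is ours, and the distances are approximate averages:

```python
C_KM_S = 299_792.458  # vacuum speed of light, km/s

def light_travel_time_s(distance_km):
    """One-way travel time, in seconds, for light crossing distance_km."""
    return distance_km / C_KM_S

# Approximate average distances:
print(light_travel_time_s(384_400))      # Earth-Moon: about 1.28 s
print(light_travel_time_s(149_600_000))  # Earth-Sun: about 499 s (~8.3 min)
```

Even a message to the Moon takes over a second each way; nothing carrying information can do better.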

While this law isn’t likely to ever be broken, there are features of light that don’t play by the same rules. Manipulating them won’t hasten our ability to travel to the stars, but it could clear the way to a whole new class of laser technology.

Physicists have played fast and loose with the speed of light pulses for a while, speeding them up and even slowing them to a virtual standstill using materials such as cold atomic gases, refractive crystals, and optical fibers.

An international research team analyzed a database of more than 1,000 supernova explosions and found that models for the expansion of the Universe best match the data when a new time-dependent variation is introduced. If confirmed with future, higher-quality data from the Subaru Telescope and other observatories, these results could point to still-unknown physics at work on the cosmic scale.

Edwin Hubble’s observations over 90 years ago showing the expansion of the Universe remain a cornerstone of modern astrophysics. But when you get into the details of calculating how fast the Universe was expanding at different times in its history, scientists have difficulty getting theoretical models to match observations.

To solve this problem, a team led by Maria Dainotti (Assistant Professor at the National Astronomical Observatory of Japan and the Graduate University for Advanced Studies, SOKENDAI in Japan and an affiliated scientist at the Space Science Institute in the U.S.A.) analyzed a catalog of 1048 supernovae which exploded at different times in the history of the Universe. The team found that the theoretical models can be made to match the observations if one of the constants used in the equations, appropriately called the Hubble constant, is allowed to vary with time.
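As a rough illustration of what "letting the Hubble constant vary with time" means for supernova observations (this is not the team's actual analysis), the sketch below computes the distance modulus of a supernova in a flat ΛCDM background, with a hypothetical power-law drift H0(z) = H0 / (1 + z)^α. The parametrization, parameter values, and function names are all illustrative assumptions:

```python
import math

C = 299_792.458  # vacuum speed of light, km/s

def hubble(z, H0=70.0, Om=0.3, alpha=0.0):
    """Hubble parameter H(z) in km/s/Mpc for flat LambdaCDM, with a
    hypothetical drift H0(z) = H0 / (1 + z)**alpha (alpha = 0 recovers
    the standard constant-H0 model)."""
    H0_eff = H0 / (1.0 + z) ** alpha
    return H0_eff * math.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def luminosity_distance(z, alpha=0.0, steps=2000):
    """d_L = (1 + z) * c * integral_0^z dz'/H(z'), in Mpc (midpoint rule)."""
    dz = z / steps
    integral = sum(dz / hubble((i + 0.5) * dz, alpha=alpha) for i in range(steps))
    return (1.0 + z) * C * integral

def distance_modulus(z, alpha=0.0):
    """mu = 5 log10(d_L / 10 pc), with d_L converted from Mpc to parsecs."""
    return 5.0 * math.log10(luminosity_distance(z, alpha=alpha) * 1e6 / 10.0)

# Even a tiny alpha > 0 makes distant supernovae appear slightly fainter
# (larger mu) than the constant-H0 prediction:
for z in (0.1, 0.5, 1.0):
    print(z, round(distance_modulus(z), 2),
          round(distance_modulus(z, alpha=0.01), 2))
```

Comparing the two columns against a supernova catalog is, in spirit, how one would test whether an evolving constant fits the data better than a fixed one.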

A University of California San Diego engineering professor has solved one of the biggest mysteries in geophysics: What causes deep-focus earthquakes?

These mysterious earthquakes originate between 400 and 700 kilometers below the surface of the Earth and have been recorded with magnitudes up to 8.3 on the Richter scale.

Xanthippi Markenscoff, a distinguished professor in the Department of Mechanical and Aerospace Engineering at the UC San Diego Jacobs School of Engineering, is the person who solved this mystery. Her paper “Volume collapse instabilities in deep earthquakes: a shear source nucleated and driven by pressure” appears in the Journal of the Mechanics and Physics of Solids.

The findings could lead to faster, more secure memory storage, in the form of antiferromagnetic bits.

When you save an image to your smartphone, those data are written onto tiny transistors that are electrically switched on or off in a pattern of “bits” to represent and encode that image. Most transistors today are made from silicon, an element that scientists have managed to switch at ever-smaller scales, enabling billions of bits, and therefore large libraries of images and other files, to be packed onto a single memory chip.

But growing demand for data, and the means to store them, is driving scientists to search beyond silicon for materials that can push memory devices to higher densities, speeds, and security.

If you are a space enthusiast, there is good news for you. In new research that could open doors to many unknown aspects of the Universe, researchers have detected a resonant “hum” produced by gravitational waves. Experts say this can be imagined as a gravitational-wave background of the Universe.

This hum of the Universe was reportedly detected by the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), and the findings of the research were published in The Astrophysical Journal Letters.

In a report, ScienceAlert said this gravitational wave background can be imagined as “something like the ringing left behind by massive events throughout our Universe’s history”.

Using neural networks, Flatiron Institute research fellow Yin Li and his colleagues simulated vast, complex universes in a fraction of the time it takes with conventional methods.

Using a bit of machine learning magic, astrophysicists can now simulate vast, complex universes in a thousandth of the time it takes with conventional methods. The new approach will help usher in a new era in high-resolution cosmological simulations, its creators report in a study published online on May 4, 2021, in Proceedings of the National Academy of Sciences.

“At the moment, constraints on computation time usually mean we cannot simulate the universe at both high resolution and large volume,” says study lead author Yin Li, an astrophysicist at the Flatiron Institute in New York City. “With our new technique, it’s possible to have both efficiently. In the future, these AI-based methods will become the norm for certain applications.”

For millennia, humans in the high latitudes have been enthralled by auroras—the northern and southern lights. Yet even after all that time, it appears the ethereal, dancing ribbons of light above Earth still hold some secrets.

In a new study, physicists led by the University of Iowa report a new feature of Earth’s atmospheric light show. Examining video taken nearly two decades ago, the researchers describe multiple instances where a section of the diffuse aurora—the faint, background-like glow accompanying the more vivid light commonly associated with auroras—goes dark, as if scrubbed by a giant blotter. Then, after a short period of time, the blacked-out section suddenly reappears.

The researchers say the behavior, which they call “diffuse auroral erasers,” has never been mentioned in the scientific literature. The findings appear in the Journal of Geophysical Research: Space Physics.

A team of researchers from Zhejiang University, Xi’an Jiaotong University and Monash University has developed a way to bind multiple strands of graphene oxide into a thick cable. In their paper published in the journal Science, the group describes their process and possible uses for it. Rodolfo Cruz-Silva and Ana Laura Elías with Shinshu University and Binghamton University have published a Perspectives piece in the same issue outlining the work by the researchers and explaining why they believe the technique could prove useful in manufacturing efforts.

In recent years, scientists have been exploring the possibility of making products using total or partial self-assembly as a way to produce them faster or at lower cost. In cases where two materials self-assemble into a third material, scientists describe this as a fusion process, borrowing terminology from physics. Conversely, when a single material spontaneously separates into two or more other materials, they refer to it as a fission process. In this new effort, the researchers have developed a technique for creating graphene-oxide-based yarn that exploits both processes.

The team’s process is straightforward. They created multiple strands of graphene oxide and then dunked them into a solvent for 10 minutes. When the strands were pulled from the solution, they banded together, forming a cord, or single strand of yarn. The researchers also developed a means of reversing the process—dunking the strand of yarn in a different solvent solution.