
Red light exposure may reduce blood clot risks, according to groundbreaking research. By lowering inflammation and platelet activity, it could prevent strokes, heart attacks, and more. Clinical trials are next.


The ability of released products of platelet activation to induce thrombosis-generating neutrophil extracellular trap formation was quantified. Subsequent thrombosis was measured using murine models of venous thrombosis and stroke.

To translate our findings to human patients, cataract surgery patients who received light-filtering lenses were evaluated over an 8-year period for rates of venous thromboembolism, using multivariable logistic regression clustered by hospital.
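For readers unfamiliar with the statistical method mentioned above, here is a minimal sketch, assuming a hypothetical patient-level dataset with illustrative column names, of a multivariable logistic regression with standard errors clustered by hospital, using statsmodels:

```python
# Minimal sketch of a multivariable logistic regression clustered by hospital.
# The data file and column names (vte, light_filtering_lens, age, sex,
# hospital_id) are hypothetical, not taken from the study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("patients.csv")  # hypothetical patient-level data

model = smf.logit("vte ~ light_filtering_lens + age + C(sex)", data=df)
result = model.fit(
    cov_type="cluster",                      # cluster-robust standard errors
    cov_kwds={"groups": df["hospital_id"]},  # cluster on hospital
)
print(result.summary())
```

Exponentiating the fitted coefficients gives odds ratios; clustering by hospital widens the standard errors to account for correlation among patients treated at the same hospital.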

Exposure to long-wavelength red light resulted in reduced platelet aggregation and activation. RNA-seq analysis demonstrated no significant transcriptomic differences between red light-exposed and white light-exposed mice.

CoreWeave, the cloud computing company that provides AI compute resources to other businesses, has formally opened its first two data centers in the U.K., which are also its first outside its domestic U.S. market.

CoreWeave opened its European headquarters in London last May, shortly after earning a $19 billion valuation off the back of a $1.1 billion fundraise. At the same time, the company announced plans to open two data centers as part of a £1 billion ($1.25 billion) investment in the U.K.

Today’s news coincides with a separate announcement from the U.K. government, which details a five-year investment plan to bolster government-owned AI computing capacity as well as geographic “AI Growth Zones” that include AI infrastructure from the private sector.

The amorphous state of matter is the most abundant form of visible matter in the universe, and includes all structurally disordered systems, such as biological cells or essential materials like glass and polymers.

An amorphous material is a solid whose molecules and atoms form disordered structures, meaning that they do not occupy regular, well-defined positions in space.

This is the opposite of what happens in crystals, whose ordered structure facilitates their characterization, as well as the identification of those “defects,” which in practice control the physical properties of crystals, such as their plastic yielding and melting, or the way an electric current propagates through them.

The past year, 2024, witnessed an array of groundbreaking technological advancements that fundamentally reshaped industries and influenced the global economy. Technology trends like the development of Industry LLMs, Sustainable Computing, and the Augmented Workforce drove innovation, fostered efficiency, and accelerated the pace of Digital Transformation across sectors such as Healthcare, Finance, and Manufacturing. These developments set the stage for even more disruptive Technology Trends in 2025.

This year is set to bring transformative changes to the business landscape, driven by emerging trends that require enterprises to adopt the right technologies, reskill their workforce, and prioritize sustainability. By embracing these Technology Trends, businesses can shape their objectives, remain competitive, and build resilience. However, success in this rapidly evolving landscape depends not just on adopting these technologies but also on strategically leveraging them to drive innovation and growth.

Being able to erase bad memories and traumatic flashbacks could help in the treatment of a host of different mental health issues, and scientists have found a promising new approach to do just this: weakening negative memories by reactivating positive ones.

In an experiment covering several days, an international team of researchers asked 37 participants to associate random words with negative images, before attempting to reprogram half of those associations and ‘interfere’ with the bad memories.

“We found that this procedure weakened the recall of aversive memories and also increased involuntary intrusions of positive memories,” write the researchers in their published paper.

Artificial intelligence is moving from data centers to “the edge” as computer makers build the technology into laptops, robots, cars and more devices closer to home.

The Consumer Electronics Show (CES) gadget extravaganza that closed Friday was rife with PCs and other devices touting AI chips, making them more capable than ever and untethering them from the cloud.

Attention-grabbing stars included “AI PCs,” personal computers boasting chips that promised a level of performance once limited to muscular data centers.

An international team of astronomers, led by researchers from the Astronomical Observatory of the University of Warsaw, has identified a new class of cosmic X-ray sources. The findings have been published in The Astrophysical Journal Letters.

Most people encounter X-rays during medical visits where they are used to create images of bones or diagnose lung conditions. These X-rays are generated using artificial sources.

However, not everyone knows that celestial objects can also emit X-ray radiation. “Some cosmic phenomena produce X-rays naturally,” explains Dr. Przemek Mróz, the lead author of the study. “For example, X-rays may be produced by a hot gas falling onto compact objects like white dwarfs, neutron stars, or black holes. X-rays can also be generated by decelerating charged particles, such as electrons.”
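A rough back-of-the-envelope estimate, not taken from the paper, illustrates why accreting compact objects shine in X-rays: an accretion luminosity of order 10^37 erg/s radiated from a neutron-star-sized surface (roughly 10 km) corresponds to an effective temperature near 10^7 K, i.e., photon energies around a kiloelectronvolt.

```latex
% Illustrative estimate with assumed values (L ~ 1e37 erg/s = 1e30 W, R ~ 10 km);
% accretion luminosity thermalized at the stellar surface emerges as soft X-rays.
\[
  T_{\mathrm{eff}} = \left( \frac{L}{4\pi R^{2}\sigma} \right)^{1/4}
  \approx \left( \frac{10^{30}\,\mathrm{W}}{4\pi\,(10^{4}\,\mathrm{m})^{2}\,\sigma} \right)^{1/4}
  \approx 1\times10^{7}\,\mathrm{K},
  \qquad
  k_{B}\,T_{\mathrm{eff}} \approx 1\,\mathrm{keV}.
\]
```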

Large language models (LLMs), the most renowned of which is ChatGPT, have become increasingly adept at processing and generating human language over the past few years. The extent to which these models emulate the neural processes that support language processing in the human brain, however, has yet to be fully elucidated.

Researchers at Columbia University and the Feinstein Institutes for Medical Research, Northwell Health, recently carried out a study investigating the similarities between LLM representations and neural responses. Their findings, published in Nature Machine Intelligence, suggest that as LLMs become more advanced, they not only perform better but also become more brain-like.

“Our original inspiration for this paper came from the recent explosion in the landscape of LLMs and neuro-AI research,” Gavin Mischler, first author of the paper, told Tech Xplore.
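
As a hedged illustration of the kind of comparison described above, and not the authors' actual pipeline, the sketch below extracts hidden-state features from a small open model (GPT-2, chosen only as an example) and fits a ridge-regression encoding model to placeholder “neural” responses; the stimulus sentences and electrode data are invented for demonstration.

```python
# Minimal sketch, not the authors' pipeline: extract hidden-state representations
# from an LLM for a few stimulus sentences, then fit a ridge-regression
# "encoding model" mapping those features to neural responses.
# The electrode responses below are random placeholders.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import Ridge

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # any causal LM works here
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

sentences = [
    "the quick brown fox jumps over the lazy dog",
    "she read the paper aloud to the class",
    "rain fell softly on the quiet street",
    "the committee approved the new budget",
]

features = []
with torch.no_grad():
    for s in sentences:
        tokens = tokenizer(s, return_tensors="pt")
        hidden = model(**tokens).hidden_states[8]               # one intermediate layer
        features.append(hidden.mean(dim=1).squeeze(0).numpy())  # average over tokens
X = np.stack(features)                                          # (n_stimuli, hidden_dim)

# Hypothetical neural data: one response value per stimulus for a single electrode.
rng = np.random.default_rng(0)
y = rng.normal(size=len(sentences))

# How well do the LLM features linearly predict the (placeholder) responses?
encoder = Ridge(alpha=1.0).fit(X, y)
print("in-sample R^2:", encoder.score(X, y))
```

With only four placeholder stimuli the in-sample fit is trivially perfect; real encoding analyses rely on many stimuli and held-out data, and typically compare layers to see which one best predicts each recording site.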