
Google’s plan for space-based computing

The sun emits more than 100 trillion times the power of humanity’s entire electricity generation. In orbit, solar panels can be up to eight times more productive than their Earth-bound counterparts, generating energy almost continuously without the need for heavy battery storage. These facts have led a team of Google researchers to ask: what if the best place to scale artificial intelligence isn’t on Earth at all, but in space?
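As a back-of-the-envelope check, the "100 trillion times" figure follows from two commonly cited numbers; the constants below are assumptions for illustration, not values taken from Google's paper:

```python
# Rough sanity check of the "100 trillion times" claim.
# Assumed figures (not from the article):
#   - solar luminosity: ~3.8e26 W
#   - global electricity generation: ~30,000 TWh per year
SOLAR_LUMINOSITY_W = 3.8e26
ANNUAL_GENERATION_TWH = 30_000
HOURS_PER_YEAR = 8_760

# Humanity's average electrical power draw, in watts (~3.4e12 W).
avg_power_w = ANNUAL_GENERATION_TWH * 1e12 / HOURS_PER_YEAR

# Ratio of the sun's total output to that average draw.
ratio = SOLAR_LUMINOSITY_W / avg_power_w
print(f"Sun outputs roughly {ratio:.1e} times humanity's electricity generation")
```

With these inputs the ratio works out to a little over 1e14, consistent with the "more than 100 trillion times" framing.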

Project Suncatcher, Google’s latest space mission, envisions constellations of solar-powered satellites equipped with processors and connected by laser-based optical links. The concept tackles one of AI’s most pressing challenges, the enormous energy demands of large-scale machine learning systems, by tapping directly into the solar system’s ultimate power source. A new research paper from Google describes the team’s progress toward addressing the technical challenges.

The proposed system would operate in a sun-synchronous low Earth orbit, where satellites remain in almost constant sunlight. This orbital choice maximizes solar energy collection while minimizing battery requirements. However, making space-based AI infrastructure viable requires solving several formidable engineering challenges.

To Meld A.I. With Supercomputers, National Labs Are Picking Up the Pace

For years, Rick Stevens, a computer scientist at Argonne National Laboratory, pushed the notion of transforming scientific computing with artificial intelligence.

But even as Mr. Stevens worked toward that goal, government labs like Argonne — created in 1946 and sponsored by the Department of Energy — often took five years or more to develop powerful supercomputers that can be used for A.I. research. Mr. Stevens watched as companies like Amazon, Microsoft and Elon Musk’s xAI made faster gains by installing large A.I. systems in a matter of months.

Ursula Eysin on Uncertainty and Future Scenarios

How do we turn uncertainty from a threat into an advantage?

Three years ago, I sat down with someone who has built her entire career around that question: Ursula Eysin, founder of Red Swan and one of the most multidimensional futurists I’ve ever met.

Ursula is a trained ballerina who speaks seven languages, reads chemistry books for fun, mentors startups, and teaches at five universities — and somehow still finds time to help leaders navigate the unknown with clarity and courage.

In this conversation, we dig into:
• Why predicting the future is a powerless position
• Scenario planning vs. futurism — and why leaders need both
• How to reframe uncertainty as a strategic asset
• What it truly means to connect as humans in an age of AI
• And why strong, diverse leadership matters more than ever.

My favourite line from Ursula remains razor-sharp:

“Turn uncertainty into an advantage. See it as a gift. And connect to other people.”

If you’re steering a team, a company, or even your own life through volatility, this one is worth your time.

How the French philosopher Jean Baudrillard predicted today’s AI 30 years before ChatGPT

One of the most important members of this enlightened club is the philosopher Jean Baudrillard – even though his reputation over the past couple of decades has diminished to an association with a now bygone era when fellow French theorists such as Roland Barthes and Jacques Derrida reigned supreme.

In writing our new biography of Baudrillard, however, we have been reminded just how prescient his predictions about modern technology and its effects have turned out to be. Especially insightful is his understanding of digital culture and AI – presented over 30 years before the launch of ChatGPT.

Back in the 1980s, cutting-edge communication technology involved devices which seem obsolete to us now: answering machines, fax machines, and (in France) Minitel, an interactive online service that predated the internet. But Baudrillard’s genius lay in foreseeing what these relatively rudimentary devices suggested about likely future uses of technology.

Argonaut lunar lander family grows

Today, the European Space Agency’s Argonaut lunar lander programme welcomes new members to its growing family. At ESA’s European Astronaut Centre (EAC) near Cologne, Germany, Thales Alenia Space Italy – the prime contractor for Argonaut’s first lander – signed agreements with Thales Alenia Space in France, OHB in Germany, and Thales Alenia Space and Nammo in the United Kingdom.

Argonaut represents Europe’s autonomous, versatile and reliable access to the Moon. Starting with the first mission in 2030, Argonaut landers will be launched on Ariane 6 rockets, each delivering up to 1.5 tonnes of exploration-enabling cargo to the Moon’s surface, from scientific instruments and rovers to vital resources for astronauts such as food, water and air.

Earlier this year, ESA selected Thales Alenia Space Italy to lead the development of the first Argonaut lander, or Lunar Descent Element. Today’s signing ceremony took place in a symbolic location: the LUNA analogue facility at EAC, home to a full-scale Argonaut model – a tangible vision of Europe’s future presence on the Moon.

Machine learning algorithm rapidly reconstructs 3D images from X-ray data

Soon, researchers may be able to create movies of their favorite protein or virus better and faster than ever before. Researchers at the Department of Energy’s SLAC National Accelerator Laboratory have pioneered a new machine learning method—called X-RAI (X-Ray single particle imaging with Amortized Inference)—that can “look” at millions of X-ray laser-generated images and create a three-dimensional reconstruction of the target particle. The team recently reported their findings in Nature Communications.

X-RAI’s ability to sort through a massive number of images and learn as it goes could remove limits on data-gathering, allowing researchers to see molecules up close—and perhaps even on the move. “There is really no limit” to the dataset size it can handle, said SLAC staff scientist Frédéric Poitevin, one of the study’s principal investigators.

Humans bring gender bias to their interactions with AI, finds study

Humans bring gender biases to their interactions with Artificial Intelligence (AI), according to new research from Trinity College Dublin and Ludwig-Maximilians Universität (LMU) Munich.

The study, involving 402 participants, found that people exploited female-labeled AI and distrusted male-labeled AI to much the same extent as they do human partners bearing the same gender labels.

Notably, in the case of female-labeled AI, the study found that exploitation in the Human-AI setting was even more prevalent than in the case of human partners with the same gender labels.
