
In today’s AI news, backed by $200 million in funding at a $2 billion valuation, Scott Wu and his team at Cognition are building an AI tool that could upend the whole industry. Devin is an autonomous AI agent that, in theory, writes code itself, with no people involved, and can complete entire projects typically assigned to developers.

In other advancements, OpenAI is changing how it trains AI models to explicitly embrace “intellectual freedom … no matter how challenging or controversial a topic may be,” the company says in a new policy. OpenAI is releasing a significantly expanded version of its Model Spec, a document that defines how its AI models should behave — and is making it free for anyone to use or modify.

Then, xAI, the artificial intelligence company founded by Elon Musk, is set to launch Grok 3 on Monday, Feb. 17. According to xAI, this latest version of its chatbot, which Musk describes as “scary smart,” represents a major step forward, improving reasoning, computational power and adaptability. Grok 3’s development was accelerated by its Colossus supercomputer, which was built in just eight months, powered by 100,000 Nvidia H100 GPUs.

And, large language models can learn complex reasoning tasks without relying on large datasets, according to a new study by researchers at Shanghai Jiao Tong University. Their findings show that with just a small batch of well-curated examples, you can train an LLM for tasks that were thought to require tens of thousands of training instances.
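
As a rough illustration of that recipe, the sketch below fine-tunes a causal language model on a handful of hand-curated reasoning examples using the Hugging Face transformers library. The base model name, the example data, and the hyperparameters are placeholders for illustration, not details taken from the study.

```python
# Minimal sketch: supervised fine-tuning of a causal LM on a small, curated
# set of reasoning examples. Model name, data, and hyperparameters are
# illustrative placeholders, not details from the study.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

# A few hundred hand-picked prompt/solution pairs with detailed reasoning.
examples = [
    {"prompt": "Q: If 3x + 5 = 20, what is x?",
     "solution": "3x = 20 - 5 = 15, so x = 5."},
    # ... more carefully curated examples ...
]

model_name = "Qwen/Qwen2.5-7B-Instruct"   # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def to_features(row):
    # Concatenate prompt and solution into one training sequence.
    text = row["prompt"] + "\n" + row["solution"] + tokenizer.eos_token
    enc = tokenizer(text, truncation=True, max_length=1024)
    enc["labels"] = enc["input_ids"].copy()
    return enc

dataset = Dataset.from_list(examples).map(
    to_features, remove_columns=["prompt", "solution"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="small-batch-sft",
                           per_device_train_batch_size=1,
                           num_train_epochs=3,
                           learning_rate=1e-5),
    train_dataset=dataset,
)
trainer.train()
```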

Also featured: OpenAI’s new o1 model, which focuses on slower, more deliberate reasoning, much like how humans think, in order to solve complex problems. Then, join Turing Award laureate Yann LeCun, Chief AI Scientist at Meta and Professor at NYU, as he discusses the future of artificial intelligence with Link Ventures’ John Werner and how open-source development is driving innovation. In this wide-ranging conversation, LeCun explains why AI systems won’t “take over” but will instead serve as empowering assistants.

Time, by its very nature, is a paradox. We live anchored in the present, yet we are constantly traveling between the past and the future—through memories and aspirations alike. Technological advancements have accelerated this relationship with time, turning what was once impossible into a tangible reality. At the heart of this transformation lies Artificial Intelligence (AI), which, far from being just a tool, is becoming an extension of the human experience, redefining how we interact with the world.

In the past, automatic doors were the stuff of science fiction. Paper maps were essential for travel. Today, these have been replaced by smart sensors and navigation apps. The smartphone, a small device that fits in the palm of our hand, has become an extension of our minds, connecting us to the world instantly. Even its name reflects its evolution—from a mere mobile phone to a “smart” device, now infused with traces of intelligence, albeit artificial.

And it is in this landscape that AI takes center stage. The debate over its risks and benefits has been intense. Many fear a stark divide between humans and machines, as if they are destined for an inevitable clash. But what if, instead of adversaries, we saw technology as an ally? The fusion of human and machine is already underway, quietly shaping our daily lives.

When applied effectively, AI becomes a discreet assistant, capable of anticipating our needs and enhancing productivity. Studies suggest that by 2035, AI could double annual economic growth, transforming not only business but society as a whole. Naturally, some jobs will disappear, but new ones will emerge. History has shown that evolution is inevitable and that the future belongs to those who adapt.

But what about AI’s role in our personal lives? From music recommendations tailored to our mood to virtual assistants that complete our sentences before we do, AI is already recognizing behavioral patterns in remarkable ways. Through Machine Learning, computer systems do more than just store data—they learn from it, dynamically adjusting and improving. Deep Learning takes this concept even further, simulating human cognitive processes to categorize information and make decisions based on probabilities.
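
As a toy illustration of that idea, and nothing tied to any particular product, the snippet below fits a scikit-learn classifier on a few made-up listening-history features and returns a probability for each mood label rather than a single hard answer.

```python
# Toy illustration of "learning from data" and probabilistic prediction:
# a classifier fit on made-up listening features that reports a probability
# for each mood label instead of a hard answer. Data and labels are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up features: [tempo (bpm), loudness (0-1), hour of day]
X = np.array([
    [ 70, 0.2, 23],   # slow, quiet, late night
    [ 65, 0.3, 22],
    [140, 0.9, 18],   # fast, loud, evening
    [150, 0.8, 19],
])
y = np.array(["calm", "calm", "energetic", "energetic"])

model = LogisticRegression().fit(X, y)

# The model does not just store the rows above; it generalises to new input
# and reports how confident it is in each label.
new_track = np.array([[120, 0.7, 20]])
print(dict(zip(model.classes_, model.predict_proba(new_track)[0])))
```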

But what if the relationship between humans and machines could transcend time itself? What if we could leave behind an interactive digital legacy that lives on forever? This is where a revolutionary concept emerges: digital immortality.

ETER9 is a project that embodies this vision, exploring AI’s potential to preserve interactive memories, experiences, and conversations beyond physical life. Imagine a future where your great-grandchildren could “speak” with you, engaging with a digital presence that reflects your essence. More than just photos or videos, this would be a virtual entity that learns, adapts, and keeps individuality alive.

The truth is, whether we realize it or not, we are all being shaped by algorithms that influence our online behavior. Platforms like Facebook are designed to keep us engaged for as long as possible. But is this the right path? A balance must be found—a point where technology serves humanity rather than the other way around.

We don’t change the world through empty criticism. We change it through innovation and the courage to challenge the status quo. Surrounding ourselves with intelligent people is crucial; if we are the smartest in the room, perhaps it’s time to find a new room.

The future has always fascinated humanity. The unknown evokes fear, but it also drives progress. Many of history’s greatest inventions were once deemed impossible. But “impossible” is only a barrier until it is overcome.

Sometimes, it feels like we are living in the future before the world is ready. But maturity is required to absorb change. Knowing when to pause and when to move forward is essential.

And so, in a present that blends with the future, we arrive at the ultimate question:

What does it mean to be eternal?

Perhaps the answer lies in our ability to dream, create, and leave a legacy that transcends time.

After all, isn’t digital eternity our true journey through time?

__
Copyright © 2025, Henrique Jorge

Physicists have performed a groundbreaking simulation they say sheds new light on an elusive phenomenon that could determine the ultimate fate of the Universe.

Pioneering research in quantum field theory around 50 years ago proposed that the universe may be trapped in a false vacuum – meaning it appears stable but in fact could be on the verge of transitioning to an even more stable, true vacuum state. While this process could trigger a catastrophic change in the Universe’s structure, experts agree that predicting the timeline is challenging, but it is likely to occur over an astronomically long period, potentially spanning millions of years.
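
A standard textbook picture, not the specific model used in this study, captures the idea: a scalar field sits in a local minimum of its potential (the false vacuum), separated by a barrier from a deeper minimum (the true vacuum), and decay proceeds by nucleating bubbles of true vacuum at an exponentially suppressed rate.

```latex
% Illustrative textbook form, not the model simulated in the study:
% a scalar-field potential with a metastable "false" minimum and a deeper
% "true" minimum, tilted by a small energy difference \epsilon.
\[
  V(\phi) \;=\; \frac{\lambda}{4}\left(\phi^{2} - v^{2}\right)^{2}
          \;-\; \epsilon\,\frac{\phi}{2v},
  \qquad 0 < \epsilon \ll \lambda v^{4}.
\]
% For small \epsilon the minimum near \phi \approx -v is only metastable
% (the false vacuum); the minimum near \phi \approx +v is the true vacuum.
% Decay proceeds by nucleating bubbles of true vacuum at a rate per unit
% volume that is exponentially suppressed by the Euclidean bounce action S_E:
\[
  \Gamma / V \;\sim\; A\, e^{-S_E}.
\]
```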

In an international collaboration between three research institutions, the team report gaining valuable insights into false vacuum decay – a process linked to the origins of the cosmos and the behaviour of particles at the smallest scales. The collaboration was led by Professor Zlatko Papic, from the University of Leeds, and Dr Jaka Vodeb, from the Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich, Germany.

A game of chess requires its players to think several moves ahead, a skill that computer programs have mastered over the years. Back in 1997, IBM’s Deep Blue supercomputer famously beat the then world chess champion Garry Kasparov. Later, in 2017, an artificial intelligence (AI) program developed by Google DeepMind, called AlphaZero, triumphed over the best computerized chess engines of the time after training itself to play the game in a matter of hours.

More recently, some mathematicians have begun to actively pursue the question of whether AI programs can also help in cracking some of the world’s toughest problems. But, whereas an average game of chess lasts about 30 to 40 moves, these research-level math problems require solutions that take a million or more steps, or moves.

In a paper appearing on the arXiv preprint server, a team led by Caltech’s Sergei Gukov, the John D. MacArthur Professor of Theoretical Physics and Mathematics, describes developing a new type of machine-learning algorithm that can solve math problems requiring extremely long sequences of steps. The team used their algorithm to solve families of problems related to an overarching decades-old math problem called the Andrews–Curtis conjecture. In essence, the algorithm can think farther ahead than even advanced programs like AlphaZero.
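
The paper’s actual method is a machine-learning algorithm; purely as a generic sketch of what searching for very long move sequences can look like, the toy beam search below prunes to the most promising partial sequences at every step. The puzzle, the move set, and the hand-written heuristic are all hypothetical stand-ins, not the authors’ states, moves, or learned scoring.

```python
# Generic sketch of long-horizon search over move sequences. A beam search
# keeps only the best-scoring partial sequences at each step, so it can follow
# paths far longer than exhaustive lookahead would allow. The "reach a target
# number" puzzle is a stand-in for the group-presentation states in the real
# problem; nothing here is the authors' algorithm.
from heapq import nlargest

MOVES = {
    "inc": lambda n: n + 1,
    "dec": lambda n: n - 1,
    "dbl": lambda n: n * 2,
}

def beam_search(start: int, target: int, beam_width: int = 16,
                max_steps: int = 10_000):
    def score(n: int) -> int:
        # Hand-written heuristic: states closer to the target look better.
        return -abs(n - target)

    beam = [(start, [])]                      # (state, move sequence so far)
    for _ in range(max_steps):
        candidates = []
        for state, path in beam:
            if state == target:
                return path                   # full sequence of moves found
            for name, fn in MOVES.items():
                candidates.append((fn(state), path + [name]))
        # Prune to the most promising states; this keeps the frontier small
        # even when solutions require very long move sequences.
        beam = nlargest(beam_width, candidates, key=lambda c: score(c[0]))
    return None

path = beam_search(1, 1_000)
print(len(path), "moves, starting with:", path[:10])
```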

In a milestone that brings quantum computing tangibly closer to large-scale practical use, scientists at Oxford University’s Department of Physics have demonstrated the first instance of distributed quantum computing. Using a photonic network interface, they successfully linked two separate quantum processors to form a single, fully connected quantum computer, paving the way to tackling computational challenges previously out of reach. The results have been published in Nature.
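
As a conceptual sketch only, not the Oxford team’s hardware or exact protocol, the Qiskit circuit below shows the standard entanglement-assisted non-local CNOT: two modules share one Bell pair and exchange two classical bits, which lets a control qubit in one module act on a target qubit in the other.

```python
# Conceptual sketch (not the experiment's hardware or exact protocol):
# the standard entanglement-assisted non-local CNOT between two modules
# that share one Bell pair and exchange two classical bits. Requires Qiskit.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

ctrl   = QuantumRegister(1, "ctrl")    # data qubit in module A
link_a = QuantumRegister(1, "link_a")  # module A's half of the Bell pair
link_b = QuantumRegister(1, "link_b")  # module B's half of the Bell pair
targ   = QuantumRegister(1, "targ")    # data qubit in module B
m1 = ClassicalRegister(1, "m1")        # classical bit sent A -> B
m2 = ClassicalRegister(1, "m2")        # classical bit sent B -> A

qc = QuantumCircuit(ctrl, link_a, link_b, targ, m1, m2)

# 1. Distribute entanglement between the modules (a photonic link in hardware).
qc.h(link_a)
qc.cx(link_a, link_b)

# 2. Module A entangles its data qubit with its half of the pair and measures.
qc.cx(ctrl, link_a)
qc.measure(link_a, m1)
with qc.if_test((m1, 1)):              # classical message A -> B
    qc.x(link_b)

# 3. Module B applies the local CNOT, then measures its link qubit in X basis.
qc.cx(link_b, targ)
qc.h(link_b)
qc.measure(link_b, m2)
with qc.if_test((m2, 1)):              # classical message B -> A
    qc.z(ctrl)

print(qc.draw())
```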