
In the financial sector, its use could enhance risk assessment and fraud detection.


A UK-based firm has launched the world’s first quantum large language model (QLLM). Developed by SECQAI, the QLLM is claimed to be capable of shaping the future of AI.

The company integrated quantum computing into traditional AI models to improve efficiency and problem-solving.

According to a report, the development involved creating an in-house quantum simulator with gradient-based learning and a quantum attention mechanism.
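As a rough illustration of what a classically simulated, quantum-inspired attention layer might look like, the sketch below scores query/key pairs with a fidelity-style overlap instead of the usual scaled dot product. It is a toy example for intuition only, not SECQAI's actual simulator or architecture.

```python
import numpy as np

# Toy, purely illustrative sketch (NOT SECQAI's implementation): classical
# attention where the similarity score is a fidelity-like overlap |<q|k>|^2
# between normalized "state" vectors, mimicking how a simulated quantum
# kernel might score query/key pairs.

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def quantum_inspired_attention(Q, K, V):
    Qn, Kn = normalize(Q), normalize(K)
    scores = np.abs(Qn @ Kn.T) ** 2              # fidelity-style overlap
    weights = scores / scores.sum(axis=-1, keepdims=True)
    return weights @ V                            # weighted mix of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(quantum_inspired_attention(Q, K, V).shape)  # (4, 8)
```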

The quest to halt or reverse aging has long captivated human imagination. By 2032, could artificial intelligence (AI) make this aspiration a reality? Futurist Ray Kurzweil, renowned for his forward-thinking predictions, believes so. He envisions a future where AI plays a pivotal role in achieving “longevity escape velocity,” a state where life expectancy increases more than one year per year, effectively outpacing aging.
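For intuition, the toy calculation below shows the arithmetic behind longevity escape velocity: if each passing year adds more than one year of remaining life expectancy, the remaining total never shrinks. The numbers are invented purely for illustration.

```python
# Toy illustration of "longevity escape velocity" (made-up numbers):
# each calendar year costs 1 year but hypothetically adds 1.2 years of
# remaining life expectancy, so the remaining total keeps growing.
remaining = 30.0          # years of life expectancy left today
gain_per_year = 1.2       # hypothetical yearly gain from new medicine

for year in range(1, 6):
    remaining = remaining - 1 + gain_per_year
    print(f"year {year}: {remaining:.1f} years of expectancy remaining")
```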

On September 20, as part of the TRIADS Speaker Series, philosopher David Chalmers will visit WashU to pose a seemingly straightforward question: “Can ChatGPT Think?”

While Chalmers isn’t in the business of providing a direct “yes” or “no” answer to philosophical quandaries like these, he’s perhaps one of the best-qualified minds to ask the question and unravel its potential implications. Whether in the form of books or TED Talks, Chalmers has grappled with the nature of human consciousness for the better part of three decades. And on a parallel track, he has kept a close eye on the development of artificial intelligence, penning journal articles on the subject and presenting at AI conferences since the early ’90s.

Chalmers, now a New York University Professor of Philosophy and Director of the NYU Center for Mind, Brain, and Consciousness, spoke via Zoom about the marvels and mysteries of ChatGPT, how he uses philosophical questions to gauge the progress of large language models, and his two years spent at Washington University as a postdoctoral fellow.



A small team of AI researchers from Stanford University and the University of Washington has found a way to train an AI reasoning model for a fraction of the price paid by big corporations that produce widely known products such as ChatGPT. The group has posted a paper on the arXiv preprint server describing their efforts to inexpensively train chatbots and other AI reasoning models.

Corporations such as Google and Microsoft have made clear their intentions to be leaders in the development of chatbots with ever-improving skills. These efforts are notoriously expensive and tend to involve the use of energy-intensive server farms.

More recently, a Chinese company called DeepSeek released an LLM with capabilities on par with those produced in the West, developed at far lower cost. That announcement sent the stock prices of many Western tech companies into a nosedive.

Extraterrestrial landers sent to gather samples from the surface of distant moons and planets have limited time and battery power to complete their missions. Aerospace and computer science researchers at The Grainger College of Engineering, University of Illinois Urbana-Champaign trained a model to autonomously assess terrain and scoop samples quickly, then watched it demonstrate its skill on a robot at a NASA facility.

Aerospace Ph.D. student Pranay Thangeda said the team used their robotic lander arm to collect scooping data on a variety of materials, from sand to rocks, resulting in a database of 6,700 data points. The two terrains in NASA’s Ocean World Lander Autonomy Testbed at the Jet Propulsion Laboratory were brand new to the model, which operated the JPL robotic arm remotely.
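As a hypothetical sketch of how such prior experience could drive on-the-spot scooping decisions (this is not the paper's actual method), the snippet below picks the action with the highest predicted yield using a simple nearest-neighbor estimate over a simulated experience database.

```python
import numpy as np

# Hypothetical sketch (NOT the paper's method): choose the scooping action
# whose expected yield is highest, estimated by averaging the yields of the
# nearest prior experiences in (terrain feature, action parameter) space.

rng = np.random.default_rng(1)
# Simulated experience database: columns = terrain feature, action, yield
experience = rng.random((6700, 3))

def predict_yield(terrain, action, k=25):
    query = np.array([terrain, action])
    dists = np.linalg.norm(experience[:, :2] - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return experience[nearest, 2].mean()

def best_action(terrain, candidates):
    return max(candidates, key=lambda a: predict_yield(terrain, a))

print(best_action(0.4, np.linspace(0, 1, 11)))
```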

The study, “Learning and Autonomy for Extraterrestrial Terrain Sampling: An Experience Report from OWLAT Deployment,” was published in the AIAA Scitech Forum.

In today’s AI news, OpenAI released its o3-mini model one week ago, offering both free and paid users a more accurate, faster, and cheaper alternative to o1-mini. Now, OpenAI has updated o3-mini again, this time with a revised chain of thought.

In other advancements, Hugging Face and Physical Intelligence have quietly launched Pi0 (Pi-Zero) this week, the first foundational model for robots that translates natural language commands directly into physical actions. “Pi0 is the most advanced vision language action model,” said Remi Cadene, a research scientist at Hugging Face.

And, one year later, the Rabbit R1 is actually good now. It launched to reviews like “avoid this AI gadget,” but 12 months have passed. Where is the Rabbit R1 now? Well, with a relentless pipeline of updates and novel AI ideas, it’s actually pretty good now.

In videos, moderator Shirin Ghaffary (Reporter, Bloomberg News) leads an expert panel including Chase Lochmiller (CEO, Crusoe), Costi Perricos (Global GenAI Business Leader, Deloitte), and Varun Mohan (Co-Founder and CEO, Codeium) that asks: how are we building the infrastructure to support this massive global technological revolution?

Meanwhile, humans are terrible at detecting lies, says psychologist Riccardo Loconte… but what if we had an AI-powered tool to help? He introduces his team’s work on successfully training an AI to recognize falsehoods.