AI and high-performance computing consume vast amounts of power, but quantum computing may offer a solution, with different underlying platforms offering different ways to scale.
Deep neural networks (DNNs) have become a cornerstone of modern AI technology, driving a thriving field of research in image-related tasks. These systems have found applications in medical diagnosis, automated data processing, computer vision, and various forms of industrial automation, to name a few.
As reliance on AI models grows, so does the need to test them thoroughly using adversarial examples. Simply put, adversarial examples are images that have been strategically modified with noise to trick an AI into making a mistake. Understanding adversarial image generation techniques is essential for identifying vulnerabilities in DNNs and for developing more secure, reliable systems.
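To make the idea concrete, here is a minimal sketch of one widely known technique, the fast gradient sign method (FGSM), which perturbs each pixel slightly in the direction that increases the model's loss. The specific model, epsilon value, and input shape below are illustrative assumptions, not details from any particular study.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Any pretrained image classifier will do; here a stock ResNet-18 from torchvision.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, true_label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge every pixel slightly in the
    direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step by epsilon in the sign of the gradient, then clamp to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Usage (assumed setup): `x` is a (1, 3, 224, 224) image tensor with values in [0, 1],
# and `y` is its correct class index.
# adv = fgsm_attack(x, torch.tensor([y]))
# model(adv).argmax() will often differ from y, even though adv looks unchanged to a human.
```

The perturbation is bounded by epsilon, which is why adversarial images typically look identical to the original while still flipping the model's prediction.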
Global navigation satellite systems (GNSS) are vital for positioning autonomous vehicles, buses, drones, and outdoor robots. Yet their accuracy often degrades in dense urban areas due to signal blockage and reflections.
Now, researchers have developed a GNSS-only method that delivers stable, accurate positioning without relying on fragile carrier-phase ambiguity resolution. Tested across six challenging urban scenarios, the approach consistently outperformed existing methods, enabling safer and more reliable autonomous navigation.
AI might know where you’re going before you do. Researchers at Northeastern University used large language models, the kind of advanced artificial intelligence normally designed to process and generate language, to predict human movement.
RHYTHM, their innovative tool, “can revolutionize the forecasting of human movements,” predicting “where you’re going to be in the next 30 minutes or the next 25 hours,” said Ryan Wang, an associate professor and vice chair of research in civil and environmental engineering at Northeastern.
The hope is that RHYTHM will improve domains like transportation and traffic planning, making everyday life easier; in extreme cases, it could even be deployed to help respond to natural disasters, highway accidents and terrorist attacks.
Large language models (LLMs), the computational models underpinning the functioning of ChatGPT, Gemini and other widely used artificial intelligence (AI) platforms, can rapidly source information and generate texts tailored for specific purposes. As these models are trained on large amounts of texts written by humans, they could exhibit some human-like biases, which are inclinations to prefer specific stimuli, ideas or groups that deviate from objectivity.
One of these biases, known as the “us vs. them” bias, is the tendency of people to prefer groups they belong to, viewing other groups less favorably. This effect is well-documented in humans, but it has so far remained largely unexplored in LLMs.
Researchers at the University of Vermont’s Computational Story Lab and Computational Ethics Lab recently carried out a study investigating the possibility that LLMs “absorb” the “us vs. them” bias from the texts that they are trained on, exhibiting a similar tendency to prefer some groups over others. Their paper, posted to the arXiv preprint server, suggests that many widely used models, including GPT-4.1, DeepSeek-3.1, Gemma-2.0, Grok-3.0 and LLaMA-3.1, tend to express a preference for groups that are referred to favorably in their training texts.
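As a rough illustration of how such a tendency can be probed (not the study's actual protocol), one can ask a model to complete "we"-framed versus "they"-framed sentences many times and compare the sentiment of the completions. The `generate` and `sentiment` helpers below are hypothetical placeholders for any text-generation API and any sentiment scorer.

```python
# Hypothetical sketch: probe an LLM for "us vs. them" bias by comparing
# the average sentiment of completions to in-group vs. out-group prompts.
# `generate(prompt)` stands in for any completion/chat API that returns text;
# `sentiment(text)` stands in for any scorer returning a value in [-1, 1].

IN_GROUP_PROMPT = "We are"
OUT_GROUP_PROMPT = "They are"

def bias_gap(generate, sentiment, n_samples=100):
    """Mean sentiment of 'We are ...' completions minus mean sentiment of
    'They are ...' completions; a positive gap suggests in-group favoritism."""
    in_scores = [sentiment(generate(IN_GROUP_PROMPT)) for _ in range(n_samples)]
    out_scores = [sentiment(generate(OUT_GROUP_PROMPT)) for _ in range(n_samples)]
    return sum(in_scores) / n_samples - sum(out_scores) / n_samples
```

Repeating this over many samples and several models is one simple way to quantify whether in-group framings are systematically described more favorably than out-group ones.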
Harnessing the power of artificial intelligence to study plant microbiomes—communities of microbes living in and around plants—could help improve soil health, boost crop yields, and restore degraded lands. But there’s a catch: AI needs massive amounts of reliable data to learn from, and that kind of consistent information about plant-microbe interactions has been hard to come by.
In a new paper in PLOS Biology, researchers in the Biosciences Area at Lawrence Berkeley National Laboratory (Berkeley Lab) led an international consortium of scientists to study whether small plastic growth chambers called EcoFABs could help solve this problem.
Building on their previous work with microbe-free plants, the scientists used the Berkeley Lab-developed devices to run identical plant–microbe experiments across labs on three continents and got matching results. The breakthrough shows that EcoFABs can remove one of the biggest barriers in microbiome research: the difficulty of reproducing experiments in different places.
Enzymes with specific functions are becoming increasingly important in industry, medicine and environmental protection. For example, they make it possible to synthesize chemicals in a more environmentally friendly way, produce active ingredients in a targeted manner or break down environmentally harmful substances.
Researchers from Gustav Oberdorfer’s working group at the Institute of Biochemistry at Graz University of Technology (TU Graz), together with colleagues from the University of Graz, have now published a study in Nature describing a new method for the design of customized enzymes.
The technology, called Riff-Diff (Rotamer Inverted Fragment Finder–Diffusion), makes it possible to build the protein structure accurately and efficiently around the active center, rather than searching existing databases for a suitable structure. The resulting enzymes are not only significantly more active than previous artificial enzymes, but also more stable.
The developer of the popular curl command-line utility and library announced that the project will end its HackerOne security bug bounty program at the end of this month, after being overwhelmed by low-quality AI-generated vulnerability reports.
The change was first discovered in a pending commit to curl’s BUG-BOUNTY.md documentation, which removes all references to the HackerOne program.
Once merged, the file will be updated to state that the curl project no longer offers any rewards for reported bugs or vulnerabilities and will not help researchers obtain compensation from third parties either.
Wondering what your career looks like in our increasingly uncertain, AI-powered future? According to Palantir CEO Alex Karp, it’s going to involve less of the comfortable office work to which most people aspire, and more old-fashioned grunt work with your hands.
Speaking at the World Economic Forum yesterday, Karp insisted that the future of work is vocational — not just for those already in manufacturing and the skilled trades, but for the majority of humanity.
In the age of AI, Karp told attendees, a strong formal education in any of the humanities will soon spell certain doom.