
AI-powered LED system delivers stable wireless power for indoor IoT devices

Scientists at Science Tokyo have developed the world’s first automatic, adaptive, dual-mode light-emitting diode (LED)-based optical wireless power transmission system, which operates seamlessly under both dark and bright lighting conditions. Combined with artificial-intelligence-powered image recognition, the system can efficiently power multiple devices in sequence without interruption. Because it is LED-based, it offers a low-cost, safe solution for building sustainable indoor Internet of Things infrastructure.

With the rapid development of the Internet of Things (IoT), demand for efficient and flexible power solutions is also increasing. Traditional power delivery methods, such as batteries and cable connections, have many drawbacks: batteries need frequent charging and replacement, while cables restrict device mobility.

Optical wireless power transmission (OWPT) is an emerging technology that can address these limitations. In OWPT, energy is transmitted without physical wires by converting electricity into light, transmitting the light, and then reconverting it back into electricity using photovoltaic (PV) receivers.
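The conversion chain described above can be sketched as a product of per-stage efficiencies. The numbers below are illustrative assumptions, not figures from the study:

```python
# Sketch of the OWPT energy chain: electricity -> light -> electricity.
# All efficiency values are illustrative assumptions, not measured data.

def owpt_delivered_power(p_in_w: float,
                         eta_led: float = 0.4,    # LED: electricity -> light (assumed)
                         eta_link: float = 0.8,   # fraction of light reaching the PV cell (assumed)
                         eta_pv: float = 0.3      # PV cell: light -> electricity (assumed)
                         ) -> float:
    """Return the electrical power delivered to the device, in watts."""
    return p_in_w * eta_led * eta_link * eta_pv

# 10 W of input electricity yields about 0.96 W under these assumed numbers.
print(owpt_delivered_power(10.0))
```

The product structure makes clear why each stage matters: the overall efficiency can never exceed its weakest link.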

AI math genius delivers 100% accurate results

At the 2024 International Mathematical Olympiad (IMO), one competitor did so well that it would have been awarded the Silver Prize, except for one thing: it was an AI system. This was the first time AI had achieved a medal-level performance in the competition’s history. In a paper published in the journal Nature, researchers detail the technology behind this remarkable achievement.

The AI is AlphaProof, a sophisticated program developed by Google DeepMind that learns to solve complex mathematical problems. The achievement at the IMO was impressive enough, but what really makes AlphaProof special is its ability to find and correct errors. While large language models (LLMs) can solve many mathematical problems, they often can’t guarantee the accuracy of their solutions: there may be hidden flaws in their reasoning.

AlphaProof is different because its answers are always 100% correct. That’s because it uses a specialized software environment called Lean (originally developed by Microsoft Research) that acts like a strict teacher verifying every logical step. This means the computer itself verifies answers, so its conclusions are trustworthy.
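As a toy illustration (not AlphaProof’s actual code), the Lean checker accepts a theorem only when every step type-checks; a false or incomplete proof is rejected outright:

```lean
-- A proof Lean accepts: both sides reduce to the same value.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- A proof built from a library lemma; Lean verifies the application.
theorem my_add_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- Swapping in a wrong claim (e.g. `2 + 2 = 5 := rfl`) would fail to
-- compile, which is why accepted results are trustworthy.
```

This is the “strict teacher” in practice: nothing gets through unless the kernel can verify every logical step.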

Nature-inspired navigation system helps robots traverse complex environments without GPS

Robots could soon be able to autonomously complete search and rescue missions, inspections, complex maintenance operations, and various other real-world tasks. To do this, however, they must be able to navigate unknown and complex environments smoothly, without breaking down or getting stuck in ways that would require human intervention.

Most autonomous navigation systems rely on the global positioning system (GPS), which provides information about where a robot is located within a map. In many environments, however, including caves, unstructured spaces, and collapsed buildings, GPS either does not work or becomes unreliable.

Researchers at Beijing Institute of Technology recently developed a new nature-inspired system that could improve robot navigation in unstructured and complex environments without relying on GPS. Their proposed framework, outlined in a paper set to be published in a Cell Press journal and currently available on the SSRN preprint server, is inspired by three distinct biological navigation strategies observed in insects, birds, and rodents.
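One navigation strategy commonly attributed to insects in such work is path integration (dead reckoning): estimating position by accumulating heading and distance, with no external positioning signal. A minimal sketch, using an assumed (heading, distance) step format rather than the paper’s actual framework:

```python
import math

# Minimal path-integration (dead-reckoning) sketch, the insect-inspired
# strategy of tracking position by summing movement steps without GPS.
# The (heading_radians, distance) step format is our assumption.

def integrate_path(steps):
    """Accumulate (heading, distance) steps into an (x, y) position estimate."""
    x = y = 0.0
    for heading, dist in steps:
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return x, y

# Walk 3 units east, then 4 units north: the home vector is ~(3, 4).
print(integrate_path([(0.0, 3.0), (math.pi / 2, 4.0)]))
```

In practice such estimates drift with sensor noise, which is why biological and robotic systems combine path integration with landmark-based cues.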

Anthropic claims of Claude AI-automated cyberattacks met with doubt

Anthropic reports that a Chinese state-sponsored threat group, tracked as GTG-1002, carried out a cyber-espionage operation that was largely automated through the abuse of the company’s Claude Code AI model.

However, Anthropic’s claims immediately sparked widespread skepticism, with security researchers and AI practitioners calling the report “made up” or accusing the company of overstating the incident.

“I agree with Jeremy Kirk’s assessment of Anthropic’s GenAI report. It’s odd. Their prior one was, too,” cybersecurity expert Kevin Beaumont posted on Mastodon.

Magnetically Guided Microrobots Deliver Drugs with Pinpoint Accuracy

After numerous successful trials in the model, the team sought to demonstrate what the microrobot could achieve under real clinical conditions. First, they demonstrated in pigs that all three navigation methods worked and that the microrobot remained clearly visible throughout the entire procedure. The investigators then navigated microrobots through the cerebrospinal fluid of a sheep.

“This complex anatomical environment has enormous potential for further therapeutic interventions, which is why we were so excited that the microrobot was able to find its way in this environment too,” Landers noted. “In vivo experiments conducted with an ovine model demonstrated the platform’s ability to operate within anatomically constrained regions of the central nervous system,” the investigators stated in their paper. “Furthermore, in a porcine model, all locomotion strategies were validated under clinical conditions, confirming precise microrobot navigation within the cerebrovascular system and highlighting the system’s compatibility with versatile in vivo environments.”

In addition to treating thrombosis, these new microrobots could also be used for localized infections or tumors. At every stage of development, the research team has remained focused on their goal, which is to ensure that everything they create is ready for use in operating theaters as soon as possible. The next goal is to look at human clinical trials. “The use of materials that have been FDA approved for other intravascular applications, coupled with the modular design of the robotic platform, should simplify translation and adaptability to a range of clinical workflows,” the authors concluded. Speaking about what motivates the whole team, Landers said, “Doctors are already doing an incredible job in hospitals. What drives us is the knowledge that we have a technology that enables us to help patients faster and more effectively and to give them new hope through innovative therapies.”

The Next Superintelligence Will Not Just Think. It Will Bleed

Biology needs the same kind of substrate. Without it, we are still guessing. With it, discovery starts to look predictable by design.

Drug development still leans on animal models and small patient cohorts to make billion-dollar bets. Those proxies teach us something, but they do not teach how a molecule behaves across the complexity of human biology. That is why nine out of ten drugs that succeed in animals fail in human clinical trials.

Biology needs an environment that gives intelligence the same systematic feedback that data centers gave to computation. That is what biological data centers provide. Robotic systems that sustain tens of thousands of standardized human tissues at once. Tissues that are vascularized and immune competent, clinically indistinguishable from patient biopsies under blinded review. Tissues that can be dosed, that bleed, that heal.

Robots trained with spatial dataset show improved object handling and awareness

When it comes to navigating their surroundings, machines have a natural disadvantage compared to humans. To help hone the visual perception abilities they need to understand the world, researchers have developed a novel training dataset for improving spatial awareness in robots.

In new research, experiments showed that robots trained with this dataset, called RoboSpatial, outperformed those trained with baseline models at the same robotic task, demonstrating a complex understanding of both spatial relationships and physical object manipulation.

For humans, spatial awareness shapes how we interact with the environment, from recognizing different people to maintaining an awareness of our body’s movements and position. Despite previous attempts to imbue robots with these skills, efforts have fallen short, as most are trained on data that lacks sophisticated spatial understanding.

Novel 3D nanofabrication techniques enable miniaturized robots

In the 1980s when micro-electro-mechanical systems (MEMS) were first created, computer engineers were excited by the idea that these new devices that combine electrical and mechanical components at the microscale could be used to build miniature robots.

The idea of shrinking robotic mechanisms to such tiny sizes was particularly exciting given the potential to achieve exceptional performance in metrics such as speed and precision by leveraging a robot’s smaller size and mass. But making robots at smaller scales is easier said than done due to limitations in microscale 3D manufacturing.

Some four decades later, Ph.D. students Steven Man and Sukjun Kim, working with Mechanical Engineering Professor Sarah Bergbreiter, have developed a 3D nanofabrication process to build tiny Delta robots called microDeltas. Delta robots at larger scales (typically two to four feet tall) are used for picking, placing, and sorting tasks in manufacturing, packaging, and electronics assembly. The much smaller microDeltas have potential real-world applications in micromanipulation, microassembly, minimally invasive surgery, and wearable haptic devices.
