
RisingAttacK: New technique can make AI ‘see’ whatever you want

Researchers have demonstrated a new way of attacking artificial intelligence computer vision systems, allowing them to control what the AI “sees.” The research shows that the new technique, called RisingAttacK, is effective at manipulating all of the most widely used AI computer vision systems.

At issue are so-called “adversarial attacks,” in which someone manipulates the data being fed into an AI system to control what the system sees, or does not see, in an image. For example, someone might manipulate an AI’s ability to detect pedestrians or other cars, which would cause problems for autonomous vehicles. Or a hacker could install code on an X-ray machine that causes an AI system to make inaccurate diagnoses.

“We wanted to find an effective way of hacking AI vision systems because these vision systems are often used in contexts that can affect human health and safety, from autonomous vehicles to health technologies,” says Tianfu Wu, co-corresponding author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University.
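The paper does not spell out RisingAttacK’s algorithm here, but the general idea behind adversarial attacks can be illustrated with the classic Fast Gradient Sign Method: nudge each input value a small step in the direction that most increases the model’s loss. The sketch below applies it to a toy logistic classifier; the model, weights, and numbers are all invented for illustration.

```python
import numpy as np

def fgsm_perturb(x, w, b, true_label, eps=0.1):
    """Fast Gradient Sign Method on a toy logistic classifier.

    Loss: binary cross-entropy of sigmoid(w.x + b) vs. true_label.
    For that loss, the gradient w.r.t. the input x is (p - y) * w,
    so the attack steps each input component by eps in the sign of
    that gradient -- the direction that most increases the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # model's probability of class 1
    grad_x = (p - true_label) * w            # dLoss/dx for sigmoid + cross-entropy
    return x + eps * np.sign(grad_x)         # small step that increases the loss

# Toy "image" with two pixels that the model classifies correctly:
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
p_orig = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # confident in class 1

x_adv = fgsm_perturb(x, w, b, true_label=1, eps=1.0)
p_adv = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))  # prediction flips
```

Real attacks on deep vision models follow the same recipe with much smaller `eps`, so the perturbed image looks unchanged to a human while the model’s output flips.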

Tesla’s JUICY New Impact Report (highlights in 10 mins!)

Tesla’s 2024 impact report highlights the company’s progress in accelerating the transition to sustainable energy through innovative technologies, including autonomy, AI, and reduced emissions, with a focus on expanding its ecosystem and making sustainable transportation and energy solutions more accessible.

Questions to inspire discussion.

Sustainable Transportation.

🚗 Q: How will Tesla’s robotaxi network impact transportation?

A: Tesla’s Autopilot-powered robotaxi network will be far safer than human drivers, lower emissions, and widen access to sustainable transportation, improving city sustainability and accelerating Tesla’s mission.

🏙️ Q: What are the benefits of Tesla vehicles compared to other options?

A: Tesla vehicles offer premium features rivaling luxury cars while maintaining a total cost of ownership comparable to mass market vehicles, providing significantly more value at a similar price point.

Tiny light-sensitive magnetic robots can clear up bacterial infections in sinuses

Tiny magnetic bots that are activated by light can clear bacterial infections deep in the sinus cavities, then be expelled by blowing out the nose.

A new study published in Science Robotics unveiled copper single–atom–doped bismuth oxoiodide microbots, each smaller than a grain of salt, that can be tracked and guided to the location of infection via X-ray imaging, thus providing a precise, minimally invasive therapeutic strategy for managing sinusitis clinically.

Sinusitis is a common respiratory condition often linked to biofilm produced by bacteria like Streptococcus pyogenes. This condition causes inflammation of the sinus lining and leads to symptoms such as reduced sense of smell, facial pain, and, in severe cases, even memory impairment.

Mathematical approach makes uncertainty in AI quantifiable

How reliable is artificial intelligence, really? An interdisciplinary research team at TU Wien has developed a method that allows for the exact calculation of how reliably a neural network operates within a defined input domain. In other words: It is now possible to mathematically guarantee that certain types of errors will not occur—a crucial step forward for the safe use of AI in sensitive applications.

From smartphones to self-driving cars, AI systems have become an everyday part of our lives. But in applications where safety is critical, one central question arises: Can we guarantee that an AI system won’t make serious mistakes—even when its input varies slightly?

A team from TU Wien—Dr. Andrey Kofnov, Dr. Daniel Kapla, Prof. Efstathia Bura and Prof. Ezio Bartocci—bringing together experts from mathematics, statistics and computer science, has now found a way to analyze neural networks, the brains of AI systems, in such a way that the possible range of outputs can be exactly determined for a given input range—and specific errors can be ruled out with certainty.
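The TU Wien method computes exact output ranges; a simpler and widely used relative of this idea is interval bound propagation, which pushes an input box through the network with interval arithmetic. The bounds it produces are guaranteed to contain every reachable output (though they may be looser than an exact analysis). The tiny network below is invented for illustration.

```python
import numpy as np

def interval_forward(lo, hi, weights, biases):
    """Propagate an input box [lo, hi] through a ReLU network.

    For y = W x + b, positive weights are routed to the matching bound
    and negative weights to the opposite bound, so the output box is a
    sound over-approximation: no input in the box can escape it.
    """
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        new_lo = Wp @ lo + Wn @ hi + b
        new_hi = Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Tiny 2-2-1 network: certify the output range for all inputs in [0,1]^2
W1, b1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
lo, hi = interval_forward(np.array([0.0, 0.0]), np.array([1.0, 1.0]),
                          [W1, W2], [b1, b2])
# Every input in the box maps to an output inside [lo[0], hi[0]]
```

If the certified interval excludes an unsafe output value, that error is mathematically ruled out for the whole input region, which is the kind of guarantee the TU Wien result targets.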

A machine-learning–powered spectral-dominant multimodal soft wearable system for long-term and early-stage diagnosis of plant stresses

MapS-Wear, a soft plant wearable, enables precise, in situ, and early-stage stress diagnosis to boost crop yield and quality.

The Path to Medical Superintelligence

Microsoft says it has developed an AI system that represents a ‘path to medical superintelligence’: it can handle ‘diagnostically complex and intellectually demanding’ cases and diagnose disease four times more accurately than a panel of human doctors.

https://microsoft.ai/new/the-path-to-medical-superintelligence/

https://arxiv.org/abs/2506.22405

“Benchmarked against real-world case records published each week in the New England Journal of Medicine, we show that the Microsoft AI Diagnostic Orchestrator (MAI-DxO) correctly diagnoses up to 85% of NEJM case proceedings, a rate more than four times higher than a group of experienced physicians. MAI-DxO also gets to the correct diagnosis more cost-effectively than physicians.”

AI that thinks like a doctor: a new era in medical diagnosis.

Imagine walking into a doctor’s office with a strange set of symptoms. Rather than jumping to conclusions, the doctor carefully asks questions, orders tests, and adjusts their thinking at every step based on what they learn. This back-and-forth process—called sequential diagnosis—is what real-world medicine is all about. But most AI systems haven’t been tested this way. Until now.

A new benchmark called Sequential Diagnosis is flipping the script.
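One common way to model the step-by-step reasoning that sequential diagnosis describes is iterative Bayesian updating: start with a prior over candidate diagnoses and revise it after each test result. The sketch below is a minimal illustration of that loop, not the benchmark’s actual method; the diseases, tests, and probabilities are all hypothetical.

```python
# Prior beliefs over candidate diagnoses (hypothetical numbers).
priors = {"flu": 0.5, "strep": 0.3, "mono": 0.2}

# P(test positive | disease), also hypothetical.
likelihood = {
    "rapid_strep": {"flu": 0.05, "strep": 0.90, "mono": 0.10},
    "monospot":    {"flu": 0.02, "strep": 0.05, "mono": 0.85},
}

def update(beliefs, test, positive):
    """One Bayes step: weight each hypothesis by the evidence likelihood."""
    posterior = {}
    for disease, prob in beliefs.items():
        lik = likelihood[test][disease] if positive else 1 - likelihood[test][disease]
        posterior[disease] = prob * lik
    total = sum(posterior.values())
    return {d: p / total for d, p in posterior.items()}  # renormalize

# Sequential diagnosis: each result reshapes the next step's beliefs.
beliefs = update(priors, "rapid_strep", positive=False)  # strep becomes unlikely
beliefs = update(beliefs, "monospot", positive=True)     # mono becomes dominant
best = max(beliefs, key=beliefs.get)
```

Each intermediate belief state can also drive the choice of which test to order next, which is exactly the interactive loop the benchmark evaluates.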

Why human empathy still matters in the age of AI

A new international study finds that people place greater emotional value on empathy they believe comes from humans—even when the exact same response is generated by artificial intelligence.

Published in Nature Human Behaviour, the study involved over 6,000 participants across nine experiments.

The researchers, led by Prof. Anat Perry of the Hebrew University of Jerusalem and her Ph.D. student Matan Rubin, in collaboration with Prof. Amit Goldenberg of Harvard University and Prof. Desmond C. Ong of the University of Texas, tested whether people perceived empathy differently depending on whether it was labeled as coming from a human or from an AI chatbot.