
New soiling detection method based on drones, AI, image processing

“Compared with other traditional methods, the proposed [method] has lower computational complexity, faster operation speed, weak influence of light, and strong ability to locate dirt,” the research group said. “The improved path planning algorithm used in this study greatly improves the efficiency of UAV inspection, saves time and resources, reduces operation and maintenance costs, and improves the corresponding operation and maintenance level of photovoltaic power generation.”

The novel approach uses mathematical morphologies for image processing, such as image enhancement, sharpening, filtering, and closing operations. It also uses image histogram equalization and edge detection, among other methods, to locate soiled spots. For path optimization, it uses an improved version of the A* (A-star) algorithm.
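The core of a grid-based A* path planner can be sketched as follows. This is a minimal illustration, not the paper's improved variant: the 4-connected grid, unit step cost, and Manhattan heuristic are all assumptions made for the example.

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 2D grid; grid[r][c] == 1 marks an obstacle.

    The Manhattan-distance heuristic is admissible for 4-connected
    movement with unit step cost, so the returned path is shortest.
    """
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    # Priority queue of (f = g + h, g, node, path-so-far).
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable

# Example: route around a wall of obstacles in a 3x3 grid.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

The "improved" A* variants used for UAV inspection typically modify the heuristic or prune redundant waypoints; the skeleton above is the common baseline they start from.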

Quantitative Justice: Using Data Science for Good

By Ariana Mendible

For the past several years, I have been closely involved with the Institute for the Quantitative Study of Inclusion, Diversity and Equity (QSIDE). This nonprofit organizes events and facilitates research in quantitative justice, the application of data and mathematical sciences to quantify, analyze and address social injustice. It uses the community-based participatory action research model to connect like-minded scholars, community partners, and activists. Recently, QSIDE researchers met virtually in a Research Roundup to share our progress. Hearing all the incredible work that QSIDE has spawned and supported prompted me to reflect on the role that the group has played in my budding career and the ways in which the institute itself has grown since its founding in 2019.

Like many PhD candidates, I found my final year of graduate school rife with burnout and uncertainty about post-graduation plans. Add to this mix a global pandemic, social isolation, and confinement to the same one-bedroom dwelling for more than a year, and you get a stew of anxiety. I was approaching my mental limit on the research I had been conducting, somewhere at the intersection of data science and fluid dynamics. While the problem I had been working on for my thesis was interesting, I was ready for a major change. I couldn’t picture myself in the usual post-graduate tracks: a post-doc at an R1 institution or working for a Big Tech company. These careers felt hyper-competitive, a turn-off during a period of significant burnout. I also couldn’t see their direct positive impact, which felt acutely important in this time of global social disarray.

Efficiently improving the performance of noisy quantum computers

Samuele Ferracin1,2, Akel Hashim3,4, Jean-Loup Ville3, Ravi Naik3,4, Arnaud Carignan-Dugas1, Hammam Qassim1, Alexis Morvan3,4, David I. Santiago3,4, Irfan Siddiqi3,4,5, and Joel J. Wallman1,2

1 Keysight Technologies Canada, Kanata, ON K2K 2W5, Canada; 2 Department of Applied Mathematics, University of Waterloo, Waterloo, ON N2L 3G1, Canada; 3 Quantum Nanoelectronics Laboratory, Department of Physics, University of California, Berkeley, Berkeley, CA 94720, USA; 4 Applied Mathematics and Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA; 5 Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA


The real long-term dangers of AI

Read & tell me what you think 🙂

There is a rift between near and long-term perspectives on AI safety – one that has stirred controversy. Longtermists argue that we need to prioritise the well-being of people far into the future, perhaps at the expense of people alive today. But their critics have accused the Longtermists of obsessing over Terminator-style scenarios in concert with Big Tech to distract regulators from more pressing issues like data privacy. In this essay, Mark Bailey and Susan Schneider argue that we shouldn’t be fighting about the Terminator; we should be focusing on the harm to the mind itself – to our very freedom to think.

There has been a growing debate between near and long-term perspectives on AI safety – one that has stirred controversy. “Longtermists” have been accused of being co-opted by Big Tech and fixating on science-fiction Terminator-style scenarios to distract regulators from the real, more near-term issues, such as algorithmic bias and data privacy.

Longtermism is an ethical theory that requires us to consider the effects of today’s decisions on all of humanity’s potential futures. Taken to extremes, it implies that one should sacrifice the present wellbeing of humanity for the good of its potential futures. Many Longtermists believe humans will ultimately lose control of AI, as it will become “superintelligent”, outthinking humans in every domain – social acumen, mathematical abilities, strategic thinking, and more.

The Mathematics of Reliable Artificial Intelligence

By Gitta Kutyniok

The recent unprecedented success of foundation models like GPT-4 has heightened the general public’s awareness of artificial intelligence (AI) and inspired vivid discussion about its associated possibilities and threats. In March 2023, a group of technology leaders published an open letter that called for a public pause in AI development to allow time for the creation and implementation of shared safety protocols. Policymakers around the world have also responded to rapid advancements in AI technology with various regulatory efforts, including the European Union (EU) AI Act and the Hiroshima AI Process.

One of the current problems—and consequential dangers—of AI technology is its unreliability and subsequent lack of trustworthiness. In recent years, AI-based technologies have often encountered severe issues in terms of safety, security, privacy, and responsibility with respect to fairness and interpretability. Privacy violations, unfair decisions, unexplainable results, and accidents involving self-driving cars are all examples of concerning outcomes.

Scientists achieve first intercity quantum key distribution with deterministic single-photon source

Conventional encryption methods rely on complex mathematical algorithms and the limits of current computing power. However, with the rise of quantum computers, these methods are becoming increasingly vulnerable, necessitating quantum key distribution (QKD).

QKD is a technology that leverages the unique properties of quantum physics to secure data transmission. This method has been continuously optimized over the years, but establishing large networks has been challenging due to the limitations of existing quantum light sources.
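To illustrate the quantum property QKD leverages, the classic BB84 protocol can be simulated classically. This sketch is not the paper's deterministic-single-photon setup; the function name, seeding, and noiseless, eavesdropper-free channel are assumptions made purely for illustration.

```python
import random

def bb84_sift(n_bits, seed=0):
    """Classically simulate BB84 basis sifting over a noiseless channel.

    Alice encodes each random bit in a randomly chosen basis ('+' or 'x');
    Bob measures in his own random basis. Quantum mechanics guarantees
    that when the bases match, Bob recovers Alice's bit exactly, and when
    they differ, his outcome is a fair coin flip. Publicly comparing bases
    (never bits) lets them discard mismatched rounds, leaving a shared key.
    """
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bits = [
        a if ab == bb else rng.randint(0, 1)  # mismatch -> random outcome
        for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
    ]
    # Keep only the rounds where the bases agreed.
    return [
        (a, b)
        for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
        if ab == bb
    ]

sifted = bb84_sift(1000)
# Roughly half the rounds survive sifting, and every surviving pair
# agrees; an eavesdropper's measurements would introduce detectable
# disagreements, which is the security guarantee QKD rests on.
```

A real deployment replaces the simulated channel with single photons, and the quality of the photon source is exactly what limits network scale, which is why a deterministic source matters.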

In a new article published in Light: Science & Applications, a team of scientists in Germany has achieved the first intercity QKD experiment with a deterministic single-photon source, revolutionizing how we protect our confidential information from cyber threats.

The Biggest Problem in Mathematics Is Finally a Step Closer to Being Solved

Number theorists have been trying to prove a conjecture about the distribution of prime numbers for more than 160 years.

By Manon Bischoff

The Riemann hypothesis is the most important open question in number theory—if not all of mathematics. It has occupied experts for more than 160 years. And the problem appeared both in mathematician David Hilbert’s groundbreaking speech from 1900 and among the “Millennium Problems” formulated a century later. The person who solves it will win a million-dollar prize.
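The hypothesis asserts that every nontrivial zero of the Riemann zeta function lies on the critical line Re(s) = 1/2. A quick numerical illustration (not part of the article; the helper below is an ad-hoc approximation via the Dirichlet eta series, with the term count chosen for rough accuracy only) shows |ζ(1/2 + it)| dipping toward zero near t ≈ 14.1347, the first nontrivial zero:

```python
def zeta_critical(t, terms=100_000):
    """Approximate zeta(1/2 + it) via the Dirichlet eta series.

    eta(s) = sum_{n>=1} (-1)^(n-1) / n^s converges for Re(s) > 0, and
    zeta(s) = eta(s) / (1 - 2^(1-s)). The truncated alternating sum
    oscillates, so averaging the last two partial sums damps the tail.
    """
    s = complex(0.5, t)
    partial = 0
    prev = 0
    for n in range(1, terms + 1):
        prev = partial
        partial += (-1) ** (n - 1) / n ** s
    eta = (partial + prev) / 2  # average of consecutive partial sums
    return eta / (1 - 2 ** (1 - s))

# |zeta| is small near the first nontrivial zero on the critical
# line and clearly nonzero away from it.
print(abs(zeta_critical(14.1347)))  # small
print(abs(zeta_critical(10.0)))     # order 1
```

Verifying zeros one by one like this (billions have been checked computationally) is evidence, not proof; the hypothesis demands that *all* nontrivial zeros lie on the line.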
