Archive for the ‘mathematics’ category

May 18, 2024

A New Dimension of Quantum Materials: Topological Phonons Discovered in Crystal Lattices

Posted by in categories: mathematics, particle physics, quantum physics, space

An international research team has shown that phonons, the quantum particles behind material vibrations, can be classified using topology, much like electronic bands in materials. This breakthrough could lead to the development of new materials with unique thermal, electrical, and mechanical properties, enhancing our understanding and manipulation of solid-state physics.

An international group of researchers has found that quantum particles, which play a key role in the vibrations of materials affecting their stability and other characteristics, can be classified through topology. Known as phonons, these particles represent the collective vibrational patterns of atoms within a crystal structure. They create disturbances that spread like waves to nearby atoms. Phonons are crucial for several properties of solids, such as thermal and electrical conductivity, neutron scattering, and quantum states including charge density waves and superconductivity.

The spectrum of phonons—essentially the energy as a function of momentum—and their wave functions, which represent their probability distribution in real space, can be computed using ab initio first-principles codes. However, these calculations have so far lacked a unifying principle. For the quantum behavior of electrons, topology, a branch of mathematics, has successfully classified the electronic bands in materials, showing that materials which might seem very different can in fact be topologically equivalent.
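The "spectrum as energy versus momentum" idea can be seen in miniature, far more simply than in the ab initio codes the article mentions, with the textbook one-dimensional monatomic chain, whose dispersion has the closed form ω(k) = 2√(K/m)·|sin(ka/2)|. A minimal sketch, with spring constant K, mass m, and lattice spacing a all set to illustrative values:

```python
import numpy as np

def monatomic_chain_dispersion(k, K=1.0, m=1.0, a=1.0):
    """Phonon angular frequency for a 1D monatomic chain:
    omega(k) = 2*sqrt(K/m)*|sin(k*a/2)|."""
    return 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))

# Sample the first Brillouin zone, k in [-pi/a, pi/a] (here a = 1)
k = np.linspace(-np.pi, np.pi, 201)
omega = monatomic_chain_dispersion(k)

print(omega.max())   # maximum frequency 2*sqrt(K/m), reached at the zone boundary
print(omega[100])    # omega(0) = 0: the acoustic branch is gapless
```

Topological classification concerns how such bands connect and wind across the Brillouin zone, which requires more than one band; this toy model only illustrates what a phonon spectrum is.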

May 18, 2024

A History of Western Philosophy

Posted by in categories: materials, mathematics

“Mathematics, rightly viewed, possesses not only truth, but supreme beauty — a beauty cold and austere, like that of sculpture.”

- Bertrand Russell (1872–1970), A History of Western Philosophy

https://mathshistory.st-andrews.ac.uk/Biographies/Russell/

Continue reading “A History of Western Philosophy” »

May 17, 2024

Tracing the history of perturbative expansion in quantum field theory

Posted by in categories: mathematics, particle physics, quantum physics

Perturbative expansion is a valuable mathematical technique which is widely used to break down descriptions of complex quantum systems into simpler, more manageable parts. Perhaps most importantly, it has enabled the development of quantum field theory (QFT): a theoretical framework that combines principles from classical, quantum, and relativistic physics, and serves as the foundation of the Standard Model of particle physics.
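As a toy illustration of what a perturbative expansion does (not the QFT machinery itself), the sketch below compares second-order Rayleigh–Schrödinger perturbation theory with exact diagonalization for a small matrix Hamiltonian H = H0 + λV; all numerical values are made up for illustration:

```python
import numpy as np

# Toy Hamiltonian: H = H0 + lam*V, with H0 diagonal (known spectrum)
# and V a small Hermitian perturbation.
H0 = np.diag([0.0, 2.0, 5.0])
V = np.array([[0.0, 0.3, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
lam = 0.1

E0 = H0[0, 0]                 # unperturbed ground-state energy
e1 = V[0, 0]                  # first-order correction <0|V|0>
# second-order correction: sum_n |<n|V|0>|^2 / (E0 - En)
e2 = sum(V[n, 0] ** 2 / (E0 - H0[n, n]) for n in range(1, 3))

E_pert = E0 + lam * e1 + lam ** 2 * e2
E_exact = np.linalg.eigvalsh(H0 + lam * V)[0]

print(E_pert, E_exact)  # the two agree up to terms of order lam**3
```

The truncated series approximates the exact answer order by order in λ; in QFT the same idea is applied to interacting fields, where each order corresponds to a set of Feynman diagrams.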

May 17, 2024

Artificial Intelligence Will Defeat CAPTCHA — How Will We Prove We’re Human Then?

Posted by in categories: information science, internet, mathematics, robotics/AI

If you use the web for more than just browsing (that’s pretty much everyone), chances are you’ve had your fair share of “CAPTCHA rage,” the frustration stemming from trying to discern a marginally legible string of letters aimed at verifying that you are a human. CAPTCHA, which stands for “Completely Automated Public Turing test to tell Computers and Humans Apart,” was introduced to the Internet a decade ago and has seen widespread adoption in various forms — whether using letters, sounds, math equations, or images — even as complaints about their use continue.

A large-scale Stanford study a few years ago concluded that “CAPTCHAs are often difficult for humans.” It has also been reported that around 1 in 5 visitors will leave a website rather than complete a CAPTCHA.

Continue reading “Artificial Intelligence Will Defeat CAPTCHA — How Will We Prove We’re Human Then?” »

May 16, 2024

Harmonics of Learning: A Mathematical Theory for the Rise of Fourier Features in Learning Systems Like Neural Networks

Posted by in categories: biological, mathematics, robotics/AI

Artificial neural networks (ANNs) show a remarkable pattern when trained on natural data: irrespective of the exact initialization, dataset, or training objective, models trained on the same data domain converge to similar learned patterns. For example, across different image models, the initial layer weights tend to converge to Gabor filters and color-contrast detectors. Many such features appear to be universal, spanning both artificial and biological systems; the same features are observed in the visual cortex. These findings are well established empirically in the machine-learning interpretability literature, but they lack theoretical explanations.

Localized versions of canonical 2D Fourier basis functions, e.g., Gabor filters or wavelets, are the most commonly observed universal features in image models. When vision models are trained on tasks such as efficient coding, classification, temporal coherence, or next-step prediction, these Fourier features emerge in the models' initial layers. In addition, non-localized Fourier features have been observed in networks trained on tasks where cyclic wraparound is allowed, for example modular arithmetic, more general group compositions, or invariance to the group of cyclic translations.

Researchers from KTH, the Redwood Center for Theoretical Neuroscience, and UC Santa Barbara introduced a mathematical explanation for the rise of Fourier features in learning systems like neural networks. Fourier features arise from downstream invariance: the learner becomes insensitive to certain transformations, e.g., planar translation or rotation. The team derived theoretical guarantees about Fourier features in invariant learners that apply across different machine-learning models. The derivation rests on the idea that invariance is a fundamental bias injected, implicitly and sometimes explicitly, into learning systems by the symmetries of natural data.
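One concrete version of the cyclic-translation case is elementary linear algebra, not the paper's derivation: any linear map that commutes with cyclic shifts is a circulant matrix, and every circulant matrix is diagonalized by the discrete Fourier transform, so Fourier modes are forced to appear as its eigenvectors. A minimal NumPy sketch:

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)

# Any linear operator that commutes with cyclic translation is circulant:
# row i of the matrix is the first row rolled by i positions.
c = rng.standard_normal(n)
C = np.array([np.roll(c, i) for i in range(n)])

# The unitary DFT matrix diagonalizes every circulant matrix,
# so its columns (Fourier modes) are shared eigenvectors of all of them.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
D = F @ C @ F.conj().T

off_diag = D - np.diag(np.diag(D))
print(np.max(np.abs(off_diag)))  # ~0: C is diagonal in the Fourier basis
```

This is the group-theoretic mechanism in its simplest setting: symmetry under a transformation group singles out the Fourier basis, independently of the particular weights.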

May 15, 2024

Why mathematics is set to be revolutionized by AI

Posted by in categories: mathematics, robotics/AI

Cheap data and the absence of coincidences make maths an ideal testing ground for AI-assisted discovery — but only humans will be able to tell good conjectures from bad ones.

May 15, 2024

AI-powered tutor Khanmigo by Khan Academy: Your 24/7 homework helper

Posted by in categories: mathematics, robotics/AI

Did you hear the news? OpenAI’s newest model can reason across audio, vision, and text in real time.

How does GPT-4o do with math tutoring? 🤔

Sal and his son test it out on a Khan Academy math problem.

Continue reading “AI-powered tutor Khanmigo by Khan Academy: Your 24/7 homework helper” »

May 14, 2024

OpenAI GPT-4o math with Sal and Imran Khan from Khan Academy

Posted by in categories: mathematics, robotics/AI

This is “OpenAI GPT-4o math with Sal and Imran Khan from Khan Academy” by OpenAI on Vimeo.

May 11, 2024

Scientists uncover quantum-inspired vulnerabilities in neural networks: the role of conjugate variables in system attacks

Posted by in categories: mathematics, quantum physics, robotics/AI

In a recent study merging the fields of quantum physics and computer science, Dr. Jun-Jie Zhang and Prof. Deyu Meng have explored the vulnerabilities of neural networks through the lens of the uncertainty principle in physics. Their work, published in the National Science Review, draws a parallel between the susceptibility of neural networks to targeted attacks and the limitations imposed by the uncertainty principle—a well-established theory in quantum physics that highlights the challenges of measuring certain pairs of properties simultaneously.

The researchers’ quantum-inspired analysis of neural network vulnerabilities suggests that adversarial attacks exploit a trade-off between the precision of input features and their computed gradients. “When considering the architecture of deep neural networks, which involve a loss function for learning, we can always define a conjugate variable for the inputs by determining the gradient of the loss function with respect to those inputs,” states Dr. Jun-Jie Zhang, whose expertise lies in mathematical physics, in the paper.
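A minimal sketch of the "conjugate variable" the quote defines, with a plain logistic-regression loss standing in for a deep network (all weights and inputs below are made-up illustrative values), followed by the standard fast-gradient-sign perturbation step used in adversarial-attack demonstrations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, y):
    """Logistic loss for a single example with label y in {-1, +1}."""
    return np.log1p(np.exp(-y * (w @ x)))

def conjugate_variable(x, w, y):
    """Gradient of the loss with respect to the input x, i.e. the
    'conjugate variable' defined for the inputs."""
    z = w @ x
    return -y * sigmoid(-y * z) * w

# Toy linear classifier
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.4])
y = 1.0

g = conjugate_variable(x, w, y)
eps = 0.1
x_adv = x + eps * np.sign(g)   # fast-gradient-sign step along the input gradient

print(loss(x, w, y), loss(x_adv, w, y))  # the perturbed input has higher loss
```

The same gradient-with-respect-to-inputs construction carries over to deep networks via backpropagation; the uncertainty-principle analogy in the paper concerns how precisely the input and this conjugate gradient can be constrained simultaneously.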

The researchers hope this work will prompt a reevaluation of the assumed robustness of neural networks and encourage a deeper understanding of their limitations. By subjecting a neural network model to adversarial attacks, Dr. Zhang and Prof. Meng observed a trade-off between the model’s accuracy and its resilience.

May 11, 2024

Interview with Gabriele Scheler: Neuro AI. Will it be the future?

Posted by in categories: mathematics, neuroscience, robotics/AI

Here is an interview about the current AI and generative AI waves and their relation to neuroscience. We propose solutions based on new technology from neuroAI, which addresses the human capacities for reasoning, thought, logic, mathematics, proof, and so on; these capacities are poorly modeled by data analysis on its own. Some of our work, including collaborations with other scholars, has been published, while more is to come in a spin-off setting.
