Archive for the ‘mathematics’ category: Page 23

Apr 3, 2024

Quantum Leap: Redefining Complex Problem-Solving

Posted in categories: computing, mathematics, particle physics, quantum physics

The traveling salesman problem is considered a prime example of a combinatorial optimization problem. Now a Berlin team led by theoretical physicist Prof. Dr. Jens Eisert of Freie Universität Berlin and HZB has shown that a certain class of such problems can actually be solved better and much faster with quantum computers than with conventional methods.
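The release doesn’t spell out why such problems are hard, but the combinatorial explosion is easy to see in code. Below is a minimal classical brute-force solver, a baseline rather than the team’s quantum method; the distance matrix and function name are ours:

```python
# A minimal brute-force TSP solver, illustrating why the problem is hard:
# checking every tour costs (n-1)! route evaluations, which explodes quickly.
from itertools import permutations

def tsp_brute_force(dist):
    """dist[i][j] is the travel cost between cities i and j."""
    n = len(dist)
    cities = range(1, n)  # fix city 0 as the start to avoid counting rotations
    best_tour, best_cost = None, float("inf")
    for perm in permutations(cities):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Example with 4 cities. Already at ~20 cities, 19! candidate tours make
# this approach hopeless; that growth is what quantum speedups target.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(tsp_brute_force(dist))  # -> ((0, 1, 3, 2, 0), 18)
```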

Quantum computers use so-called qubits, which, unlike the bits in conventional logic circuits, are not restricted to being either zero or one but can exist in superpositions of both. These qubits are realized with highly cooled atoms, ions, or superconducting circuits, and it is still physically very demanding to build a quantum computer with many qubits. However, mathematical methods can already be used to explore what fault-tolerant quantum computers could achieve in the future.
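As a rough illustration of what superposition means here (a toy sketch of ours, not a simulation of real hardware), a qubit can be modeled as a normalized complex vector whose squared amplitudes give the probabilities of measuring 0 or 1:

```python
# Illustrative sketch: a qubit is not "a value between 0 and 1" but a
# superposition -- a normalized complex vector whose squared amplitudes
# give the probabilities of measuring 0 or 1.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Equal superposition (the state a Hadamard gate produces from |0>).
psi = (ket0 + ket1) / np.sqrt(2)

probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5] -- a 50/50 chance of reading out 0 or 1
```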

“There are a lot of myths about it, and sometimes a certain amount of hot air and hype. But we have approached the issue rigorously, using mathematical methods, and delivered solid results on the subject. Above all, we have clarified in what sense there can be any advantages at all,” says Prof. Dr. Jens Eisert, who heads a joint research group at Freie Universität Berlin and Helmholtz-Zentrum Berlin.

Mar 31, 2024

RLHF: Reinforcement Learning from Human Feedback

Posted in categories: mathematics, robotics/AI

Despite being almost a year old, this blog post by Chip Huyen is still a great read for getting into fine-tuning LLMs.

This article covers everything you need to know about Reinforcement Learning from Human Feedback (RLHF).

Continue reading “RLHF: Reinforcement Learning from Human Feedback” »
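As a taste of the mechanics the article walks through: the reward model at the center of RLHF is typically trained with a pairwise loss on human preference data, pushing the reward of the preferred response above that of the rejected one. A minimal sketch, assuming a PyTorch setup; the function name and toy numbers are ours:

```python
# Hedged sketch of the pairwise reward-model objective used in RLHF:
# given a human preference between two responses, maximize the margin
# between the preferred and rejected rewards via -log(sigmoid(margin)).
import torch
import torch.nn.functional as F

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """r_chosen/r_rejected: scalar reward scores assigned by the reward
    model to the human-preferred and human-rejected responses."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy example: reward scores for a batch of 3 preference pairs.
r_chosen = torch.tensor([1.2, 0.3, 2.0])
r_rejected = torch.tensor([0.4, 0.9, -1.0])
print(reward_model_loss(r_chosen, r_rejected))  # scalar loss to minimize
```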

Mar 30, 2024

What is quantum cognition, and how is it applied to psychology?

Posted in categories: computing, mathematics, neuroscience, quantum physics

Quantum cognition is a new research program that uses mathematical principles from quantum theory as a framework to explain human cognition, including judgment and decision making, concepts, reasoning, memory, and perception. This research is not concerned with whether the brain is a quantum computer. Instead, it uses quantum theory as a fresh conceptual framework and a coherent set of formal tools for explaining puzzling empirical findings in psychology. In this introduction, we focus on two quantum principles as examples to show why quantum cognition is an appealing new theoretical direction for psychology. The first is complementarity, which suggests that some psychological measures have to be made sequentially and that the context generated by the first measure can influence responses to the next one, producing measurement order effects. The second is superposition, which suggests that some psychological states cannot be defined with respect to definite values; instead, all possible values within the superposition have some potential for being expressed. We present evidence showing how these two principles work together to provide a coherent explanation for many divergent and puzzling phenomena in psychology.
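As a toy illustration of the complementarity idea (ours, not from the paper), model two incompatible questions as non-commuting projectors; the joint probability of answering “yes” to both then depends on which question is asked first:

```python
# Illustrative sketch: two incompatible "questions" modeled as
# non-commuting projectors. Asking A then B yields a different joint
# probability than B then A -- the order effect that complementarity predicts.
import numpy as np

P_A = np.array([[1.0, 0.0], [0.0, 0.0]])          # "yes" to question A
P_B = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])    # "yes" to question B

t = np.pi / 6
psi = np.array([np.cos(t), np.sin(t)])            # initial belief state

p_A_then_B = np.linalg.norm(P_B @ P_A @ psi) ** 2  # ~0.375
p_B_then_A = np.linalg.norm(P_A @ P_B @ psi) ** 2  # ~0.467
print(p_A_then_B, p_B_then_A)                      # order matters
```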

Mar 25, 2024

Microsoft’s Small Language Model Outperforms Larger Models on Standardized Math Tests

Posted in categories: education, mathematics, robotics/AI

A small team of AI researchers at Microsoft reports that the company’s Orca-Math small language model outperforms other, larger models on standardized math tests. The group has published a paper on the arXiv preprint server describing their testing of Orca-Math on the Grade School Math 8K (GSM8K) benchmark and how it fared compared to well-known LLMs.

Many popular LLMs such as ChatGPT are known for their impressive conversational skills; less well known is that most of them can also solve math word problems. AI researchers have tested their abilities at such tasks by pitting them against GSM8K, a dataset of 8,500 grade-school math word problems, each requiring multistep reasoning and paired with its correct answer.
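For context on scoring: GSM8K reference answers end with a line of the form “#### <number>”, and accuracy is usually computed by exact match on the final number a model produces. A minimal sketch of such a scorer, our illustration rather than Microsoft’s evaluation harness:

```python
# Hedged sketch of how GSM8K accuracy is typically scored: each reference
# answer ends in "#### <number>", and the model's final number is compared
# to it by exact match.
import re

def final_number(text: str) -> str | None:
    """Pull the last number out of a solution string."""
    nums = re.findall(r"-?\d[\d,]*\.?\d*", text.replace("$", ""))
    return nums[-1].replace(",", "") if nums else None

def gsm8k_correct(model_output: str, reference_answer: str) -> bool:
    gold = reference_answer.split("####")[-1].strip().replace(",", "")
    return final_number(model_output) == gold

ref = "Natalia sold 48 clips in April and half as many in May ... #### 72"
print(gsm8k_correct("48 + 24 = 72, so she sold 72 clips.", ref))  # True
```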

In this new study, the research team at Microsoft tested Orca-Math, an AI application designed specifically to tackle math word problems and developed by another team at Microsoft, and compared its results with those of larger AI models.

Mar 24, 2024

God’s Number Revealed: 20 Moves Proven Enough to Solve Any Rubik’s Cube Position

Posted in categories: alien life, computing, information science, mathematics

This article dates from 2010.


The world has waited with bated breath for three decades, and now finally a group of academics, engineers, and math geeks has discovered the number that explains life, the universe, and everything. That number is 20, and it’s the maximum number of moves it takes to solve a Rubik’s Cube.

Known as God’s Number, the magic number required about 35 CPU-years and a good deal of man-hours to pin down. Why? Because there are 43,252,003,274,489,856,000 possible positions of the cube, and the computer algorithm that finally cracked God’s Algorithm had to account for them all. (The terms God’s Number/Algorithm derive from the fact that if God were solving a Cube, he/she/it would do it in the most efficient way possible. The Creator did not endorse this study, and could not be reached for comment.)

Continue reading “God’s Number Revealed: 20 Moves Proven Enough to Solve Any Rubik’s Cube Position” »
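That 43-quintillion figure follows from a standard counting argument over corner and edge arrangements and orientations, halved for a parity constraint; the computation fits in a few lines:

```python
# The number of legal Rubik's Cube configurations: 8 corners (positions and
# twists), 12 edges (positions and flips), divided by 2 for the permutation
# parity constraint.
from math import factorial

positions = (factorial(8) * 3**7 * factorial(12) * 2**11) // 2
print(f"{positions:,}")  # 43,252,003,274,489,856,000 (~4.3 * 10**19)
```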

Mar 24, 2024

Scalable Optimal Transport Methods in Machine Learning: A Contemporary Survey

Posted in categories: mathematics, robotics/AI

Nice figures in this newly published survey on Scalable Optimal Transport, with 200+ references.

Optimal Transport (OT) is a mathematical framework that first emerged in the eighteenth century and has led to a plethora of methods for answering many theoretical and applied questions. The last decade has witnessed remarkable contributions of this classical optimization problem to machine learning. This paper is about where and how optimal transport is used in machine learning, with a focus on the question of scalable optimal transport. We provide a comprehensive survey of optimal transport while ensuring an accessible presentation, as permitted by the nature of the topic and the context. First, we explain the optimal transport background and introduce different flavors (i.e., mathematical formulations), properties, and notable applications.
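Much of the “scalable” story such surveys cover builds on entropic regularization, where the Sinkhorn algorithm replaces the OT linear program with fast matrix scaling. A minimal sketch under that framing; the toy histograms and cost matrix are ours:

```python
# Minimal Sinkhorn iteration for entropy-regularized optimal transport:
# alternately rescale rows and columns of a Gibbs kernel until both
# marginals of the transport plan match the target histograms.
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=500):
    """a, b: source/target histograms (each sums to 1); C: cost matrix.
    Returns the entropy-regularized transport plan."""
    K = np.exp(-C / eps)            # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)           # scale columns to match marginal b
        u = a / (K @ v)             # scale rows to match marginal a
    return u[:, None] * K * v[None, :]

# Toy example: move mass between two 3-point distributions on a line.
x = np.array([0.0, 1.0, 2.0])
C = (x[:, None] - x[None, :]) ** 2  # squared-distance cost
a = np.array([0.5, 0.3, 0.2])
b = np.array([0.2, 0.3, 0.5])
P = sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))  # ~a and ~b: marginals are matched
```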

Mar 24, 2024

Emmy Noether’s revolutionary idea explained for anyone, from kindergarteners to PhDs

Posted in categories: mathematics, physics

A century ago, Emmy Noether published a theorem that would change mathematics and physics. Here’s an all-ages guided tour through this groundbreaking idea.
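For readers at the PhD end of that range, here is a common textbook statement of the result (our summary, not taken from the article):

```latex
% Noether's first theorem, stated informally: every continuous symmetry of
% the action corresponds to a conserved quantity. If the Lagrangian
% L(q, \dot{q}, t) is invariant under the infinitesimal transformation
% q \to q + \epsilon\,\delta q, then
\[
  Q = \frac{\partial L}{\partial \dot{q}}\,\delta q
  \qquad\text{satisfies}\qquad
  \frac{dQ}{dt} = 0
\]
% along solutions of the equations of motion. Time-translation symmetry
% gives energy conservation; spatial translation gives momentum.
```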

Mar 22, 2024

Freezing Point Phenomena: Unlocking the Strange Secrets of Ice Nucleation

Posted in categories: geoengineering, mathematics

Research unveils a mathematical model for ice nucleation, showing how surface angles affect water’s freezing point, with applications in snowmaking and cloud seeding.

From abstract-looking cloud formations to the roar of snow machines on ski slopes, the transformation of liquid water into solid ice touches many facets of life. Water’s freezing point is generally accepted to be 32 degrees Fahrenheit, but that is due to ice nucleation: impurities in everyday water give ice crystals a place to start forming, so freezing begins at this temperature, while purified water can supercool well below it. Now, researchers unveil a theoretical model that shows how specific structural details on surfaces can influence water’s freezing point.

Continue reading “Freezing Point Phenomena: Unlocking the Strange Secrets of Ice Nucleation” »
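The paper’s contribution is a model of how surface geometry shifts nucleation, and the classical flat-surface version of that idea is compact: in classical nucleation theory, a surface with contact angle θ reduces the free-energy barrier to forming an ice nucleus by a purely geometric factor. A sketch of that textbook factor, not the authors’ new model:

```python
# Illustration (classical nucleation theory, not the paper's new model):
# a surface lowers the free-energy barrier to forming an ice nucleus by a
# geometric factor f(theta) depending on the contact angle, which is why
# surface structure can shift the temperature where freezing actually starts.
import numpy as np

def barrier_reduction(theta_deg):
    """f(theta) = (2 + cos t)(1 - cos t)^2 / 4, the fraction of the
    homogeneous nucleation barrier remaining on a flat surface."""
    t = np.radians(theta_deg)
    return (2 + np.cos(t)) * (1 - np.cos(t)) ** 2 / 4

for theta in (30, 60, 90, 180):
    print(theta, round(barrier_reduction(theta), 3))
# 30 -> ~0.013, 60 -> ~0.156, 90 -> 0.5, 180 -> 1.0 (no help from the surface)
```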

Mar 21, 2024

How Chain-of-Thought Reasoning Helps Neural Networks Compute

Posted in categories: mathematics, robotics/AI

“They remove some of the magic,” said Dimitris Papailiopoulos, a machine learning researcher at the University of Wisconsin, Madison. “That’s a good thing.”

Training Transformers

Large language models are built around mathematical structures called artificial neural networks. The many “neurons” inside these networks perform simple mathematical operations on long strings of numbers representing individual words, transmuting each word that passes through the network into another. The details of this mathematical alchemy depend on another set of numbers called the network’s parameters, which quantify the strength of the connections between neurons.
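A toy version of that picture (our illustration, not from the article): one layer of neurons transforms a word’s vector through a weight matrix of parameters and a simple nonlinearity.

```python
# One "layer" of neurons turns each word's vector into a new vector via a
# weight matrix (the parameters) and a simple nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
d = 4                                  # embedding size per word
W = rng.normal(size=(d, d))            # parameters: connection strengths
b = np.zeros(d)

def layer(x):
    return np.maximum(0.0, W @ x + b)  # each neuron: weighted sum + ReLU

word_vec = rng.normal(size=d)          # a word, represented as numbers
print(layer(word_vec))                 # the word, "transmuted" by the layer
```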

Mar 21, 2024

Researchers gave AI an ‘inner monologue’ and it massively improved its performance

Posted in categories: mathematics, robotics/AI

Scientists trained an AI system to think before speaking with a technique called Quiet-STaR. The inner monologue improved common-sense reasoning and doubled math performance.
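A heavily simplified sketch of the inference-time idea: generate a hidden rationale, then answer conditioned on it. This is our illustration, not the paper’s training procedure, and `generate` is a hypothetical stand-in for an LLM completion call.

```python
# Hedged sketch of the "inner monologue" idea: a simplification of
# Quiet-STaR's behavior, not the paper's actual training procedure.

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a language-model completion API."""
    # A canned response so the sketch runs end to end.
    return "4" if "Reasoning:" in prompt else "2 and 2 make 4."

def answer_with_thought(question: str) -> str:
    # Step 1: think silently -- produce a rationale the user never sees.
    thought = generate(f"Question: {question}\nThink step by step:")
    # Step 2: answer, conditioned on the question and the hidden rationale.
    return generate(f"Question: {question}\nReasoning: {thought}\nAnswer:")

print(answer_with_thought("What is 2 + 2?"))  # -> "4"
```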
