Archive for the ‘information science’ category: Page 54

Oct 10, 2023

Possible Quantum Decryption Breakthrough

Posted in categories: information science, quantum physics

Researchers show that n-bit integers can be factorized by independently running, many times, a quantum circuit with orders of magnitude fewer qubits, and then combining the outcomes with polynomial-time classical post-processing. The correctness of the algorithm relies on a number-theoretic heuristic assumption reminiscent of those used in subexponential classical factorization algorithms. It is currently unclear whether the algorithm can lead to improved physical implementations in practice.

Shor’s celebrated algorithm allows one to factorize n-bit integers using a quantum circuit of size O(n^2). For factoring to be feasible in practice, however, it is desirable to reduce this number further. Indeed, all else being equal, the fewer quantum gates a circuit contains, the likelier it is that it can be implemented before noise and decoherence destroy the quantum effects.

The new algorithm can be thought of as a multidimensional analogue of Shor’s algorithm. At the core of the algorithm is a quantum procedure.
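To make the “quantum circuit plus classical post-processing” structure concrete, here is a minimal sketch of the classical post-processing step in Shor’s original algorithm only; the new multidimensional variant uses different (lattice-based) post-processing, and the function below is not it. Given a base a coprime to N and the period r of a^x mod N reported by the quantum subroutine, a nontrivial factor is extracted with a few gcd computations:

```python
from math import gcd

def factor_from_period(N: int, a: int, r: int):
    """Classical post-processing step of Shor's algorithm (sketch only).

    Given a base `a` coprime to N and the period r of a^x mod N, as
    reported by the quantum subroutine, try to recover a nontrivial
    factor of N. Returns None if this (a, r) pair happens to be unlucky.
    """
    if r % 2 != 0:
        return None                      # need an even period
    half = pow(a, r // 2, N)
    if half == N - 1:
        return None                      # a^(r/2) = -1 (mod N) yields only trivial factors
    for candidate in (gcd(half - 1, N), gcd(half + 1, N)):
        if 1 < candidate < N:
            return candidate
    return None

# Toy check: the period of 7^x mod 15 is 4, which reveals the factor 3 (or 5).
print(factor_from_period(15, 7, 4))
```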

Oct 9, 2023

Welcome to the AI gym staffed by virtual trainers

Posted in categories: biotech/medical, food, information science, media & arts, mobile phones, robotics/AI

Each member works out within a designated station facing wall-to-wall LED screens. These tall screens mask sensors that track both the motions of the exerciser and the gym’s specially built equipment, including dumbbells, medicine balls, and skipping ropes, using a combination of algorithms and machine-learning models.

Once members arrive for a workout, they’re given the opportunity to pick their AI coach through the gym’s smartphone app. The choice depends on whether they feel more motivated by a male or female voice and a stricter, more cheerful, or laid-back demeanor, although they can switch their coach at any point. The trainers’ audio advice is delivered over headphones and accompanied by the member’s choice of music, such as rock or country.

Although each class at the Las Colinas studio is currently observed by a fitness professional, that supervisor doesn’t need to be a trainer, says Brandon Bean, cofounder of Lumin Fitness. “We liken it to being more like an airline attendant than an actual coach,” he says. “You want someone there if something goes wrong, but the AI trainer is the one giving form feedback, doing the motivation, and explaining how to do the movements.”

Oct 9, 2023

Google DeepMind Researchers Introduce Promptbreeder: A Self-Referential and Self-Improving AI System that can Automatically Evolve Effective Domain-Specific Prompts in a Given Domain

Posted in categories: information science, robotics/AI

Large Language Models (LLMs) have gained a lot of attention for their human-imitating abilities. These models can answer questions, generate content, and summarize long passages of text. Prompts are essential for improving the performance of LLMs such as GPT-3.5 and GPT-4: the way prompts are written can have a big impact on an LLM’s abilities across a variety of areas, including reasoning, multimodal processing, and tool use. Hand-designed prompting strategies have shown promise in tasks like model distillation and agent behavior simulation.

The manual engineering of prompting approaches raises the question of whether the procedure can be automated. Automatic Prompt Engineer (APE) attempted to address this by producing a set of prompts from input-output examples in a dataset, but it suffered diminishing returns in prompt quality. To overcome this, researchers have proposed a method based on a diversity-maintaining evolutionary algorithm for the self-referential self-improvement of prompts for LLMs.

Just as a neural network can change its weight matrix to improve performance, an LLM can alter its prompts to improve its capabilities. By this analogy, LLMs could be designed to enhance both their own capabilities and the processes by which they enhance them, enabling AI to keep improving indefinitely. Building on these ideas, a team of researchers from Google DeepMind has introduced Promptbreeder (PB), a technique that lets LLMs improve themselves in a self-referential manner.

Oct 9, 2023

Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution (Paper Explained)

Posted in categories: engineering, evolution, information science


Promptbreeder is a self-improving self-referential system for automated prompt engineering. Give it a task description and a dataset, and it will automatically come up with appropriate prompts for the task. This is achieved by an evolutionary algorithm where not only the prompts, but also the mutation-prompts are improved over time in a population-based, diversity-focused approach.
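To make the population-based idea concrete, here is a minimal, hypothetical sketch of such a loop. It is not the paper’s implementation: `llm` and `score` are assumed placeholder callables supplied by the caller, and the mutation operators are heavily simplified. Each unit of evolution pairs a task prompt with a mutation-prompt, and the mutation-prompt itself is occasionally rewritten, which is what makes the process self-referential.

```python
import random

def evolve_prompts(llm, score, task_desc, dataset, pop_size=8, generations=20):
    """Toy sketch of a Promptbreeder-style loop (not the paper's code).

    `llm(text) -> str` and `score(prompt, dataset) -> float` are assumed
    placeholder callables supplied by the caller.
    """
    # Each unit of evolution is a (task_prompt, mutation_prompt) pair.
    population = [
        (llm(f"Write an instruction for solving this task: {task_desc}"),
         llm("Write a hint describing how to improve an instruction."))
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        # Binary tournament: pick two units, overwrite the loser with a mutant of the winner.
        i, j = random.sample(range(pop_size), 2)
        winner, loser = sorted((i, j), key=lambda k: score(population[k][0], dataset), reverse=True)
        task_prompt, mutation_prompt = population[winner]
        # First-order mutation: apply the mutation-prompt to the task prompt.
        new_task = llm(f"{mutation_prompt}\nINSTRUCTION: {task_prompt}\nNew instruction:")
        # Hyper-mutation: occasionally rewrite the mutation-prompt itself (self-referential step).
        new_mutation = (llm(f"Improve this prompt-improving hint: {mutation_prompt}")
                        if random.random() < 0.3 else mutation_prompt)
        population[loser] = (new_task, new_mutation)
    # Return the best (task_prompt, mutation_prompt) pair found.
    return max(population, key=lambda unit: score(unit[0], dataset))
```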


Oct 8, 2023

Meta-Learning Machines in a Single Lifelong Trial

Posted in categories: information science, physics, robotics/AI

The most widely used machine learning algorithms were designed by humans and thus are hindered by our cognitive biases and limitations. Can we also construct meta-learning algorithms that can learn better learning algorithms so that our self-improving AIs have no limits other than those inherited from computability and physics? This question has been a main driver of my research since I wrote a thesis on it in 1987. In the past decade, it has become a driver of many other people’s research as well. Here I summarize our work starting in 1994 on meta-reinforcement learning with self-modifying policies in a single lifelong trial, and — since 2003 — mathematically optimal meta-learning through the self-referential Gödel Machine. This talk was previously presented at meta-learning workshops at ICML 2020 and NeurIPS 2021. Many additional publications on meta-learning can be found at https://people.idsia.ch/~juergen/metalearning.html.

Jürgen Schmidhuber.
Director, AI Initiative, KAUST
Scientific Director of the Swiss AI Lab IDSIA
Co-Founder & Chief Scientist, NNAISENSE
http://www.idsia.ch/~juergen/blog.html.


Oct 8, 2023

AI: Why companies need to build algorithmic governance ahead of the law

Posted in categories: governance, information science, law, robotics/AI

Algorithmic governance covers the rules and practices for the construction and use of algorithms embedded in AI technologies. But how should these be applied?

Oct 7, 2023

Paralyzed woman able to speak again, thanks to brain-avatar interface

Posted in categories: biotech/medical, information science, neuroscience

Speech BCIs that use brain implants and algorithms to translate brain signals into text are changing the lives of people with paralysis.

Oct 7, 2023

AI predicts 70% of earthquakes a week before they occur

Posted in categories: information science, robotics/AI

The system only flagged eight false warnings and missed one earthquake.

Achieving high precision and accuracy in earthquake prediction remains a key scientific challenge, and artificial intelligence (AI) has been investigated as a way to enhance our capabilities in this crucial area.

This is because AI can analyze large datasets of seismic activity and identify patterns or anomalies that human analysts might miss. Machine learning algorithms can thus help researchers understand earthquake patterns better.
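As a purely illustrative example of this kind of pattern and anomaly spotting (not the model used in the study), an unsupervised detector can be fitted to features summarizing ordinary seismic background and then asked to flag windows that deviate from it. The feature columns and numbers below are invented for the sketch.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per weekly window: peak amplitude, dominant frequency, event rate
# (hypothetical stand-ins, not the study's actual inputs).
rng = np.random.default_rng(0)
background = rng.normal(loc=[1.0, 5.0, 3.0], scale=0.2, size=(500, 3))  # ordinary weeks
precursor = rng.normal(loc=[2.5, 8.0, 9.0], scale=0.3, size=(5, 3))     # unusual weeks

# Fit an unsupervised anomaly detector on the background and flag deviations.
model = IsolationForest(contamination=0.01, random_state=0).fit(background)
print(model.predict(np.vstack([background[:10], precursor])))  # -1 marks anomalies
```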

Oct 7, 2023

Stanford introduces autonomous robot dogs with AI brains

Posted in categories: information science, robotics/AI

There’s a new kind of robot dog in town and it gets its prowess from an artificial intelligence (AI) algorithm.

An AI algorithm for a brain

The new vision-based algorithm, according to AI researchers at Stanford University and Shanghai Qi Zhi Institute who lead these efforts, enables the robodogs to scale tall objects, jump across gaps, crawl under low-hanging structures, and squeeze between cracks. This is because the robodog’s algorithm serves as its brain.

Oct 6, 2023

Researchers create a neural network for genomics that explains how it achieves accurate predictions

Posted in categories: biotech/medical, information science, robotics/AI

A team of New York University computer scientists has created a neural network that can explain how it reaches its predictions. The work reveals what accounts for the functionality of neural networks—the engines that drive artificial intelligence and machine learning—thereby illuminating a process that has largely been concealed from users.

The breakthrough centers on a specific usage of neural networks that has become popular in recent years—tackling challenging biological questions. Among these are examinations of the intricacies of RNA splicing—the focal point of the study—which plays a role in transferring information from DNA to functional RNA and protein products.

“Many neural networks are black boxes—these algorithms cannot explain how they work, raising concerns about their trustworthiness and stifling progress into understanding the underlying biological processes of genome encoding,” says Oded Regev, a computer science professor at NYU’s Courant Institute of Mathematical Sciences and the senior author of the paper, which was published in the Proceedings of the National Academy of Sciences.
