
Inca Knots Inspire Quantum Computer

We think of data storage as a modern problem, but even ancient civilizations kept records. While much of the world used stone tablets or other media that didn’t survive the centuries, the Incas used something called quipu, which encoded numeric data in knotted strings. Now this ancient system of recording numbers has inspired a new way to encode qubits in a quantum computer.

With quipu, knots in a string represent a number. By analogy, a conventional qubit would be as if you used a string to form a 0 or 1 shape on a tabletop. A breeze or other “noise” would easily disturb your encoding. But knots stay tied even if you pick the strings up and move them around. The new qubits are the same, encoding data in the topology of the material.

In practice, Quantinuum’s H1 processor uses 10 ytterbium ions trapped by lasers pulsing in a Fibonacci sequence. If you consider a conventional qubit to be a one-dimensional affair — the qubit’s state — this new system acts like a two-dimensional system, where the second dimension is time. This is easier to construct than conventional 2D quantum structures but offers at least some of the same inherent error resilience.
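The “Fibonacci sequence” of pulses refers to a quasiperiodic ordering of two pulse types, which can be generated with a simple substitution rule. Here is a minimal sketch, assuming the standard Fibonacci-word construction; the actual pulse shapes and timings on the H1 hardware are not reproduced here:

```python
# Minimal sketch: generate the quasiperiodic "Fibonacci word" that orders
# two laser-pulse types (A and B). Assumes the drive is described by the
# standard Fibonacci substitution sequence; hardware details are omitted.

def fibonacci_word(generations: int) -> str:
    """Apply the substitution A -> AB, B -> A repeatedly."""
    word = "A"
    for _ in range(generations):
        word = "".join("AB" if c == "A" else "A" for c in word)
    return word

if __name__ == "__main__":
    seq = fibonacci_word(7)
    print(seq)       # ABAABABA... never exactly repeats
    print(len(seq))  # word lengths follow the Fibonacci numbers
```

Because the pattern never repeats, the drive is ordered in time without being periodic, which is the property the error-resilience argument relies on.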

Elon Musk — People Will Understand — Finally It’s Happening!

In this video, Elon Musk explains why we could meet aliens soon, and he may be on to something. Musk disagrees with research arguing that aliens do not exist, explains why the Drake equation is important, and argues that the Fermi paradox is wrong.
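For reference, the Drake equation multiplies seven factors to estimate the number of communicating civilizations in the galaxy. A quick sketch with placeholder values follows; every number below is an illustrative guess, not an estimate from Musk or the video:

```python
# Illustrative back-of-the-envelope Drake equation:
# N = R* * fp * ne * fl * fi * fc * L. All values are placeholder guesses.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of communicating civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(R_star=1.0,   # star formation rate, stars per year
          f_p=0.5,      # fraction of stars with planets
          n_e=2,        # habitable planets per planetary system
          f_l=0.1,      # fraction of those where life arises
          f_i=0.01,     # fraction that evolve intelligence
          f_c=0.01,     # fraction that emit detectable signals
          L=10_000)     # years such a civilization keeps signaling
print(N)                # ~0.1 with these particular guesses
```

The point of the exercise is that the answer swings by orders of magnitude depending on the guesses, which is why the equation fuels rather than settles the Fermi paradox debate.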


Gate that Aliens weren’t able to overcome 👉 https://youtu.be/llBm-4IGI9k

Elon Musk Destroys Apple 👉 https://youtu.be/MXIswmG5xyE

Elon Musk — “Delete Your Facebook” 👉 https://youtu.be/HA7bhpDaQ3Q

Elon Musk: I Will Tell You All about The Aliens: 👉 https://youtu.be/d8RBC3F2kC8

#58 Dr. Ben Goertzel — Artificial General Intelligence

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

The field of Artificial Intelligence was founded in the mid 1950s with the aim of constructing “thinking machines”, that is to say, computer systems with human-like general intelligence. Think of humanoid robots that not only look human but act and think with intelligence equal to, and ultimately greater than, that of human beings. But in the intervening years, the field has drifted far from its ambitious, old-fashioned roots.

Dr. Ben Goertzel is an artificial intelligence researcher and the CEO and founder of SingularityNET, a project combining artificial intelligence and blockchain to democratize access to artificial intelligence. Ben seeks to fulfil the original ambitions of the field. He graduated with a PhD in Mathematics from Temple University in 1990. His approach to AGI over many decades has been inspired by many disciplines, in particular by human cognitive psychology and computer science. To date, Ben’s work has been mostly theoretically driven. He thinks that most of the deep learning approaches to AGI today try to model the brain: they may have a loose analogy to human neuroscience, but they have not tried to derive the details of an AGI architecture from an overall conception of what a mind is. Ben thinks that what matters for creating human-level (or greater) intelligence is having the right information-processing architecture, not the underlying mechanics via which the architecture is implemented.

Ben thinks that there is a certain set of key cognitive processes and interactions that AGI systems must implement explicitly, such as working and long-term memory, deliberative and reactive processing, and perception and action. In his view, biological systems tend to be messy, complex and integrative; searching for a single “algorithm of general intelligence” is an inappropriate attempt to project the aesthetics of physics or theoretical computer science into a qualitatively different domain.
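To give a flavor of what “implementing cognitive processes explicitly” could mean, here is a toy skeleton in which memory stores and processing modes are distinct, named components. Every class and method here is hypothetical and purely illustrative, not OpenCog or SingularityNET code:

```python
# Hypothetical skeleton of an architecture with explicit cognitive processes:
# separate memory stores plus reactive (fast) and deliberative (slow) loops.

from collections import deque

class WorkingMemory:
    def __init__(self, capacity=7):
        self.items = deque(maxlen=capacity)  # small, volatile store
    def attend(self, item):
        self.items.append(item)

class LongTermMemory:
    def __init__(self):
        self.store = {}                      # persistent associative store
    def consolidate(self, key, value):
        self.store[key] = value

class Agent:
    """Reactive and deliberative processing over shared memories."""
    def __init__(self):
        self.wm, self.ltm = WorkingMemory(), LongTermMemory()
    def reactive_step(self, percept):
        self.wm.attend(percept)              # fast, rule-like response
        return f"reflex:{percept}"
    def deliberative_step(self):
        # slow loop: integrate working memory into long-term knowledge
        for item in self.wm.items:
            self.ltm.consolidate(item, "seen")
        return list(self.ltm.store)

agent = Agent()
agent.reactive_step("red light")
print(agent.deliberative_step())
```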

Panel: Dr. Tim Scarfe, Dr. Yannic Kilcher, Dr. Keith Duggar.

Pod version: https://anchor.fm/machinelearningstreettalk/episodes/58-Dr–…e-e15p20i.

DayDreamer: An algorithm to quickly teach robots new behaviors in the real world

Training robots to complete tasks in the real world can be a very time-consuming process, which involves building a fast and efficient simulator, performing numerous trials on it, and then transferring the behaviors learned during these trials to the real world. In many cases, however, the performance achieved in simulation does not match that attained in the real world, due to unpredictable changes in the environment or task.

Researchers at the University of California, Berkeley (UC Berkeley) have recently developed DayDreamer, a tool that could be used to train robots to complete tasks more effectively. Their approach, introduced in a paper pre-published on arXiv, is based on learning models of the world that allow robots to predict the outcomes of their movements and actions, reducing the need for extensive trial and error training in the real-world.

“We wanted to build robots that continuously learn directly in the real world, without having to create a simulation environment,” Danijar Hafner, one of the researchers who carried out the study, told TechXplore. “We had only learned world models of video games before, so it was super exciting to see that the same algorithm allows robots to quickly learn in the real world, too!”
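To make the world-model idea concrete, here is a toy loop: collect a little real experience, fit a predictive model of the dynamics, then choose actions using the model’s predictions instead of more real trials. This is a deliberately simplified linear sketch, not the authors’ Dreamer implementation:

```python
# Toy sketch of the world-model idea: learn a dynamics model from real
# interactions, then act by "imagining" outcomes inside the model.

import numpy as np

rng = np.random.default_rng(0)

# Unknown real dynamics the robot faces: s' = 0.8*s + 0.5*a (+ noise)
def real_step(s, a):
    return 0.8 * s + 0.5 * a + rng.normal(scale=0.01)

# 1) Collect a small amount of real-world experience
S, A, S2 = [], [], []
s = 0.0
for _ in range(200):
    a = rng.uniform(-1, 1)
    s2 = real_step(s, a)
    S.append(s); A.append(a); S2.append(s2)
    s = s2

# 2) Fit a linear world model s' ~ w1*s + w2*a by least squares
X = np.column_stack([S, A])
w, *_ = np.linalg.lstsq(X, np.array(S2), rcond=None)

# 3) Plan in imagination: pick the action whose *predicted* next state
#    is closest to a goal, without touching the real system again.
goal = 1.0
candidates = np.linspace(-1, 1, 201)
pred = w[0] * s + w[1] * candidates
best_a = candidates[np.argmin(np.abs(pred - goal))]
print(f"learned weights {w}, chosen action {best_a:.2f}")
```

The real system replaces the linear model with a learned latent dynamics network and plans over imagined trajectories, but the division of labor, real data for model fitting and imagination for behavior learning, is the same.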

Team scripts breakthrough quantum algorithm

City College of New York physicist Pouyan Ghaemi and his research team are claiming significant progress in using quantum computers to study and predict how the state of a large number of interacting quantum particles evolves over time. They did this by developing a quantum algorithm that they run on an IBM quantum computer. “To the best of our knowledge, such a quantum algorithm, which can simulate how interacting quantum particles evolve over time, has not been implemented before,” said Ghaemi, associate professor in CCNY’s Division of Science.

Entitled “Probing geometric excitations of fractional quantum Hall states on quantum computers,” the study appears in the journal Physical Review Letters.

“Quantum mechanics is known to be the underlying mechanism governing the properties of elementary particles such as electrons,” said Ghaemi. “But unfortunately there is no easy way to use the equations of quantum mechanics when we want to study the properties of a large number of electrons that are also exerting force on each other due to their electric charge.”
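For scale, here is the brute-force classical approach to the kind of problem described: build the Hamiltonian of a small chain of interacting spins and exponentiate it to evolve the state in time. The matrix doubles in size with every added particle, which is exactly why a quantum algorithm is attractive. This is a generic illustration, not the paper’s algorithm:

```python
# Classical reference calculation: exact time evolution of a small
# interacting spin chain. Cost grows as 2^n, motivating quantum hardware.

import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
Z = np.diag([1.0, -1.0])                  # Pauli Z

def op(single, site, n):
    """Embed a single-site operator at `site` in an n-spin chain."""
    mats = [single if i == site else I2 for i in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4  # 4 spins -> 16-dimensional Hilbert space
H = sum(op(Z, i, n) @ op(Z, i + 1, n) for i in range(n - 1))  # interactions
H = H + 0.5 * sum(op(X, i, n) for i in range(n))              # transverse field

psi0 = np.zeros(2 ** n)
psi0[0] = 1.0                   # all spins up
U = expm(-1j * H * 1.0)         # evolve for time t = 1
psi_t = U @ psi0
print(np.abs(psi_t) ** 2)       # measurement probabilities after evolution
```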

Watch: 🤖 Will AI become an “existential threat”?

What does the future of AI look like? Let’s try out some AI software that’s readily available for consumers and see how it holds up against the human brain.

🦾 AI can outperform humans. But at what cost? 👉 https://cybernews.com/editorial/ai-can-outperform-humans-but-at-what-cost/

Whether you welcome our new AI overlords with open arms, or you’re a little terrified about what an AI future may look like, many say it’s not really a question of ‘if,’ but more of a question of ‘when.’

Okay, you’ve got AI technologies at every scale, from Siri and self-driving cars to text generators and humanoid robots. But what is the real threat? As far back as 2013, Oxford University (ironically) used a machine-learning algorithm to determine whether 702 different jobs throughout America could be automated, and found that a whopping 47% could in fact be replaced by machines.

A huge concern that comes alongside this is whether the technology will be reliable enough. We’re already seeing AI technology in countless professions, most recently the boom of AI-generated text used in over 300 different apps. It’s even used beyond this planet, out in space. If anything, this is a rude awakening to the future potential of AI technology outside the industrial market.

🦾 Do humans stand a chance against AI technology?

Machine Learning Paves Way for Smarter Particle Accelerators

Staff Scientist Daniele Filippetto working on the High Repetition-Rate Electron Scattering Apparatus. (Credit: Thor Swift/Berkeley Lab)

– By Will Ferguson

Scientists have developed a new machine-learning platform that makes the algorithms that control particle beams and lasers smarter than ever before. Their work could help lead to the development of new and improved particle accelerators that will help scientists unlock the secrets of the subatomic world.

Roboticists discover alternative physics

Energy, mass, velocity. These three variables make up Einstein’s iconic equation E = mc². But how did Einstein know about these concepts in the first place? A precursor step to understanding physics is identifying relevant variables. Without the concepts of energy, mass, and velocity, not even Einstein could have discovered relativity. But can such variables be discovered automatically? Doing so could greatly accelerate scientific discovery.

This is the question that researchers at Columbia Engineering posed to a new AI program. The program was designed to observe physical phenomena through a video camera, then try to search for the minimal set of fundamental variables that fully describe the observed dynamics. The study was published on July 25 in Nature Computational Science.

The researchers began by feeding the system raw video footage of phenomena for which they already knew the answer. For example, they fed in a video of a swinging double pendulum known to have exactly four “state variables”: the angle and angular velocity of each of the two arms. After a few hours of analysis, the AI produced the answer: 4.7.
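A non-integer answer like 4.7 makes sense if the program is estimating an intrinsic dimension from data rather than counting variables one by one. As a conceptual illustration only (the Columbia study uses its own pipeline), the simple Two-NN estimator of Facco et al. (2017) recovers a data set’s dimension from nearest-neighbor distance ratios:

```python
# Two-NN intrinsic-dimension estimator (Facco et al., 2017): conceptual
# stand-in for "how many variables describe this data", which can be
# fractional. Not the Columbia group's code.

import numpy as np

def two_nn_dimension(points):
    """Estimate intrinsic dimension from 2nd/1st nearest-neighbor ratios."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    d.sort(axis=1)
    mu = d[:, 1] / d[:, 0]               # ratio of 2nd to 1st neighbor distance
    mu = mu[mu > 1.0]                     # drop exact ties
    return len(mu) / np.sum(np.log(mu))   # maximum-likelihood estimate

# Sanity check: a 2-D plane of points embedded in 5-D space
rng = np.random.default_rng(1)
flat = rng.normal(size=(1000, 2))
embedded = np.pad(flat, ((0, 0), (0, 3)))  # append three zero coordinates
print(two_nn_dimension(embedded))          # prints a value close to 2
```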

Kinetic energy: Newton vs. Einstein | Who’s right?

Using Newtonian physics, physicists have found an expression for the value of kinetic energy, specifically KE = ½ m v^2. Einstein came up with a very different expression, specifically KE = (gamma – 1) m c^2. In this video, Fermilab’s Dr. Don Lincoln shows how these two equations are the same at low energy and how you get from one to the other.
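A quick numerical check of that claim: evaluate both expressions at increasing speeds and watch the ratio drift away from 1 as v approaches c. A minimal sketch, with m = 1 kg and arbitrarily chosen speeds:

```python
# Compare Newtonian KE = 0.5*m*v**2 with relativistic KE = (gamma - 1)*m*c**2.
# At low speed the two agree; the gap opens as v approaches c.

import math

c = 299_792_458.0            # speed of light, m/s
m = 1.0                      # mass, kg

for v in [1e3, 1e6, 3e7, 1e8]:   # from fast jet to a third of light speed
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    ke_newton = 0.5 * m * v ** 2
    ke_einstein = (gamma - 1.0) * m * c ** 2
    print(f"v = {v:10.0e} m/s   Einstein/Newton = {ke_einstein / ke_newton:.6f}")
```

The ratio is essentially 1.000000 at everyday speeds and grows to roughly 1.09 at a third of the speed of light, matching the video’s point that the Newtonian formula is the low-speed limit of the relativistic one.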

Relativity playlist:

Fermilab physics 101:
https://www.fnal.gov/pub/science/particle-physics-101/index.html

Fermilab home page:
https://fnal.gov

Protein sequence design by deep learning

The design of protein sequences that can precisely fold into pre-specified 3D structures is a challenging task. A recently proposed deep-learning algorithm improves such designs when compared with traditional, physics-based protein design approaches.

ABACUS-R is trained on the task of predicting the amino acid (AA) at a given residue, using information about that residue’s backbone structure, and the backbone and AA identities of neighboring residues in space. To do this, ABACUS-R uses the Transformer neural network architecture [6], which offers flexibility in representing and integrating information between different residues. Although these aspects are similar to a previous network [2], ABACUS-R adds auxiliary training tasks, such as predicting secondary structures, solvent exposure and sidechain torsion angles. These outputs aren’t needed during design but help with training and increase sequence recovery by about 6%. To design a protein sequence, ABACUS-R uses an iterative ‘denoising’ process (Fig.
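The iterative denoising loop can be sketched in a few lines: start from a random sequence and repeatedly re-predict every residue from its neighbors until nothing changes. The predictor below is a stand-in stub, not the trained ABACUS-R Transformer, and the neighbor definition is simplified to sequence-adjacent residues rather than spatial neighbors:

```python
# Conceptual sketch of an iterative 'denoising' sequence-design loop.
# `predict_aa` is a deterministic stub standing in for the trained model.

import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
random.seed(0)

def predict_aa(backbone_feats, neighbor_aas):
    """Stub predictor: deterministic function of the local context.
    A real model would return Transformer logits over the 20 AAs."""
    key = (backbone_feats, tuple(neighbor_aas))
    return AMINO_ACIDS[hash(key) % 20]

def design(backbone, n_iters=50):
    n = len(backbone)
    seq = [random.choice(AMINO_ACIDS) for _ in range(n)]  # noisy start
    for _ in range(n_iters):
        # re-predict every residue from its current neighbors
        new = [predict_aa(backbone[i], seq[max(0, i - 2):i] + seq[i + 1:i + 3])
               for i in range(n)]
        if new == seq:      # sequence is self-consistent: converged
            break
        seq = new
    return "".join(seq)

print(design(backbone=tuple(range(30))))
```

The key property this illustrates is self-consistency: design stops when every residue is the one the network would itself predict given all the others.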