Archive for the ‘information science’ category: Page 106

Dec 1, 2022

New AI-enabled study unravels the principles of aging

Posted in categories: biotech/medical, information science, life extension, robotics/AI

New work from Gero, conducted in collaboration with researchers from Roswell Park Comprehensive Cancer Center and Genome Protection Inc. and published in Nature Communications, demonstrates the power of AI, combined with analytical tools borrowed from the physics of complex systems, to provide insights into the nature of aging and resilience, and into future medical interventions for age-related diseases, including cancer.

Longevity.Technology: Modern AI systems exhibit superhuman-level performance in medical diagnostics applications, such as identifying cancer on MRI scans. This time, the researchers went a step further and used AI to figure out the principles that describe how the biological process of aging unfolds over time.

The researchers trained an AI algorithm on a large dataset composed of multiple blood tests taken over the life course of tens of thousands of aging mice to predict the future health state of an animal from its current state. The artificial neural network precisely projected the health condition of an aging mouse using a single variable, termed the dynamic frailty indicator (dFI), which accurately characterises the damage that an animal accumulates throughout life [1].
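
As a rough sketch of the kind of model involved, the snippet below trains a small regressor to map current blood-test features to a future frailty score; the feature count, network size, and synthetic data are placeholders, not Gero's actual pipeline or dataset.

```python
# Illustrative sketch: predict a future frailty score from current blood-test
# features (synthetic data; not Gero's actual model or measurements).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_mice, n_features = 10_000, 12                       # e.g., blood-panel measurements
X_current = rng.normal(size=(n_mice, n_features))     # blood tests at time t
y_future = X_current @ rng.normal(size=n_features) + 0.1 * rng.normal(size=n_mice)  # frailty at t + dt (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X_current, y_future, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out mice:", model.score(X_te, y_te))
```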

Dec 1, 2022

We built an algorithm that predicts the length of court sentences — could AI play a role in the justice system?

Posted in categories: information science, law, robotics/AI

Artificial intelligence could help create transparency and consistency in the legal system – our model shows how.

Nov 30, 2022

In reinforcement learning, slower networks can learn faster

Posted in categories: entertainment, information science

We then tested the new algorithms, called DQN with Proximal updates (DQN Pro) and Rainbow Pro, on a standard set of 55 Atari games. The graph of the results shows that the Pro agents outperform their counterparts: the basic DQN agent is able to obtain human-level performance after 120 million interactions with the environment (frames), and Rainbow Pro achieves a 40% relative improvement over the original Rainbow agent.

Further, to ensure that proximal updates do in fact result in smoother and slower parameter changes, we measure the norm differences between consecutive DQN solutions. We expect the magnitude of our updates to be smaller when using proximal updates. In the graphs below, we confirm this expectation on the four different Atari games tested.

Overall, our empirical and theoretical results support the claim that when optimizing for a new solution in deep RL, it is beneficial for the optimizer to gravitate toward the previous solution. More importantly, we see that simple improvements in deep-RL optimization can lead to significant positive gains in the agent’s performance. We take this as evidence that further exploration of optimization algorithms in deep RL would be fruitful.
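
A minimal sketch of this idea, assuming the proximal term is a simple squared penalty pulling the online network toward the previous solution's parameters (held here in a hypothetical anchor_net); the exact form and coefficient used in DQN Pro may differ.

```python
# Sketch: DQN-style TD loss plus a proximal penalty that keeps the new
# parameters close to the previous solution (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

def dqn_pro_loss(online_net, target_net, anchor_net, batch, gamma=0.99, c_prox=0.1):
    s, a, r, s_next, done = batch
    q_sa = online_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values
        td_target = r + gamma * (1 - done) * q_next
    td_loss = F.smooth_l1_loss(q_sa, td_target)

    # Proximal term: penalize distance from the previous solution's parameters.
    prox = sum((p - p_old.detach()).pow(2).sum()
               for p, p_old in zip(online_net.parameters(), anchor_net.parameters()))
    return td_loss + c_prox * prox
```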

Nov 30, 2022

This Artificial Intelligence (AI) Model Knows How to Detect Novel Objects During Object Detection

Posted in categories: climatology, information science, robotics/AI

Object detection has been an important task in the computer vision domain in recent decades. The goal is to detect instances of objects, such as humans, cars, etc., in digital images. Hundreds of methods have been developed to answer a single question: What objects are where?

Traditional methods tried to answer this question by extracting hand-crafted features, like edges and corners, within the image. Most of these approaches used a sliding-window approach, meaning that they kept checking small parts of the image at different scales to see if any of these parts contained the object they were looking for. This was very time-consuming, and even the slightest change in the object's shape, lighting, etc., could cause the algorithm to miss it.
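
A toy illustration of that classical sliding-window idea, with a placeholder scoring function standing in for the hand-crafted features and classifier (everything below is illustrative):

```python
# Toy sliding-window detector: scan fixed-size windows over a coarse image
# pyramid and keep windows whose (placeholder) score clears a threshold.
import numpy as np

def score_window(patch):
    # Stand-in for hand-crafted features plus a classifier (e.g., HOG + SVM).
    return patch.mean()

def sliding_window_detect(image, window=32, stride=16, steps=(1, 2, 4), thresh=0.6):
    detections = []
    for step in steps:                       # crude image pyramid via subsampling
        scaled = image[::step, ::step]
        h, w = scaled.shape
        for y in range(0, h - window + 1, stride):
            for x in range(0, w - window + 1, stride):
                patch = scaled[y:y + window, x:x + window]
                if score_window(patch) > thresh:
                    detections.append((x * step, y * step, window * step))
    return detections

dets = sliding_window_detect(np.random.rand(256, 256))
print(len(dets), "candidate windows")
```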

Then there came the deep learning era. With the increasing capability of computer hardware and the introduction of large-scale datasets, it became possible to exploit the advancement in the deep learning domain to develop a reliable and robust object detection algorithm that could work in an end-to-end manner.
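
By contrast, a modern end-to-end detector takes a whole image and returns boxes, labels, and scores in a single forward pass. The sketch below uses a pretrained Faster R-CNN from torchvision purely to illustrate that workflow; it is not the specific model discussed in the article.

```python
# Sketch: running a pretrained end-to-end detector from torchvision.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)               # dummy RGB image in [0, 1]
with torch.no_grad():
    predictions = model([image])[0]           # boxes, labels, scores in one pass
print(predictions["boxes"].shape, predictions["scores"][:5])
```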

Nov 29, 2022

Quantum Annealing Pioneer D-Wave Introduces Expanded Hybrid Solver

Posted in categories: computing, information science, quantum physics

D-Wave Systems, a pioneer in quantum annealing-based computing, today announced significant upgrades to its constrained quadratic model (CQM) hybrid solver that should make it easier to use and able to tackle much larger problems, said the company. The model can now handle optimization problems with up to 1 million variables (including continuous variables) and 100,000 constraints. In addition, D-Wave has introduced a “new [pre-solver] set of fast classical algorithms that reduces the size of the problem and allows for larger models to be submitted to the hybrid solver.”

While talk of using hybrid quantum-classical solutions has intensified recently among the gate-based quantum computer developer community, D-Wave has actively explored hybrid approaches for use with its quantum annealing computers for some time. It introduced a hybrid solver service (HSS) as part of its Leap web access portal and Ocean SDK development kit in 2020. The broad hybrid idea is to use classical compute resources where they make sense – for example, GPUs perform matrix multiplication faster – and use quantum resources where they add benefit.
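
For context, a minimal example of building a small CQM and submitting it to the Leap hybrid solver with D-Wave's Ocean SDK (requires a Leap account and API token; the toy objective and constraint are placeholders):

```python
# Minimal CQM example for D-Wave's hybrid solver (toy problem, illustrative only).
from dimod import Binary, ConstrainedQuadraticModel
from dwave.system import LeapHybridCQMSampler

x, y, z = Binary("x"), Binary("y"), Binary("z")

cqm = ConstrainedQuadraticModel()
cqm.set_objective(-2 * x - 3 * y - z + x * y)       # maximize value => minimize the negative
cqm.add_constraint(x + y + z <= 2, label="pick at most two")

sampler = LeapHybridCQMSampler()                    # needs a configured Leap API token
sampleset = sampler.sample_cqm(cqm, label="toy CQM")
print(sampleset.filter(lambda d: d.is_feasible).first)
```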

Nov 28, 2022

Researchers publish 31,618 molecules with potential for energy storage in batteries

Posted in categories: chemistry, information science, robotics/AI, supercomputing

Scientists from the Dutch Institute for Fundamental Energy Research (DIFFER) have created a database of 31,618 molecules that could potentially be used in future redox-flow batteries. These batteries hold great promise for energy storage. Among other things, the researchers used artificial intelligence and supercomputers to identify the molecules’ properties. Today, they publish their findings in the journal Scientific Data.

In recent years, chemists have designed hundreds of molecules that could potentially be useful in flow batteries for energy storage. It would be wonderful, researchers from DIFFER in Eindhoven (the Netherlands) imagined, if the properties of these molecules were quickly and easily accessible in a database. The problem, however, is that for many molecules the properties are not known. Examples of molecular properties are redox potential and water solubility. Those are important since they are related to the power generation capability and energy density of redox flow batteries.

To find out the still-unknown properties of molecules, the researchers performed four steps. First, they used a computer and smart algorithms to create thousands of virtual variants of two types of molecules. These molecule families, the quinones and aza-aromatics, are good at reversibly accepting and donating electrons. That is important for batteries. The researchers fed the computer with backbone structures of 24 quinones and 28 aza-aromatics plus five different chemically relevant side groups. From that, the computer created 31,618 different molecules.
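
As a rough illustration of this kind of combinatorial enumeration (the backbones, substitutable positions, and side groups below are placeholders, not DIFFER's actual chemistry, which would be handled with a cheminformatics toolkit):

```python
# Toy enumeration: attach side groups to open positions on backbone structures.
from itertools import product

backbones = {"quinone_1": 2, "quinone_2": 3, "aza_aromatic_1": 2}   # name -> substitutable positions
side_groups = ["-H", "-OH", "-NH2", "-SO3H", "-COOH"]               # five illustrative side groups

molecules = []
for name, n_positions in backbones.items():
    for combo in product(side_groups, repeat=n_positions):
        molecules.append((name, combo))

print(len(molecules), "candidate molecules")   # sum of 5**n over the backbones
```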

Nov 28, 2022

Machine-Learning Model Reveals Protein-Folding Physics

Posted in categories: biological, information science, physics, robotics/AI

An algorithm that already predicts how proteins fold might also shed light on the physical principles that dictate this folding.

Proteins control every cell-level aspect of life, from immunity to brain activity. They are encoded by long sequences of compounds called amino acids that fold into large, complex 3D structures. Computational algorithms can model the physical amino-acid interactions that drive this folding [1]. But determining the resulting protein structures has remained challenging. In a recent breakthrough, a machine-learning model called AlphaFold [2] predicted the 3D structure of proteins from their amino-acid sequences. Now James Roney and Sergey Ovchinnikov of Harvard University have shown that AlphaFold has learned how to predict protein folding in a way that reflects the underlying physical amino-acid interactions [3]. This finding suggests that machine learning could guide the understanding of physical processes too complex to be accurately modeled from first principles.

Predicting the 3D structure of a specific protein is difficult because of the sheer number of ways in which the amino-acid sequence could fold. AlphaFold can start its computational search for the likely structure from a template (a known structure for similar proteins). Alternatively, and more commonly, AlphaFold can use information about the biological evolution of amino-acid sequences in the same protein family (proteins with similar functions that likely have comparable folds). This information is helpful because consistent correlated evolutionary changes in pairs of amino acids can indicate that these amino acids directly interact, even though they may be far apart in the sequence [4, 5]. Such information can be extracted from the multiple sequence alignments (MSAs) of protein families, determined from, for example, evolutionary variations of sequences across different biological species.
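
A toy example of extracting such a coevolution signal from an MSA, using simple mutual information between alignment columns; this is a crude stand-in for the richer statistics AlphaFold learns, and the alignment itself is made up.

```python
# Toy coevolution analysis: mutual information between columns of a small MSA.
import math
from collections import Counter

msa = [          # made-up aligned sequences (rows = species, columns = positions)
    "MKVLA",
    "MKVIA",
    "MRVLG",
    "MRVIG",
]

def mutual_information(col_i, col_j):
    pairs = list(zip(col_i, col_j))
    n = len(pairs)
    p_i, p_j, p_ij = Counter(col_i), Counter(col_j), Counter(pairs)
    return sum((c / n) * math.log2((c / n) / ((p_i[a] / n) * (p_j[b] / n)))
               for (a, b), c in p_ij.items())

columns = list(zip(*msa))
L = len(columns)
scores = {(i, j): mutual_information(columns[i], columns[j])
          for i in range(L) for j in range(i + 1, L)}
print(max(scores, key=scores.get))   # the most strongly co-varying pair of positions
```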

Nov 28, 2022

AI invents millions of materials that don’t yet exist

Posted in categories: information science, robotics/AI

UC San Diego nanoengineering professor Shyue Ping Ong described M3GNet as “an AlphaFold for materials”, referring to the breakthrough AI algorithm built by Google’s DeepMind that can predict protein structures.

“Similar to proteins, we need to know the structure of a material to predict its properties,” said Professor Ong.

“We truly believe that the M3GNet architecture is a transformative tool that can greatly expand our ability to explore new material chemistries and structures.”

Nov 28, 2022

The Friedmann equations, and how they are related to protests in China

Posted in categories: biotech/medical, government, information science

NEW DELHI: Among all the protests that have erupted across China following the strict quarantine measures enforced by the government for Covid-19, one form that has stood out is the display of a physics equation.

In images widely being circulated on social media, students of Beijing’s Tsinghua University can be seen holding sheets on which is written one of the Friedmann equations.

What these equations have to do with the subject of the protests is open to speculation. Many on social media have suggested that it is a play on the words “free man”. Another view is that it symbolises a free and “open” China, because the Friedmann equations describe an “open” (expanding) universe.
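
For reference, the first Friedmann equation relates the universe's expansion rate to its energy density, spatial curvature, and the cosmological constant; an "open", ever-expanding universe corresponds to negative curvature (k = -1):

```latex
\left(\frac{\dot{a}}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho
  - \frac{k c^{2}}{a^{2}}
  + \frac{\Lambda c^{2}}{3}
```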

Nov 28, 2022

Completing Einstein’s Theories — A Particle Physics Breakthrough

Posted in categories: information science, particle physics

Osaka University researchers show the relativistic contraction of an electric field produced by fast-moving charged particles, as predicted by Einstein’s theory, which can help improve radiation and particle physics research.

Over a century ago, one of the most renowned modern physicists, Albert Einstein, proposed the ground-breaking theory of special relativity. Most of what we know about the universe is based on this theory; however, a portion of it had not been experimentally demonstrated until now. Scientists from Osaka University's Institute of Laser Engineering utilized ultrafast electro-optic measurements for the first time to visualize the contraction of the electric field surrounding an electron beam traveling at near the speed of light and to demonstrate the generation process.

According to Einstein’s theory of special relativity, one must use a “Lorentz transformation” that combines space and time coordinates in order to accurately describe the motion of objects passing an observer at speeds near the speed of light. He was able to explain how these transformations resulted in self-consistent equations for electric and magnetic fields.
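
For reference, the Lorentz transformation for motion along the x axis, together with the textbook field of a point charge moving at constant velocity, which shows the field being compressed toward the plane perpendicular to the motion as the speed approaches c:

```latex
x' = \gamma\,(x - vt), \qquad
t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}

\mathbf{E}(r,\theta)
  = \frac{q}{4\pi\varepsilon_{0}}\,
    \frac{1 - \beta^{2}}{\left(1 - \beta^{2}\sin^{2}\theta\right)^{3/2}}\,
    \frac{\hat{\mathbf{r}}}{r^{2}},
\qquad \beta = \frac{v}{c}
```

Here θ is the angle between the observation direction and the velocity; as β approaches 1, the field concentrates near θ = 90°, i.e., it contracts into the plane transverse to the particle's motion.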