Archive for the ‘information science’ category: Page 105
Dec 1, 2022
This Artificial Intelligence Paper Presents an Advanced Method for Differential Privacy in Image Recognition with Better Accuracy
Posted by Genevieve Klien in categories: biotech/medical, finance, information science, robotics/AI
Machine learning has advanced considerably in several areas thanks to its performance in recent years. The computing capacity of modern computers and graphics cards has allowed deep learning to achieve results that sometimes exceed those of human experts. However, its use in sensitive areas such as medicine or finance raises confidentiality concerns. Differential privacy (DP), a formal privacy guarantee, prevents adversaries with access to machine learning models from recovering information about specific training points. The most common training approach for differential privacy in image recognition is differentially private stochastic gradient descent (DPSGD). However, the deployment of differential privacy is limited by the performance deterioration caused by current DPSGD implementations.
Existing methods for differentially private deep learning leave room for improvement because, during stochastic gradient descent, they accept every model update regardless of whether it improves the objective function. In some updates, the noise added to the gradients can worsen the objective, especially when convergence is imminent. These effects degrade the resulting models: the optimization target deteriorates and the privacy budget is wasted. To address this problem, a research team from Shanghai University in China proposes simulated annealing-based differentially private stochastic gradient descent (SA-DPSGD), which accepts a candidate update with a probability that depends on the quality of the update and the number of iterations.
Concretely, the model update is accepted if it gives a better objective function value. Otherwise, the update is rejected with a certain probability. To prevent settling into a local optimum, the authors suggest using probabilistic rejections rather than deterministic ones and limiting the number of continuous rejections. Therefore, the simulated annealing algorithm is used to select model updates with probability during the stochastic gradient descent process.
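The acceptance rule described above can be sketched as a Metropolis-style test whose temperature decays with the iteration count. This is an illustrative simplification, not the authors' exact algorithm; the function name, temperature schedule, and default parameters below are assumptions for the sketch:

```python
import math
import random

def anneal_accept(old_loss, new_loss, step, t0=1.0, decay=0.999):
    """Simulated-annealing acceptance for a candidate model update.

    Updates that improve the objective are always accepted; worse
    updates are accepted with probability exp(-delta / T), where the
    temperature T shrinks as training proceeds, so late-stage
    degradations are almost always rejected.
    """
    if new_loss <= old_loss:
        return True
    temperature = max(t0 * (decay ** step), 1e-12)
    return random.random() < math.exp(-(new_loss - old_loss) / temperature)
```

Early in training the rule behaves much like plain DPSGD; near convergence, the low temperature makes it reject noisy updates that would waste the privacy budget. A full SA-DPSGD implementation would also cap the number of consecutive rejections, as the authors describe.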
Dec 1, 2022
New AI-enabled study unravels the principles of aging
Posted by Shubham Ghosh Roy in categories: biotech/medical, information science, life extension, robotics/AI
New work from Gero, conducted in collaboration with researchers from Roswell Park Comprehensive Cancer Center and Genome Protection Inc. and published in Nature Communications, demonstrates the power of AI combined with analytical tools borrowed from the physics of complex systems to provide insights into the nature of aging, resilience and future medical interventions for age-related diseases including cancer.
Longevity.Technology: Modern AI systems exhibit superhuman-level performance in medical diagnostics applications, such as identifying cancer on MRI scans. This time, the researchers went one step further and used AI to uncover principles that describe how the biological process of aging unfolds over time.
The researchers trained an AI algorithm on a large dataset of multiple blood tests taken over the life course of tens of thousands of aging mice to predict an animal's future health state from its current state. The artificial neural network accurately projected the health condition of an aging mouse using a single variable, termed the dynamic frailty indicator (dFI), which characterises the damage an animal accumulates throughout life [1].
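The core idea of projecting a future health state from the current one can be illustrated with a toy one-step predictor. This is not the paper's neural network or its dFI definition; the synthetic "frailty" trajectory and the 5%-per-step growth rate below are invented purely for the sketch:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Synthetic frailty-like scalar that grows ~5% per time step.
traj = [0.1 * (1.05 ** t) for t in range(20)]

# Fit a one-step predictor: frailty(t+1) ~= a * frailty(t) + b.
a, b = fit_linear(traj[:-1], traj[1:])
```

On this toy trajectory the fitted slope recovers the underlying growth factor (a ≈ 1.05, b ≈ 0), which is the flavor of "learning the dynamics of aging from longitudinal data" that the study pursues at far larger scale.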
Dec 1, 2022
We built an algorithm that predicts the length of court sentences — could AI play a role in the justice system?
Posted by Saúl Morales Rodriguéz in categories: information science, law, robotics/AI
Artificial intelligence could help create transparency and consistency in the legal system – our model shows how.
Nov 30, 2022
In reinforcement learning, slower networks can learn faster
Posted by Dan Kummer in categories: entertainment, information science
We then tested the new algorithms, called DQN with proximal updates (DQN Pro) and Rainbow Pro, on a standard set of 55 Atari games. The graph of the results shows that the Pro agents outperform their counterparts; the basic DQN agent is able to obtain human-level performance after 120 million interactions with the environment (frames), and Rainbow Pro achieves a 40% relative improvement over the original Rainbow agent.
Further, to ensure that proximal updates do in fact result in smoother and slower parameter changes, we measure the norm differences between consecutive DQN solutions. We expect the magnitude of our updates to be smaller when using proximal updates. In the graphs below, we confirm this expectation on the four different Atari games tested.
Overall, our empirical and theoretical results support the claim that when optimizing for a new solution in deep RL, it is beneficial for the optimizer to gravitate toward the previous solution. More importantly, we see that simple improvements in deep-RL optimization can lead to significant positive gains in the agent’s performance. We take this as evidence that further exploration of optimization algorithms in deep RL would be fruitful.
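The idea of gravitating toward the previous solution can be sketched as a gradient step on the loss plus a proximal penalty that pulls the parameters back toward an anchor (the previous solution). This is an illustrative simplification, not the DQN Pro implementation; the function name and coefficients are assumptions:

```python
def proximal_sgd_step(theta, grad, anchor, lr=0.1, prox_coeff=0.5):
    """One gradient step on L(theta) + (prox_coeff/2)*||theta - anchor||^2.

    The extra term's gradient, prox_coeff * (theta - anchor), biases the
    update toward the anchor parameters, yielding smaller, smoother
    parameter changes between consecutive solutions.
    """
    return [t - lr * (g + prox_coeff * (t - a))
            for t, g, a in zip(theta, grad, anchor)]
```

With `anchor` set to the current parameters the penalty vanishes and this reduces to plain SGD; with an older anchor, the step is damped toward it, which is the "smaller norm differences between consecutive solutions" effect the authors measure.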
Nov 30, 2022
This Artificial Intelligence (AI) Model Knows How to Detect Novel Objects During Object Detection
Posted by Kelvin Dafiaghor in categories: climatology, information science, robotics/AI
Object detection has been an important task in the computer vision domain in recent decades. The goal is to detect instances of objects, such as humans, cars, etc., in digital images. Hundreds of methods have been developed to answer a single question: What objects are where?
Traditional methods tried to answer this question by extracting hand-crafted features, such as edges and corners, from the image. Most of these approaches used a sliding-window scheme: they repeatedly checked small parts of the image at different scales to see whether any of them contained the object of interest. This was very time-consuming, and even a slight change in object shape, lighting, etc., could cause the algorithm to miss the object.
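The sliding-window scheme is easy to sketch: enumerate every window position over the image grid and score each window with some detector. The tiny binary "image" and threshold detector below are invented for illustration:

```python
def sliding_windows(width, height, win_w, win_h, stride):
    """Yield the top-left corner of every window position on the grid."""
    for y in range(0, height - win_h + 1, stride):
        for x in range(0, width - win_w + 1, stride):
            yield x, y

# A 6x6 "image" containing a bright 2x2 patch at (2, 2).
img = [[0] * 6 for _ in range(6)]
for y in (2, 3):
    for x in (2, 3):
        img[y][x] = 1

def score(x, y, w, h):
    """Detector stand-in: summed intensity inside the window."""
    return sum(img[yy][xx] for yy in range(y, y + h) for xx in range(x, x + w))

# Keep windows whose score clears a threshold (fully covering the patch).
hits = [(x, y) for x, y in sliding_windows(6, 6, 2, 2, 1)
        if score(x, y, 2, 2) >= 4]
```

Even this toy scans 25 window positions for one scale of one small image; real detectors had to repeat this over many scales and positions, which is why the approach was so slow and brittle.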
Then there came the deep learning era. With the increasing capability of computer hardware and the introduction of large-scale datasets, it became possible to exploit the advancement in the deep learning domain to develop a reliable and robust object detection algorithm that could work in an end-to-end manner.
Nov 29, 2022
Quantum Annealing Pioneer D-Wave Introduces Expanded Hybrid Solver
Posted by Quinn Sena in categories: computing, information science, quantum physics
D-Wave Systems, a pioneer in quantum annealing-based computing, today announced significant upgrades to its constrained quadratic model (CQM) hybrid solver that, the company said, make it easier to use and able to tackle much larger problems. The model can now handle optimization problems with up to 1 million variables (including continuous variables) and 100,000 constraints. In addition, D-Wave has introduced a “new [pre-solver] set of fast classical algorithms that reduces the size of the problem and allows for larger models to be submitted to the hybrid solver.”
While talk of using hybrid quantum-classical solutions has intensified recently in the gate-based quantum computer developer community, D-Wave has actively explored hybrid approaches for its quantum annealing computers for some time. It introduced a hybrid solver service (HSS) as part of its Leap web-access portal and Ocean SDK development kit in 2020. The broad hybrid idea is to use classical compute resources where they make sense – for example, GPUs perform matrix multiplication faster – and quantum resources where they add benefit.
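To make "constrained quadratic model" concrete, here is a toy CQM (quadratic objective over binary variables plus a linear constraint) solved by brute force. This is a stand-in for what the hybrid solver does at million-variable scale, not D-Wave's API; in practice one would build the model with D-Wave's Ocean SDK rather than enumerate:

```python
from itertools import product

# Toy CQM over binary variables x0, x1, x2:
#   minimize   -x0 - 2*x1 - 3*x2 + 2*x0*x2   (quadratic objective)
#   subject to  x0 + x1 + x2 <= 2            (linear constraint)
def objective(x):
    return -x[0] - 2 * x[1] - 3 * x[2] + 2 * x[0] * x[2]

def feasible(x):
    return sum(x) <= 2

# Brute force is fine for 3 binaries (8 candidates); hybrid solvers exist
# precisely because this blows up exponentially with problem size.
best = min((x for x in product((0, 1), repeat=3) if feasible(x)),
           key=objective)
```

The constraint rules out taking all three variables, so the optimum picks x1 and x2 while avoiding the positive x0·x2 coupling.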
Nov 28, 2022
Researchers publish 31,618 molecules with potential for energy storage in batteries
Posted by Shubham Ghosh Roy in categories: chemistry, information science, robotics/AI, supercomputing
Scientists from the Dutch Institute for Fundamental Energy Research (DIFFER) have created a database of 31,618 molecules that could potentially be used in future redox-flow batteries. These batteries hold great promise for energy storage. Among other things, the researchers used artificial intelligence and supercomputers to identify the molecules’ properties. Today, they publish their findings in the journal Scientific Data.
In recent years, chemists have designed hundreds of molecules that could potentially be useful in flow batteries for energy storage. It would be wonderful, researchers from DIFFER in Eindhoven (the Netherlands) imagined, if the properties of these molecules were quickly and easily accessible in a database. The problem, however, is that for many molecules the properties are not known. Examples of molecular properties are redox potential and water solubility. Those are important since they are related to the power generation capability and energy density of redox flow batteries.
To find out the still-unknown properties of molecules, the researchers performed four steps. First, they used a desktop computer and smart algorithms to create thousands of virtual variants of two types of molecules. These molecule families, the quinones and aza aromatics, are good at reversibly accepting and donating electrons. That is important for batteries. The researchers fed the computer with backbone structures of 24 quinones and 28 aza-aromatics plus five different chemically relevant side groups. From that, the computer created 31,618 different molecules.
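The first, combinatorial step can be sketched with a simple enumeration: combine backbone structures with side groups at substitution sites. The identifiers and the two-site substitution scheme below are invented for illustration; the study's actual chemistry-aware scheme yields 31,618 unique molecules after deduplication:

```python
from itertools import product

# Hypothetical stand-ins for the 24 quinone and 28 aza-aromatic backbones
# and the 5 chemically relevant side groups used in the study.
backbones = ([f"quinone_{i}" for i in range(24)]
             + [f"aza_aromatic_{i}" for i in range(28)])
side_groups = ["R1", "R2", "R3", "R4", "R5"]

# Decorate each backbone at two substitution sites with any side group;
# a set removes duplicates (none arise in this simplified scheme).
variants = {(b, s1, s2)
            for b in backbones
            for s1, s2 in product(side_groups, repeat=2)}
```

Even this simplified scheme produces 52 × 25 = 1,300 candidates from a handful of building blocks, which shows how quickly virtual chemical libraries grow before any property calculation begins.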
Nov 28, 2022
Machine-Learning Model Reveals Protein-Folding Physics
Posted by Saúl Morales Rodriguéz in categories: biological, information science, physics, robotics/AI
An algorithm that already predicts how proteins fold might also shed light on the physical principles that dictate this folding.
Proteins control every cell-level aspect of life, from immunity to brain activity. They are encoded by long sequences of compounds called amino acids that fold into large, complex 3D structures. Computational algorithms can model the physical amino-acid interactions that drive this folding [1]. But determining the resulting protein structures has remained challenging. In a recent breakthrough, a machine-learning model called AlphaFold [2] predicted the 3D structure of proteins from their amino-acid sequences. Now James Roney and Sergey Ovchinnikov of Harvard University have shown that AlphaFold has learned how to predict protein folding in a way that reflects the underlying physical amino-acid interactions [3]. This finding suggests that machine learning could guide the understanding of physical processes too complex to be accurately modeled from first principles.
Predicting the 3D structure of a specific protein is difficult because of the sheer number of ways in which the amino-acid sequence could fold. AlphaFold can start its computational search for the likely structure from a template (a known structure for similar proteins). Alternatively, and more commonly, AlphaFold can use information about the biological evolution of amino-acid sequences in the same protein family (proteins with similar functions that likely have comparable folds). This information is helpful because consistent correlated evolutionary changes in pairs of amino acids can indicate that these amino acids directly interact, even though they may be far in sequence from each other [4, 5]. Such information can be extracted from the multiple sequence alignments (MSAs) of protein families, determined from, for example, evolutionary variations of sequences across different biological species.
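The co-evolution signal described above can be illustrated with a classic, much simpler score than anything inside AlphaFold: the mutual information between two columns of an MSA. The five-sequence toy alignment below is invented; columns 0 and 1 vary together (mimicking directly interacting residues) while column 3 never varies:

```python
from collections import Counter
from math import log

# Toy MSA: 5 aligned sequences, 4 columns.
msa = [
    "ACDA",
    "ACDA",
    "GTDA",
    "GTEA",
    "ACEA",
]

def mutual_information(col_i, col_j, alignment):
    """Simple co-evolution score: mutual information between two columns.

    High values mean the columns' residues change together across
    species, hinting at a direct structural interaction.
    """
    n = len(alignment)
    pi = Counter(s[col_i] for s in alignment)
    pj = Counter(s[col_j] for s in alignment)
    pij = Counter((s[col_i], s[col_j]) for s in alignment)
    return sum((c / n) * log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())
```

Here columns 0 and 1 score well above zero while the invariant column 3 scores zero against everything. Real contact-prediction pipelines use far more robust statistics (e.g. corrections for shared ancestry), but the underlying intuition — correlated columns suggest contacting residues — is the same.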
Nov 28, 2022
AI invents millions of materials that don’t yet exist
Posted by Kelvin Dafiaghor in categories: information science, robotics/AI
UC San Diego nanoengineering professor Shyue Ping Ong described M3GNet as “an AlphaFold for materials”, referring to the breakthrough AI algorithm built by Google’s DeepMind that can predict protein structures.
“Similar to proteins, we need to know the structure of a material to predict its properties,” said Professor Ong.
“We truly believe that the M3GNet architecture is a transformative tool that can greatly expand our ability to explore new material chemistries and structures.”