Archive for the ‘information science’ category: Page 151

Apr 1, 2021

ReRAM Machine Learning Embraces Variability

Posted in categories: information science, robotics/AI

Algorithms may be key to effectively using ReRAM devices in edge-learning systems, turning a ReRAM disadvantage to good use.

Apr 1, 2021

Selective time-dependent changes in activity and cell-specific gene expression in human postmortem brain

Posted in categories: biotech/medical, information science, neuroscience

As brain activity-dependent human genes are of great importance in human neuropsychiatric disorders, we also examined the expression of these genes in postmortem RNAseq databases from patients suffering from various neurological and psychiatric disorders (Table 1). Datasets were chosen based on similarities in tissue processing and RNAseq methodology to our own protocol. We performed a principal component analysis (PCA) comparing our fresh brain samples with postmortem brain samples from healthy, Parkinson’s, schizophrenia, Huntington’s, and autism brains, using the top 500 brain activity-dependent genes that showed the greatest reduction in the healthy postmortem samples. The PCA revealed a significant separation between the four fresh samples and the postmortem samples, independent of whether the fresh tissue was from epileptic (high activity, H) or non-epileptic (low activity, L) brain regions (Fig. 2J). This further demonstrates a selective reduction of activity-dependent genes in postmortem brain, independent of whether the underlying tissue is electrically active.
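
For readers who want to reproduce this kind of analysis, the sketch below shows a generic version of the PCA step, assuming a genes-by-samples expression matrix and a list of the top activity-dependent genes. The variable names and preprocessing are illustrative assumptions, not the authors’ actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_activity_genes(expr, gene_names, activity_genes, n_components=2):
    """Project samples onto PCs computed from activity-dependent genes only.

    expr: genes x samples matrix of normalized expression (hypothetical input).
    """
    keep = set(activity_genes)
    idx = [i for i, g in enumerate(gene_names) if g in keep]
    X = expr[idx, :].T                                   # samples x genes
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)    # z-score each gene
    return PCA(n_components=n_components).fit_transform(X)
```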

The sudden removal of brain tissue from a living person in many ways mimics the catastrophic event that occurs with a hypoxic brain injury or a traumatic death with exsanguination. The human brain has high energy needs, estimated to be 10 times those of other tissues (21). To understand how the postmortem interval (PMI) selectively affects some genes and not others in human neocortex, we performed RNAseq and histological analyses on cortical brain tissue as a function of time, from 0–24 h at 24 °C, to simulate a postmortem interval. Neuropathological examination of the tissue used for this study showed a normal-appearing cortical pattern with no histopathologic abnormalities. RNAseq analysis showed a loss of brain activity-dependent genes, which were three times more likely to be degraded than expected by chance, compared with more stable housekeeping genes (Table 2). The threshold for detecting activity-dependent genes was related to the probability of being affected by the PMI: the higher a gene’s relative expression, the more it was enriched in the population of genes affected by the PMI. These findings confirm that genes involved in brain activity are more prone to degradation during the PMI.
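
The “three times more than chance” comparison is, at heart, an enrichment test on a 2×2 contingency table. A minimal sketch follows, with made-up counts rather than the paper’s data:

```python
from scipy.stats import fisher_exact

def degradation_enrichment(deg_activity, stable_activity, deg_other, stable_other):
    """Test whether activity-dependent genes are over-represented among
    genes degraded during the simulated PMI."""
    table = [[deg_activity, stable_activity],
             [deg_other, stable_other]]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return odds_ratio, p_value

# Made-up counts chosen so the odds ratio comes out near 3, matching the
# "three times more than chance" observation in spirit only.
print(degradation_enrichment(150, 350, 500, 3500))
```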

One possible explanation for the selective loss of activity-dependent genes could relate to the stability of various cell populations during the simulated PMI. To implicate specific cell populations that could be responsible for the reduction of genes during the simulated PMI, we used a clustering algorithm as we have previously described (9). We found that 1427 genes (including 71% of the known brain activity-dependent genes) could be clustered across the seven time points of the simulated PMI. From these, we used AllegroMcode to identify two main clusters. One cluster of 317 rapidly declining genes was predicted to be neuronal and strongly overlapped with the activity-dependent genes. A second cluster of 474 genes was predicted to be glial, including astrocytes and microglia (Fig. 3A). Remarkably, as expression in the neuronal cluster rapidly fell, there was a reciprocal and dramatic increase in expression of the glial cluster (Fig. 3B).
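
A generic way to recover such time-course clusters is to z-score each gene’s trajectory across the seven time points and cluster the resulting profiles. The sketch below uses k-means as a stand-in; it is an assumed simplification, not the AllegroMcode pipeline the authors actually used.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_trajectories(profiles, n_clusters=2, seed=0):
    """profiles: genes x time-points expression matrix (seven PMI time points).
    Returns a cluster label per gene."""
    z = (profiles - profiles.mean(axis=1, keepdims=True)) / \
        (profiles.std(axis=1, keepdims=True) + 1e-9)     # compare shapes, not levels
    return KMeans(n_clusters=n_clusters, n_init=10,
                  random_state=seed).fit_predict(z)
```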

Mar 27, 2021

DARPA Hopes to Improve Computer Vision in ‘Third Wave’ of AI Research

Posted in categories: information science, military, robotics/AI

The military’s primary advanced research shop wants to lead the “third wave” of artificial intelligence and is looking at new methods of visually tracking objects that use significantly less power while producing results that are 10 times more accurate.

The Defense Advanced Research Projects Agency, or DARPA, has been instrumental in many of the most important breakthroughs in modern technology—from the first computer networks to early AI research.

“DARPA-funded R&D enabled some of the first successes in AI, such as expert systems and search, and more recently has advanced machine learning algorithms and hardware,” according to a notice for an upcoming opportunity.

Mar 26, 2021

Reinforcement learning with artificial microswimmers

Posted in categories: biological, chemistry, information science, mathematics, particle physics, policy, robotics/AI

Artificial microswimmers that can replicate the complex behavior of active matter are often designed to mimic the self-propulsion of microscopic living organisms. However, compared with their living counterparts, artificial microswimmers have a limited ability to adapt to environmental signals or to retain a physical memory to yield optimized emergent behavior. Different from macroscopic living systems and robots, both microscopic living organisms and artificial microswimmers are subject to Brownian motion, which randomizes their position and propulsion direction. Here, we combine real-world artificial active particles with machine learning algorithms to explore their adaptive behavior in a noisy environment with reinforcement learning. We use real-time control of self-thermophoretic active particles to demonstrate the solution of a simple standard navigation problem under the inevitable influence of Brownian motion at these length scales. We show that, with external control, collective learning is possible. Concerning learning under noise, we find that noise decreases the learning speed, modifies the optimal behavior, and also increases the strength of the decisions made. As a consequence of time delay in the feedback loop controlling the particles, an optimum velocity, reminiscent of optimal run-and-tumble times of bacteria, is found for the system, which is conjectured to be a universal property of systems exhibiting delayed response in a noisy environment.
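
As a rough illustration of the physics at play, the toy simulation below steers a 2D self-propelled particle toward a target using position information that arrives `delay` steps late, while rotational Brownian noise randomizes each heading update. All parameters are illustrative assumptions; this is not the authors’ experimental control loop.

```python
import numpy as np

def steer_to_target(v=1.0, D_r=0.5, delay=5, dt=0.05, steps=5000, seed=0):
    """Feedback-steered active particle: the controller sees the position
    `delay` steps late; rotational noise perturbs every heading update."""
    rng = np.random.default_rng(seed)
    pos, target = np.array([-5.0, 0.0]), np.zeros(2)
    history = [pos.copy()] * (delay + 1)       # stale positions seen by control
    for step in range(steps):
        dx, dy = target - history[0]           # delayed measurement
        theta = np.arctan2(dy, dx)             # steer toward the target
        theta += np.sqrt(2 * D_r * dt) * rng.standard_normal()  # Brownian kick
        pos = pos + v * dt * np.array([np.cos(theta), np.sin(theta)])
        history = history[1:] + [pos.copy()]
        if np.linalg.norm(pos - target) < 0.1: # close enough: arrived
            return step
    return steps

print(steer_to_target())  # steps to arrival; vary v to probe the delay effect
```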

Living organisms adapt their behavior according to their environment to achieve a particular goal. Information about the state of the environment is sensed, processed, and encoded in biochemical processes in the organism to provide appropriate actions or properties. These learning or adaptive processes occur within the lifetime of a generation, over multiple generations, or over evolutionarily relevant time scales. They lead to specific behaviors of individuals and collectives. Swarms of fish or flocks of birds have developed collective strategies adapted to the existence of predators (1), and collective hunting may represent a more efficient foraging tactic (2). Birds learn how to use convective air flows (3). Sperm have evolved complex swimming patterns to explore chemical gradients in chemotaxis (4), and bacteria express specific shapes to follow gravity (5).

Inspired by these optimization processes, learning strategies that reduce the complexity of the physical and chemical processes in living matter to a mathematical procedure have been developed. Many of these learning strategies have been implemented in robotic systems (7–9). One particular framework is reinforcement learning (RL), in which an agent gains experience by interacting with its environment (10). The value of this experience relates to rewards (or penalties) connected to the states that the agent can occupy. The learning process then maximizes the cumulative reward for a chain of actions to obtain the so-called policy. This policy advises the agent which action to take. Recent computational studies, for example, reveal that RL can provide optimal strategies for the navigation of active particles through flows (11–13), the swarming of robots (14–16), the soaring of birds, or the development of collective motion (17).
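
A minimal tabular Q-learning sketch of this framework is shown below, with random action slips standing in for Brownian noise. The grid world, rewards, and hyperparameters are illustrative assumptions, not the paper’s setup.

```python
import numpy as np

N, GOAL = 8, (7, 7)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]    # right, left, down, up

def q_learning(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, noise=0.2, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N, N, len(ACTIONS)))          # state-action value table
    for _ in range(episodes):
        s = (0, 0)
        while s != GOAL:
            # epsilon-greedy choice of the intended action
            a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s]))
            # with probability `noise` the environment slips to a random move,
            # a crude stand-in for Brownian randomization of the heading
            move = ACTIONS[rng.integers(4)] if rng.random() < noise else ACTIONS[a]
            s2 = (min(max(s[0] + move[0], 0), N - 1),
                  min(max(s[1] + move[1], 0), N - 1))
            r = 1.0 if s2 == GOAL else -0.01    # reward reaching the goal
            Q[s][a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s][a])
            s = s2
    return Q                                    # argmax over the last axis = policy
```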

Mar 26, 2021

New imaging algorithm can spot fast-moving and rotating space junk

Posted in categories: information science, satellites

Technology could help prevent damage to satellites.

Mar 24, 2021

Crucial Milestone for Scalable Quantum Technology: 2D Array of Semiconductor Qubits That Functions as a Quantum Processor

Posted in categories: computing, information science, quantum physics

The heart of any computer, its central processing unit, is built using semiconductor technology, which is capable of putting billions of transistors onto a single chip. Now, researchers from the group of Menno Veldhorst at QuTech, a collaboration between TU Delft and TNO, have shown that this technology can be used to build a two-dimensional array of qubits to function as a quantum processor. Their work, a crucial milestone for scalable quantum technology, was published today (March 24, 2021) in Nature.

Quantum computers have the potential to solve problems that are impossible to address with classical computers. Whereas current quantum devices hold tens of qubits — the basic building block of quantum technology — a future universal quantum computer capable of running any quantum algorithm will likely consist of millions to billions of qubits. Quantum dot qubits hold the promise to be a scalable approach as they can be defined using standard semiconductor manufacturing techniques. Veldhorst: “By putting four such qubits in a two-by-two grid, demonstrating universal control over all qubits, and operating a quantum circuit that entangles all qubits, we have made an important step forward in realizing a scalable approach for quantum computation.”
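
To make the circuit idea concrete, here is a generic four-qubit state-vector sketch: a Hadamard plus a chain of CNOTs entangles all four qubits into a GHZ-like state. This illustrates what “a quantum circuit that entangles all qubits” can mean in the abstract; it is not a simulation of QuTech’s quantum-dot hardware or gate set.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I2 = np.eye(2)

def kron_all(*ops):
    """Tensor product of single-qubit operators, qubit 0 most significant."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=4):
    """CNOT on an n-qubit register, built as a permutation matrix."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for b in range(dim):
        bits = [(b >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[sum(bit << (n - 1 - q) for q, bit in enumerate(bits)), b] = 1
    return U

state = np.zeros(16)
state[0] = 1.0                                  # start in |0000>
state = kron_all(H, I2, I2, I2) @ state         # Hadamard on qubit 0
for c, t in [(0, 1), (1, 2), (2, 3)]:           # nearest-neighbor CNOT chain
    state = cnot(c, t) @ state
print(np.round(state, 3))  # equal amplitudes on |0000> and |1111>: a GHZ state
```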

Mar 24, 2021

Tiny swimming robots reach their target faster thanks to AI nudges

Posted in categories: information science, particle physics, robotics/AI

Swimming robots the size of bacteria can be knocked off course by particles in the fluid they are moving through, but an AI algorithm learns from feedback to get them to their target quickly.

Mar 23, 2021

‘Doodles of light’ in real time mark leap for holograms at home

Posted in categories: holograms, information science, supercomputing

Researchers from Tokyo Metropolitan University have devised and implemented a simplified algorithm for turning freely drawn lines into holograms on a standard desktop CPU. They dramatically cut down the computational cost and power consumption of algorithms that otherwise require dedicated hardware. It is fast enough to convert freehand writing into holographic lines in real time, and it produces crisp, clear images that meet industry standards. Potential applications include hand-written remote instructions superimposed on landscapes and workbenches.
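
For context, the classical point-source approach sketched below shows why such holograms are expensive to compute: every sampled point of every drawn line contributes a spherical wave to every hologram pixel. This is the standard baseline that simplified algorithms improve on, not the team’s method itself, and all parameters are illustrative.

```python
import numpy as np

def point_hologram(points, res=256, pitch=8e-6, wavelength=532e-9):
    """points: iterable of (x, y, z) in meters, z > 0; returns a phase hologram."""
    k = 2 * np.pi / wavelength
    xs = (np.arange(res) - res / 2) * pitch
    X, Y = np.meshgrid(xs, xs)
    field = np.zeros((res, res), dtype=complex)
    for x, y, z in points:                   # O(points x pixels): the bottleneck
        r = np.sqrt((X - x) ** 2 + (Y - y) ** 2 + z ** 2)
        field += np.exp(1j * k * r) / r      # spherical wave from each point
    return np.angle(field)                   # keep the phase-only hologram

# A short drawn stroke sampled into points 5 cm behind the hologram plane.
stroke = [(i * 1e-5, 0.0, 0.05) for i in range(-50, 50)]
holo = point_hologram(stroke)
```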

The potential applications of holography include important enhancements to vital, practical tasks: remote instructions for surgical procedures, electronic assembly on circuit boards, and directions projected onto landscapes for navigation. Making holograms available in a wide range of settings is vital to bringing this technology out of the lab and into daily life.

One of the major drawbacks of this state-of-the-art technology is the computational load of hologram generation. The image quality we have come to expect from 2D displays is prohibitive in 3D, requiring supercomputer-level number crunching to achieve. There is also the issue of power consumption. More widely available hardware, like the GPUs in gaming rigs, might overcome some of these issues with raw power, but the amount of electricity they use is a major impediment to mobile applications. Despite improvements in available hardware, the solution cannot be achieved by brute force.

Mar 22, 2021

Researchers’ algorithm designs soft robots that sense

Posted in categories: information science, robotics/AI

There are some tasks that traditional robots — the rigid and metallic kind — simply aren’t cut out for. Soft-bodied robots, on the other hand, may be able to interact with people more safely or slip into tight spaces with ease. But for robots to reliably complete their programmed duties, they need to know the whereabouts of all their body parts. That’s a tall task for a soft robot that can deform in a virtually infinite number of ways.

MIT researchers have developed an algorithm to help engineers design soft robots that collect more useful information about their surroundings. The deep-learning algorithm suggests an optimized placement of sensors within the robot’s body, allowing it to better interact with its environment and complete assigned tasks. The advance is a step toward the automation of robot design. “The system not only learns a given task, but also how to best design the robot to solve that task,” says Alexander Amini. “Sensor placement is a very difficult problem to solve. So, having this solution is extremely exciting.”
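
One generic way to formulate learned sensor placement is to train a task network jointly with a sparse importance weight per candidate sensor location, then keep only the highest-weight locations. The sketch below shows that formulation; it is an assumption-laden stand-in, not the MIT algorithm described in the paper.

```python
import torch
import torch.nn as nn

class SensorPlacement(nn.Module):
    """Jointly learn a task network and a soft selection weight per
    candidate sensor location (all sizes here are hypothetical)."""
    def __init__(self, n_candidates, n_outputs, hidden=64):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_candidates))
        self.net = nn.Sequential(nn.Linear(n_candidates, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_outputs))

    def forward(self, readings):             # readings: (batch, n_candidates)
        mask = torch.sigmoid(self.logits)    # soft sensor selection in [0, 1]
        return self.net(readings * mask), mask

def placement_loss(pred, target, mask, sparsity=1e-2):
    # Task error plus a penalty that drives unneeded sensors toward zero;
    # after training, keep the k locations with the largest mask values.
    return nn.functional.mse_loss(pred, target) + sparsity * mask.sum()
```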

The research will be presented at April’s IEEE International Conference on Soft Robotics and will be published in the journal IEEE Robotics and Automation Letters. Co-lead authors are Amini and Andrew Spielberg, both PhD students in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Other co-authors include MIT PhD student Lillian Chin and professors Wojciech Matusik and Daniela Rus.

Mar 21, 2021

Deep learning model advances how robots can independently grasp objects

Posted in categories: information science, robotics/AI

Robots still cannot perform everyday manipulation tasks, such as grasping or rearranging objects, with the same dexterity as humans. Brazilian scientists have now moved this research a step further by developing a new system that uses deep learning algorithms to improve a robot’s ability to independently detect how to grasp an object, known as autonomous robotic grasp detection.

In a paper published Feb. 24 in Robotics and Autonomous Systems, a team of engineers from the University of São Paulo addressed existing problems with the visual perception phase that occurs when a robot grasps an object. They created a model using deep learning neural networks that decreased the time a robot needs to process visual data, perceive an object’s location and successfully grasp it.

Deep learning is a subset of machine learning in which computer algorithms learn from data and improve automatically through experience. Inspired by the structure and function of the human brain, deep learning uses multilayered structures of algorithms called neural networks, which operate much like the human brain in identifying patterns and classifying different types of information. Deep learning models for vision are often based on convolutional neural networks, which specialize in analyzing visual imagery.
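
As a concrete, deliberately generic example of such a model, the sketch below maps an RGB image to a five-parameter grasp rectangle with a small convolutional network. The architecture and output encoding are illustrative assumptions, not the São Paulo group’s published design.

```python
import torch
import torch.nn as nn

class GraspCNN(nn.Module):
    """Map an RGB image to one grasp rectangle (x, y, width, height, angle)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))          # global pooling to one 64-d vector
        self.head = nn.Linear(64, 5)          # (x, y, w, h, angle)

    def forward(self, img):                   # img: (batch, 3, H, W)
        return self.head(self.features(img).flatten(1))

grasp = GraspCNN()(torch.randn(1, 3, 224, 224))  # one predicted grasp rectangle
```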