
Neuromorphic memory device simulates neurons and synapses

Researchers have reported a nano-sized neuromorphic memory device that emulates neurons and synapses simultaneously in a unit cell, another step toward the goal of neuromorphic computing, which is designed to rigorously mimic the human brain with semiconductor devices.

Neuromorphic computing aims to realize artificial intelligence (AI) by mimicking the mechanisms of the neurons and synapses that make up the human brain. Inspired by cognitive functions of the human brain that current computers cannot provide, neuromorphic devices have been widely investigated. However, current Complementary Metal-Oxide-Semiconductor (CMOS)-based neuromorphic circuits simply connect artificial neurons and synapses without synergistic interactions, and the concomitant implementation of neurons and synapses remains a challenge. To address these issues, a research team led by Professor Keon Jae Lee from the Department of Materials Science and Engineering implemented the biological working mechanisms of the brain by introducing neuron-synapse interactions in a single memory cell, rather than the conventional approach of electrically connecting artificial neuronal and synaptic devices.

The artificial synaptic devices previously studied were often used to accelerate parallel computations, much like commercial graphics cards, which differs clearly from the operational mechanism of the human brain. The research team instead implemented synergistic interactions between neurons and synapses in the neuromorphic memory device, emulating the mechanisms of the biological neural network. In addition, the developed neuromorphic device can replace complex CMOS neuron circuits with a single device, providing high scalability and cost efficiency.
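
The reported device is analog hardware, but the idea of co-locating a neuron and its synapse in one unit cell can be illustrated in software. Below is a minimal Python sketch of that concept, not the team's device physics: a leaky integrate-and-fire neuron whose synaptic weight lives in, and is updated by, the same cell through a simple spike-driven plasticity rule. All parameters and the plasticity rule are invented for illustration.

# Illustrative sketch of a single "unit cell" combining a leaky
# integrate-and-fire neuron with a plastic synaptic weight.
# Conceptual model only, not the reported device's physics.

import random

class NeuroSynapticCell:
    def __init__(self, weight=0.5, leak=0.9, threshold=1.0, lr=0.05):
        self.weight = weight        # synaptic strength stored in the cell
        self.potential = 0.0        # membrane potential of the neuron part
        self.leak = leak            # leak factor applied each time step
        self.threshold = threshold  # firing threshold
        self.lr = lr                # plasticity learning rate

    def step(self, spike_in):
        """Integrate one input spike (0 or 1); return 1 if the neuron fires."""
        self.potential = self.potential * self.leak + spike_in * self.weight
        fired = int(self.potential >= self.threshold)
        if fired:
            self.potential = 0.0    # reset after firing
        # Spike-driven plasticity: strengthen the weight when input and output
        # spikes coincide, weaken it slightly when the input goes unanswered.
        if spike_in:
            self.weight += self.lr if fired else -0.2 * self.lr
            self.weight = min(max(self.weight, 0.0), 1.0)
        return fired

if __name__ == "__main__":
    random.seed(0)
    cell = NeuroSynapticCell()
    outputs = [cell.step(int(random.random() < 0.6)) for _ in range(50)]
    print("output spikes:", sum(outputs), "final weight:", round(cell.weight, 3))

Because the weight update happens inside the same object that integrates and fires, the neuron's activity and the synapse's plasticity interact directly, which is the kind of coupling a single-cell device is meant to provide.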

Passwordless logins boost security for device and account access


With the increasing digitization of services across multiple industries, large corporations are pushing for new security measures to keep their customers’ documents and sensitive information secure. Among these measures are passwordless logins, with new authentication methods adding an extra layer of data protection.

The transition to passwordless logins is undeniable, with approximately 60% of large and global enterprises and 90% of midsize enterprises predicted to adopt passwordless methods in at least 50% of use cases, according to a recent Gartner study. This comes as no surprise, as security problems associated with password-only authentication are among the digital world’s biggest vulnerabilities. Consumers are often tempted to reuse passwords across different services due to the difficulty of managing so many passwords.
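
Most passwordless schemes replace the shared secret with public-key, challenge-response authentication: the service stores only a public key, and the user's device proves possession of the matching private key by signing a one-time challenge. The Python sketch below illustrates only that core idea using the cryptography package; the simplified register/login flow is an illustration, not any specific vendor's protocol (real deployments typically rely on WebAuthn/FIDO2 authenticators).

# Minimal sketch of public-key, challenge-response login, the core idea
# behind passwordless authentication. Illustrative only; real systems use
# WebAuthn/FIDO2 authenticators and attestation, not raw key handling.
# Requires: pip install cryptography

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device keeps the private key; the server stores only the public key.
device_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_key.public_key()

# Login, step 1: the server issues a fresh random challenge.
challenge = os.urandom(32)

# Login, step 2: the device signs the challenge; no password is ever transmitted.
signature = device_key.sign(challenge)

# Login, step 3: the server verifies the signature against the stored public key.
try:
    server_stored_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")

Because no reusable secret crosses the wire and the server holds only public keys, there is nothing equivalent to a password for an attacker to steal and replay.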

Google’s new AI can hear a snippet of song—and then keep on playing

A new AI system can create natural-sounding speech and music after being prompted with a few seconds of audio.

AudioLM, developed by Google researchers, generates audio that fits the style of the prompt, including complex sounds like piano music or people speaking, in a way that is almost indistinguishable from the original recording. The technique shows promise for speeding up the process of training AI to generate audio, and it could eventually be used to auto-generate music to accompany videos.
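
AudioLM's internals are described in Google's paper; at a high level, it treats audio as a sequence of discrete tokens and continues that sequence autoregressively from the prompt. The Python sketch below mimics only that general pattern: the tokenizer and "language model" here are toy stand-ins invented for illustration, not AudioLM's learned neural codec and Transformer components.

# Conceptual sketch of prompt-conditioned autoregressive audio generation.
# The tokenizer, model, and decoder are toy placeholders, not AudioLM's
# actual components.

import random

def tokenize(samples):
    """Toy 'codec': quantize each audio sample into one of 16 discrete tokens."""
    return [min(15, max(0, int((s + 1.0) * 8))) for s in samples]

def toy_language_model(context, n_new):
    """Toy 'LM': continue the token sequence with a biased random walk."""
    tokens = list(context)
    for _ in range(n_new):
        tokens.append(min(15, max(0, tokens[-1] + random.choice([-1, 0, 1]))))
    return tokens[len(context):]

def detokenize(tokens):
    """Inverse of the toy codec: map tokens back to audio-like samples."""
    return [t / 8.0 - 1.0 for t in tokens]

if __name__ == "__main__":
    random.seed(0)
    prompt_audio = [0.10, 0.20, 0.15, 0.30]   # stands in for a few seconds of audio
    prompt_tokens = tokenize(prompt_audio)
    continuation = toy_language_model(prompt_tokens, n_new=8)
    print(detokenize(prompt_tokens + continuation))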

A Trial Run for Smart Streaming Readouts

Jefferson Lab tests a next-generation data acquisition scheme

Nuclear physics experiments worldwide are becoming ever more data-intensive as researchers probe ever more deeply into the heart of matter. To get a better handle on the data, nuclear physicists are now turning to artificial intelligence and machine learning methods to help sift through the torrent in real time.

A recent test of two systems that employ such methods at the U.S. Department of Energy's Thomas Jefferson National Accelerator Facility found that they can, indeed, enable real-time processing of raw data. Such systems could make data analysis faster and more efficient, while also preserving more of the original data for future analysis than conventional systems do. An article describing this work was recently published in The European Physical Journal Plus.
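
The article covers the two tested systems in detail; generically, a streaming readout scores each raw event as it arrives and decides immediately what to keep, rather than recording everything for a later batch pass. The Python sketch below shows only that pattern; the event format, scoring rule, and threshold are invented for illustration and are not Jefferson Lab's actual data acquisition software.

# Generic sketch of a streaming readout filter: score events in real time
# and keep the interesting ones as they arrive. Illustrative only.

import random

def detector_stream(n_events):
    """Stand-in for a live detector feed producing raw events."""
    for i in range(n_events):
        yield {"id": i, "energy": random.expovariate(1.0), "hits": random.randint(1, 40)}

def score(event):
    """Stand-in for a trained model: a simple hand-tuned interest score."""
    return 0.7 * event["energy"] + 0.02 * event["hits"]

def stream_filter(events, threshold=1.0):
    """Yield interesting events immediately, with no offline batch pass."""
    for event in events:
        if score(event) >= threshold:
            yield event

if __name__ == "__main__":
    random.seed(0)
    kept = list(stream_filter(detector_stream(1000)))
    print(f"kept {len(kept)} of 1000 events in real time")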

X·terroir

The advanced Computer Vision and Artificial Intelligence technologies in X·TERROIR allow enologists to make optimal decisions about the wine destination of grapes.

X·TERROIR technology makes cost-effective phenotypic profiling of every vine in the vineyard possible, an exponential increase over what current technology allows. The more information enologists have to work their magic, the more quality and value they can extract from the vineyard.

Transcript:

To the naked eye, this vineyard looks homogeneous. One might assume that a vineyard like this will produce grapes that are fairly uniform in aromatic profile.
The reality is very different.

The grapes from this vine will produce a different wine than grapes from vines only 50 meters away. Plant genomics, varying soil types, cultural interventions, micro-climate, and even disease result in a substantial variety of aromatic expressions in a single vineyard.

So the grapes this vine produces will embody the unique complexity of the aromatic expressions of the soil, climate, and cultural interventions of its own micro-terroir.

The End of Programming

The end of classical Computer Science is coming, and most of us are dinosaurs waiting for the meteor to hit.

I came of age in the 1980s, programming personal computers like the Commodore VIC-20 and Apple ][e at home. Going on to study Computer Science in college and ultimately getting a PhD at Berkeley, the bulk of my professional training was rooted in what I will call "classical" CS: programming, algorithms, data structures, systems, programming languages. In Classical Computer Science, the ultimate goal is to reduce an idea to a program written by a human — source code in a language like Java or C++ or Python. Every idea in Classical CS — no matter how complex or sophisticated — from a database join algorithm to the mind-bogglingly obtuse Paxos consensus protocol — can be expressed as a human-readable, human-comprehensible program.

When I was in college in the early '90s, we were still in the depths of the AI Winter, and AI as a field was likewise dominated by classical algorithms. My first research job at Cornell was working with Dan Huttenlocher, a leader in the field of computer vision (and now Dean of the MIT School of Computing). In Dan's PhD-level computer vision course in 1995 or so, we never once discussed anything resembling deep learning or neural networks—it was all classical algorithms like Canny edge detection, optical flow, and Hausdorff distances. Deep learning was in its infancy, not yet considered mainstream AI, let alone mainstream CS.
