
When opportunity knocks, open the door: No one has taken heed of that adage like Nvidia, which has transformed itself from a company focused on catering to the needs of video gamers to one at the heart of the artificial-intelligence revolution. In 2001, no one predicted that the same processor architecture developed to draw realistic explosions in 3D would be just the thing to power a renaissance in deep learning. But when Nvidia realized that academics were gobbling up its graphics cards, it responded, supporting researchers with the launch of the CUDA parallel computing software framework in 2006.

Since then, Nvidia has been a big player in the world of high-end embedded AI applications, where teams of highly trained (and paid) engineers have used its hardware for things like autonomous vehicles. Now the company claims to be making it easy for even hobbyists to use embedded machine learning, with its US $100 Jetson Nano dev kit, which was originally launched in early 2019 and rereleased this March with several upgrades. So, I set out to see just how easy it was: Could I, for example, quickly and cheaply make a camera that could recognize and track chosen objects?
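In practice, a first attempt looks something like the sketch below, modeled loosely on NVIDIA's open-source jetson-inference examples: it grabs frames from the Nano's camera, runs a pretrained detection network on each one, and renders the annotated result. The module names, the “csi://0” camera URI, and the “ssd-mobilenet-v2” model identifier are assumptions that depend on which JetPack release and library version are installed.

```python
import jetson.inference
import jetson.utils

# Load a pretrained object-detection network and open the camera and display.
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")       # use "/dev/video0" for a USB webcam
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()                         # grab the next video frame
    detections = net.Detect(img)                   # run inference; boxes are drawn onto img
    display.Render(img)                            # show the annotated frame
    display.SetStatus("{:d} objects | {:.0f} FPS".format(len(detections), net.GetNetworkFPS()))
```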

Embedded machine learning is evolving rapidly. In April 2019, Hands On looked at Google’s Coral Dev AI board, which incorporates the company’s Edge tensor processing unit (TPU), and in July 2019, IEEE Spectrum featured Adafruit’s software library, which lets even a handheld game device do simple speech recognition. The Jetson Nano is closer to the Coral Dev board: with its 128 parallel processing cores it is, like the Coral, powerful enough to handle a real-time video feed, and both have Raspberry Pi–style 40-pin GPIO connectors for driving external hardware.

Ion-based technology may enable energy-efficient simulations of the brain’s learning process for neural-network AI systems.

Teams around the world are building ever more sophisticated artificial intelligence systems of a type called neural networks, designed in some ways to mimic the wiring of the brain, for carrying out tasks such as computer vision and natural language processing.

Using state-of-the-art semiconductor circuits to simulate neural networks requires large amounts of memory and high power consumption. Now, an MIT team has made strides toward an alternative system, which uses physical, analog devices that can much more efficiently mimic brain processes.

Researchers from North Carolina State University have discovered that teaching physics to neural networks enables those networks to better adapt to chaos within their environment. The work has implications for improved artificial intelligence (AI) applications ranging from medical diagnostics to automated drone piloting.

Neural networks are an advanced type of AI loosely based on the way that our brains work. Our natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks mimic this behavior by adjusting numerical weights and biases during training sessions to minimize the difference between their actual and desired outputs. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, making a guess about whether each photo is of a dog, seeing how far off it is, and then adjusting its weights and biases until its guesses are closer to reality.
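As a toy illustration of that guess-and-adjust loop (and not the NC State team's code), the sketch below trains a single artificial “neuron” on made-up two-number inputs standing in for photo features; the labels, learning rate, and number of training passes are all invented for the example.

```python
import numpy as np

# Toy binary classifier: nudge weights and a bias by gradient descent so the
# network's guesses move closer to the desired "dog" / "not dog" labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # made-up features standing in for photos
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # 1 = "dog", 0 = "not dog" (toy labels)

w = np.zeros(2)   # weights
b = 0.0           # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    guess = sigmoid(X @ w + b)         # the network's current guess for each photo
    error = guess - y                  # how far off it is
    w -= 0.1 * (X.T @ error) / len(y)  # adjust weights toward reality
    b -= 0.1 * error.mean()            # adjust bias toward reality

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```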

The drawback to this approach is something called “chaos blindness,” an inability to predict or respond to chaos in a system. Conventional AI is chaos blind. But researchers from NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) have found that incorporating a Hamiltonian function into neural networks better enables them to “see” chaos within a system and adapt accordingly.
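Published Hamiltonian neural networks learn a single energy-like function H(q, p) and read the system’s motion off Hamilton’s equations, dq/dt = ∂H/∂p and dp/dt = -∂H/∂q. The sketch below is a minimal PyTorch illustration of that idea for an ideal spring, not NAIL’s implementation; the network size, training data, and optimizer settings are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

# A small network that learns a scalar Hamiltonian H(q, p); predicted dynamics
# follow Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
class HamiltonianNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.H = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def time_derivatives(self, q, p):
        x = torch.stack([q, p], dim=-1).requires_grad_(True)
        H = self.H(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        dHdq, dHdp = dH[..., 0], dH[..., 1]
        return dHdp, -dHdq  # dq/dt, dp/dt

# Toy data: an ideal spring with H = (q^2 + p^2) / 2, so dq/dt = p and dp/dt = -q.
q, p = torch.randn(256), torch.randn(256)
true_dq, true_dp = p, -q

model = HamiltonianNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    pred_dq, pred_dp = model.time_derivatives(q, p)
    loss = ((pred_dq - true_dq) ** 2 + (pred_dp - true_dp) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")
```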

How can we train self-driving vehicles to have a deeper awareness of the world around them? Can computers learn from past experiences to recognize future patterns that can help them safely navigate new and unpredictable situations?

These are some of the questions researchers from the AgeLab at the MIT Center for Transportation and Logistics and the Toyota Collaborative Safety Research Center (CSRC) are trying to answer by sharing an innovative new open dataset called DriveSeg.

Through the release of DriveSeg, MIT and Toyota are working to advance research in autonomous driving systems that, much like human perception, perceive the driving environment as a continuous flow of visual information.

Surrogate models supported by neural networks can perform as well as, and in some ways better than, computationally expensive simulators and could lead to new insights in complicated physics problems such as inertial confinement fusion (ICF), Lawrence Livermore National Laboratory (LLNL) scientists reported.

In a paper published by the Proceedings of the National Academy of Sciences (PNAS), LLNL researchers describe the development of a deep learning-driven Manifold & Cyclically Consistent (MaCC) surrogate model incorporating a multi-modal neural network capable of quickly and accurately emulating complex scientific processes, including the high-energy density physics involved in ICF.

The research team applied the model to ICF implosions performed at the National Ignition Facility (NIF), in which a computationally expensive numerical simulator is used to predict the energy yield of a target imploded by shock waves produced by the facility’s high-energy laser. Comparing the results of the neural network-backed surrogate to the existing simulator, the researchers found the surrogate could adequately replicate the simulator, and significantly outperformed the current state-of-the-art in surrogate models across a wide range of metrics.
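Stripped to its essentials, a neural-network surrogate is trained on pairs of simulator inputs and outputs and then stands in for the simulator at a tiny fraction of the cost. The sketch below shows that basic recipe with a cheap analytic function playing the role of the expensive simulator; the real MaCC model is multi-modal and far more elaborate, and every function and hyperparameter here is an assumption made purely for illustration.

```python
import torch
import torch.nn as nn

# Stand-in for an expensive physics simulator mapping design parameters to a
# scalar "yield". In practice each call might take hours on a supercomputer;
# here it is a cheap analytic function used only to generate example data.
def simulator(x):
    return torch.sin(3 * x[:, :1]) * torch.exp(-x[:, 1:2] ** 2) + 0.1 * x[:, 2:3]

# Build a training set of (input parameters, simulator output) pairs.
X = torch.rand(5000, 3)
Y = simulator(X)

# A small fully connected surrogate trained to reproduce the simulator's
# input-output map.
surrogate = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(2000):
    idx = torch.randint(0, len(X), (256,))
    loss = nn.functional.mse_loss(surrogate(X[idx]), Y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained surrogate now answers in microseconds what the simulator answers slowly.
X_test = torch.rand(1000, 3)
err = (surrogate(X_test) - simulator(X_test)).abs().mean()
print(f"mean absolute emulation error: {err.item():.4f}")
```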

Amazon Web Services recently had to defend against a DDoS attack with a peak traffic volume of 2.3 Tbps, the largest ever recorded, ZDNet reports. Detailing the attack in its Q1 2020 threat report, Amazon said that the attack occurred back in February, and was mitigated by AWS Shield, a service designed to protect customers of Amazon’s on-demand cloud computing platform from DDoS attacks, as well as from bad bots and application vulnerabilities. The company did not disclose the target or the origin of the attack.

To put that number into perspective, ZDNet notes that the largest DDoS attack previously on record came in March 2018, when NetScout Arbor mitigated a 1.7 Tbps attack. The month before that, GitHub disclosed that it had been hit by an attack that peaked at 1.35 Tbps.

The discovery that led Nir Shavit to start a company came about the way most discoveries do: by accident. The MIT professor was working on a project to reconstruct a map of a mouse’s brain and needed some help from deep learning. Not knowing how to program graphics cards, or GPUs, the most common hardware choice for deep-learning models, he opted instead for a central processing unit, or CPU, the most generic computer chip found in any average laptop.

“Lo and behold,” Shavit recalls, “I realized that a CPU can do what a GPU does—if programmed in the right way.”

This insight is now the basis for his startup, Neural Magic, which launched its first suite of products today. The idea is to allow any company to deploy a deep-learning model without the need for specialized hardware. It would not only lower the costs of deep learning but also make AI more widely accessible.
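For context, ordinary deep-learning frameworks can already run models on a CPU; the question is how fast. The sketch below times a small convolutional network on CPU cores with stock PyTorch as a generic illustration. It is not Neural Magic’s engine, which is reported to squeeze much more out of the CPU by exploiting sparsity in the model and the processor’s large caches.

```python
import time
import torch
import torch.nn as nn

# A small convolutional network, run entirely on the CPU.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),
).eval()

torch.set_num_threads(8)              # spread the math kernels over CPU cores
batch = torch.randn(16, 3, 224, 224)  # a batch of dummy images

with torch.no_grad():
    start = time.perf_counter()
    for _ in range(20):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"CPU inference: {20 * 16 / elapsed:.1f} images/s")
```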

Qualcomm today announced its RB5 reference design platform for the robotics and intelligent drone ecosystem. As the field of robotics continues to evolve toward more advanced capabilities, Qualcomm’s latest platform should help drive the next step in that evolution with intelligence and connectivity. The company has combined its 5G connectivity and AI-focused processing with a flexible peripherals architecture based on what it calls “mezzanine” modules. The new Qualcomm RB5 platform promises to accelerate the robotics design and development process with a full suite of hardware, software, and development tools. The company is making big promises for the RB5 platform, and if current levels of ecosystem engagement are any indicator, the platform will have ample opportunities to prove itself.

The platform targets robot and drone designs for enterprise, industrial, and professional service applications, and at its heart is Qualcomm’s QRB5165 system-on-chip (SoC) processor. The QRB5165 is derived from the Snapdragon 865 processor used in mobile devices but is customized for robotic applications, with expanded camera and image signal processor (ISP) capabilities for additional camera sensors, higher industrial-grade temperature and security ratings, and a non-package-on-package (non-PoP) configuration option.

To bring capable artificial intelligence and machine learning to bear in these applications, the chip is rated for 15 Tera Operations Per Second (TOPS) of AI performance. Additionally, because it is critical that robots and drones can “see” their surroundings, the architecture supports up to seven concurrent cameras and includes a dedicated computer vision engine meant to provide enhanced video analytics. Given the sheer amount of information the platform can generate, process, and analyze, it also supports a communications module offering 4G and 5G connectivity. In particular, the addition of 5G gives robots and drones high-speed, low-latency data connectivity.