Archive for the ‘robotics/AI’ category
Jun 21, 2020
The case for self-explainable AI
Posted by Genevieve Klien in categories: biotech/medical, information science, robotics/AI
For instance, suppose a neural network has labeled the image of a skin mole as cancerous. Is it because the network found malignant patterns in the mole, or because of irrelevant elements such as the image lighting, the camera type, or some other artifact in the image, such as pen markings or rulers?
Researchers have developed a range of interpretability techniques that help investigate the decisions made by machine learning algorithms. But these methods are not enough to address AI’s explainability problem and create trust in deep learning models, argues Daniel Elton, a scientist who researches applications of artificial intelligence in medical imaging.
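To make this concrete, one common interpretability technique is a gradient saliency map, which highlights the pixels that most influence a classifier’s score. A minimal PyTorch sketch follows; the model and image here are hypothetical placeholders, not any specific diagnostic system:

```python
import torch

def saliency_map(model, image, target_class):
    """Rank input pixels by how strongly they influence one class score.
    `model` and `image` are hypothetical placeholders: `image` is a
    (C, H, W) tensor, `model` maps a (1, C, H, W) batch to class scores."""
    model.eval()
    image = image.detach().clone().requires_grad_(True)  # track pixel gradients
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()                            # d(score) / d(pixel)
    return image.grad.abs().max(dim=0).values  # (H, W) per-pixel importance
```

A bright region over pen markings rather than the mole itself would be exactly the kind of spurious evidence such post-hoc methods can surface, but, as Elton argues, revealing it is not the same as the model explaining itself.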
Elton discusses why we need to shift from techniques that interpret AI decisions to AI models that, like humans, can explain their own decisions. His paper, “Self-explaining AI as an alternative to interpretable AI,” recently published on the arXiv preprint server, expands on this idea.
Jun 20, 2020
Combining AI and biology could solve drug discovery’s biggest problems
Posted by Derick Lee in categories: biotech/medical, robotics/AI
There’s a lot of hope that artificial intelligence could shorten the time it takes to make a drug and also increase the rate of success. Several startups have emerged to capitalize on this opportunity. But Insitro is a bit different from some of these other companies, which rely more heavily on machine learning than on biology.
Machine learning can speed up the creation of new drugs and unlock the mysteries of major diseases, says Insitro CEO Daphne Koller.
Jun 20, 2020
Engineers Put Tens of Thousands of Artificial Brain Synapses on a Single Chip for Portable AI Devices
Posted by Quinn Sena in categories: robotics/AI, supercomputing
MIT engineers have designed a “brain-on-a-chip,” smaller than a piece of confetti, that is made from tens of thousands of artificial brain synapses known as memristors — silicon-based components that mimic the information-transmitting synapses in the human brain.
The researchers borrowed from principles of metallurgy to fabricate each memristor from alloys of silver and copper, along with silicon. When they ran the chip through several visual tasks, it was able to “remember” stored images and reproduce them many times over, in versions crisper and cleaner than those produced by existing memristor designs made with unalloyed elements.
Their results, published on June 8, 2020, in the journal Nature Nanotechnology, demonstrate a promising new memristor design for neuromorphic devices: electronics whose circuits process information in a way that mimics the brain’s neural architecture. Such brain-inspired circuits could be built into small, portable devices to carry out complex computational tasks that currently only supercomputers can handle.
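For readers unfamiliar with why memristors suit neuromorphic hardware, the sketch below is a toy numerical model of a crossbar array, where stored conductances act as synaptic weights and an entire matrix-vector product happens in one analog step. This is illustrative only, not the MIT team’s alloyed design:

```python
import numpy as np

# Toy model of how a memristor crossbar computes: each device's conductance
# G[i, j] acts as a stored synaptic weight, and applying voltages to the rows
# yields column currents I = G^T V by Ohm's and Kirchhoff's laws.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-3, size=(64, 10))   # conductances in siemens
V = rng.uniform(0.0, 0.2, size=64)           # input voltages, one per row
I = G.T @ V                                  # one analog step = full dot product
print(I.shape)  # (10,) -- ten output currents, i.e. ten weighted sums
```

The appeal is that the multiply-accumulate work happens in the physics of the array itself, rather than in digital logic shuttling data to and from memory.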
Jun 20, 2020
Quickly Embed AI Into Your Projects With Nvidia’s Jetson Nano
Posted by Genevieve Klien in categories: robotics/AI, transportation
When opportunity knocks, open the door: No one has taken heed of that adage like Nvidia, which has transformed itself from a company focused on catering to the needs of video gamers to one at the heart of the artificial-intelligence revolution. In 2001, no one predicted that the same processor architecture developed to draw realistic explosions in 3D would be just the thing to power a renaissance in deep learning. But when Nvidia realized that academics were gobbling up its graphics cards, it responded, supporting researchers with the launch of the CUDA parallel computing software framework in 2006.
Since then, Nvidia has been a big player in the world of high-end embedded AI applications, where teams of highly trained (and paid) engineers have used its hardware for things like autonomous vehicles. Now the company claims to be making it easy for even hobbyists to use embedded machine learning, with its US $100 Jetson Nano dev kit, which was originally launched in early 2019 and rereleased this March with several upgrades. So, I set out to see just how easy it was: Could I, for example, quickly and cheaply make a camera that could recognize and track chosen objects?
Embedded machine learning is evolving rapidly. In April 2019, Hands On looked at Google’s Coral Dev Board, which incorporates the company’s Edge tensor processing unit (TPU), and in July 2019, IEEE Spectrum featured Adafruit’s software library, which lets even a handheld game device do simple speech recognition. The Jetson Nano is closer to the Coral Dev Board: with its 128 parallel processing cores, it is, like the Coral, powerful enough to handle a real-time video feed, and both have Raspberry Pi–style 40-pin GPIO connectors for driving external hardware.
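As a sketch of the object-recognizing camera described above, the following assumes NVIDIA’s open-source jetson-inference Python bindings; module and model names vary between releases, so treat this as an outline rather than the exact code used for the article:

```python
import jetson.inference
import jetson.utils

# Minimal object-detecting camera loop for the Jetson Nano, assuming the
# jetson-inference library with a pretrained SSD-MobileNet detector.
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")       # MIPI CSI camera
display = jetson.utils.videoOutput("display://0")  # on-screen window

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)   # bounding boxes are drawn onto img
    display.Render(img)
    display.SetStatus(f"{len(detections)} objects | {net.GetNetworkFPS():.0f} FPS")
```

Because the pretrained network already knows dozens of common object classes, a loop like this is often all it takes to get live detection running on the $100 board.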
Jun 20, 2020
FaceApp’s Gender Swap is a Scary Insight Into AI and Privacy Concerns
Posted by Genevieve Klien in categories: privacy, robotics/AI
FaceApp looks pretty harmless. However, when you realize that you are uploading your photos for an AI to work on, things start to look bleak.
Jun 20, 2020
Engineers Design Ion-Based Device That Operates Like an Energy-Efficient Brain Synapse
Posted by Genevieve Klien in category: robotics/AI
Ion-based technology may enable energy-efficient simulation of the brain’s learning process for neural network AI systems.
Teams around the world are building ever more sophisticated artificial intelligence systems of a type called neural networks, designed in some ways to mimic the wiring of the brain, for carrying out tasks such as computer vision and natural language processing.
Using state-of-the-art semiconductor circuits to simulate neural networks requires large amounts of memory and high power consumption. Now, an MIT team has made strides toward an alternative system, which uses physical, analog devices that can much more efficiently mimic brain processes.
Jun 19, 2020
Teaching physics to neural networks removes ‘chaos blindness’
Posted by Genevieve Klien in categories: biotech/medical, drones, robotics/AI
Researchers from North Carolina State University have discovered that teaching physics to neural networks enables those networks to better adapt to chaos within their environment. The work has implications for improved artificial intelligence (AI) applications ranging from medical diagnostics to automated drone piloting.
Neural networks are an advanced type of AI loosely based on the way our brains work. Natural neurons exchange electrical impulses according to the strengths of their connections. Artificial neural networks mimic this behavior by adjusting numerical weights and biases during training to minimize the difference between their actual and desired outputs. For example, a neural network can be trained to identify photos of dogs by sifting through a large number of photos, guessing whether each photo shows a dog, seeing how far off its guess is, and then adjusting its weights and biases until its answers come closer to reality.
The drawback to this neural network training is something called “chaos blindness”—an inability to predict or respond to chaos in a system. Conventional AI is chaos blind. But researchers from NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) have found that incorporating a Hamiltonian function into neural networks better enables them to “see” chaos within a system and adapt accordingly.
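The Hamiltonian idea can be sketched in a few lines: rather than predicting the next state directly, the network learns a scalar energy H(q, p) and derives the dynamics from Hamilton’s equations. The PyTorch sketch below is a generic illustration of this approach, not NAIL’s exact implementation:

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    """Hamiltonian neural network: learn a scalar energy H(q, p), then
    recover dynamics via dq/dt = dH/dp and dp/dt = -dH/dq."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def time_derivatives(self, x):              # x = (q, p), shape (batch, 2)
        x = x.requires_grad_(True)
        H = self.net(x).sum()                   # scalar learned energy
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        dHdq, dHdp = dH[:, :1], dH[:, 1:]
        return torch.cat([dHdp, -dHdq], dim=1)  # (dq/dt, dp/dt)
```

Training regresses these predicted derivatives against observed ones; because trajectories then follow a learned conserved energy, the model respects the structure of a chaotic system instead of blindly extrapolating.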
Jun 19, 2020
Innovative dataset to accelerate autonomous driving research
Posted by Saúl Morales Rodriguéz in categories: robotics/AI, transportation
How can we train self-driving vehicles to have a deeper awareness of the world around them? Can computers learn from past experiences to recognize future patterns that can help them safely navigate new and unpredictable situations?
These are some of the questions researchers from the AgeLab at the MIT Center for Transportation and Logistics and the Toyota Collaborative Safety Research Center (CSRC) are trying to answer by sharing an innovative new open dataset called DriveSeg.
Through the release of DriveSeg, MIT and Toyota are working to advance research in autonomous driving systems that, much like human perception, perceive the driving environment as a continuous flow of visual information.
Jun 19, 2020
Deep learning-based surrogate models outperform simulators and could hasten scientific discoveries
Posted by Genevieve Klien in categories: physics, robotics/AI
Surrogate models supported by neural networks can perform as well as, and in some ways better than, computationally expensive simulators, and could lead to new insights into complicated physics problems such as inertial confinement fusion (ICF), Lawrence Livermore National Laboratory (LLNL) scientists reported.
In a paper published in the Proceedings of the National Academy of Sciences (PNAS), LLNL researchers describe the development of a deep learning-driven Manifold & Cyclically Consistent (MaCC) surrogate model incorporating a multi-modal neural network capable of quickly and accurately emulating complex scientific processes, including the high-energy-density physics involved in ICF.
The research team applied the model to ICF implosions performed at the National Ignition Facility (NIF), where a computationally expensive numerical simulator is used to predict the energy yield of a target imploded by shock waves from the facility’s high-energy laser. Comparing the neural network-backed surrogate with the existing simulator, the researchers found that the surrogate could adequately replicate the simulator and that it significantly outperformed the current state of the art in surrogate models across a wide range of metrics.
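The basic surrogate workflow is easy to illustrate: fit a cheap neural approximation to input-output pairs generated by the expensive simulator, then query the approximation instead. The toy sketch below, with a stand-in "simulator," shows that pattern only; it is not LLNL’s MaCC architecture:

```python
import numpy as np
import torch
import torch.nn as nn

def expensive_simulator(x):
    """Stand-in for a costly physics code: 5 design parameters -> 1 output."""
    return np.sin(3 * x).sum(axis=1, keepdims=True)

# Build a training set of simulator runs (in practice the expensive step).
X = np.random.rand(2048, 5).astype(np.float32)
Y = expensive_simulator(X).astype(np.float32)

# Fit a small neural surrogate to the input-output pairs.
surrogate = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
X_t, Y_t = torch.from_numpy(X), torch.from_numpy(Y)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(X_t), Y_t)
    loss.backward()
    opt.step()

# Once trained, evaluating surrogate(X_t) costs microseconds per query,
# which is what makes large design-space exploration feasible.
```

The research challenge, as the LLNL work illustrates, is getting such a surrogate to stay accurate across the full range of regimes the real simulator covers.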