Archive for the ‘supercomputing’ category: Page 84

Apr 7, 2016

Nvidia Unveils New Deep Learning System for Supercomputers

Posted by in categories: robotics/AI, supercomputing

Nvidia’s interim solution until quantum computing arrives.


Nvidia has announced a new deep learning system aimed at supercomputing and artificial intelligence firms, alongside a new high-end GPU.

Read more

Apr 7, 2016

Quantum effects affect the best superconductor

Posted by in categories: particle physics, quantum physics, supercomputing, transportation

The theoretical results of a piece of international research published in Nature, whose first author is Ion Errea, a researcher at the UPV/EHU and DIPC, suggest that the quantum nature of hydrogen (in other words, the possibility of it behaving like a particle or a wave) considerably affects the structural properties of hydrogen-rich compounds (potential room-temperature superconducting substances). This is in fact the case with the superconductor hydrogen sulphide: a stinking compound that smells of rotten eggs, which, when subjected to pressures a million times higher than atmospheric pressure, behaves like a superconductor at the highest temperature ever identified. This new advance in understanding the physics of high-temperature superconductivity could help to drive forward progress in the search for room-temperature superconductors, which could be used in levitating trains or next-generation supercomputers, for example.

Superconductors are materials that carry electrical current with zero electrical resistance. Conventional or low-temperature ones behave that way only when cooled to temperatures close to absolute zero (−273 °C, or 0 K). Last year, however, German researchers identified the high-temperature superconducting properties of hydrogen sulphide, making it the highest-temperature superconductor yet discovered: −70 °C, or 203 K.
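The temperature figures above follow the standard kelvin–Celsius relation T(K) = T(°C) + 273.15; a quick sanity check of the numbers quoted in the article:

```python
def c_to_k(celsius):
    """Convert a temperature in degrees Celsius to kelvin."""
    return celsius + 273.15

# Absolute zero: -273.15 C is 0 K (the article rounds to -273 C).
print(c_to_k(-273.15))  # → 0.0

# Hydrogen sulphide's record superconducting transition: about -70 C,
# which is roughly the 203 K reported.
print(c_to_k(-70))
```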

Read more

Apr 7, 2016

Quantum technologies: from mobile phones to supercomputers

Posted by in categories: encryption, mobile phones, quantum physics, supercomputing

A beautiful future lies ahead for quantum computing.


Quantum physics not only explains how matter behaves at the subatomic level, but is also used to create many devices in our everyday lives, from lasers and transistors to GPS and mobile phones. The next wave of innovation could lead to unbreakable encryption and computers that are up to one million times faster. On 6 April, Parliament’s Science and Technology Options Assessment (STOA) unit organised a workshop to discuss with experts the potential of these new quantum technologies.

Exploiting the quirks of the quantum world

Continue reading “Quantum technologies: from mobile phones to supercomputers” »

Apr 5, 2016

Nvidia Unveils New Supercomputers and AI Algorithms

Posted by in categories: information science, robotics/AI, space travel, supercomputing, virtual reality

Big day for Nvidia with announcements on AI and VR.


The first day of the company’s GPU Technology Conference was chock full of self-driving cars, trips to Mars, and more.

Read more

Apr 5, 2016

Here’s how Nvidia is powering an autonomous, electric race car

Posted by in categories: robotics/AI, supercomputing, transportation

Could race car driving become an entirely AI career? Nvidia is testing the concept.


Formula E is going completely autonomous with the all-new Roborace series slated for the upcoming race season. At its GTC developer conference, Nvidia announced these autonomous, electric race cars will be powered by Nvidia Drive PX 2, a supercomputer built for self-driving cars.

Drive PX 2 is powered by 12 CPU cores and four Pascal GPUs that provide eight teraflops of computing power. The supercomputer-in-a-box is vital to deep learning, training artificial intelligence to adapt to different driving conditions, including asphalt, rain and dirt.
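The quoted throughput works out to a simple per-GPU share. A back-of-the-envelope sketch (the even split across the four GPUs is an assumption for illustration; real workloads will not balance perfectly):

```python
# Figures quoted in the article for Nvidia's Drive PX 2.
total_tflops = 8.0  # aggregate throughput, in teraflops
num_gpus = 4        # Pascal GPUs in the box

# Naive even split: each GPU's share of the advertised throughput.
per_gpu = total_tflops / num_gpus
print(f"{per_gpu} TFLOPS per Pascal GPU")  # → 2.0 TFLOPS per Pascal GPU
```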

Continue reading “Here’s how Nvidia is powering an autonomous, electric race car” »

Apr 5, 2016

NVIDIA Reinvents The GPU For Artificial Intelligence (AI)

Posted by in categories: mobile phones, robotics/AI, supercomputing, transportation

At a time when PCs have become rather boring and the market has stagnated, the Graphics Processing Unit (GPU) has become more interesting, not for what it has traditionally done (graphics) but for what it can do going forward. GPUs are a key enabler for the PC and workstation market, both for enthusiasts seeking to increase graphics performance for games and for developers and designers looking to create realistic new videos and images. However, the traditional PC market has been in decline for several years as consumers shift to mobile computing solutions like smartphones. At the same time, the industry has been working to expand the use of GPUs as computing accelerators because of their massively parallel compute capabilities, which often provide the horsepower for top supercomputers. NVIDIA has been a pioneer in this GPU compute market with its CUDA platform, enabling researchers to perform leading-edge work and continue to develop new uses for GPU acceleration.

Now, the industry is looking to leverage decades of GPU history and innovation to create more advanced computer intelligence. Through the use of sensors, increased connectivity, and new learning techniques, researchers can enable artificial intelligence (AI) applications for everything from autonomous vehicles to scientific research. This, however, requires unprecedented levels of computing power, something NVIDIA is driven to provide. At the GPU Technology Conference (GTC) in San Jose, California, NVIDIA just announced a new GPU platform that takes computing to the extreme: the Tesla P100. NVIDIA CEO Jen-Hsun Huang described the Tesla P100 as the first GPU designed for hyperscale datacenter applications. It features NVIDIA’s new Pascal GPU architecture, the latest memory and semiconductor process, and packaging technology – all to create the densest compute platform to date.

Read more

Mar 30, 2016

IBM’s ‘brain-inspired’ supercomputer to help watch over US nuclear arsenal

Posted by in categories: military, robotics/AI, supercomputing

Lawrence Livermore National Laboratory says its collaboration with IBM “could change how we do science”.

Read more

Mar 29, 2016

Researchers Found a Way to Shrink a Supercomputer to the Size of a Laptop

Posted by in categories: energy, nanotechnology, supercomputing

Scientists at Lund University in Sweden have found a way to use “biological motors” for parallel computing. The findings could mean vastly more powerful and energy-efficient computers within a decade.

Nanotechnologists at Lund University in Sweden have discovered a way to miniaturize the processing power that is found today only in the largest and most unwieldy of supercomputers. Their findings, which were published in the Proceedings of the National Academy of Sciences, point the way to a future when our laptops and other personal, handheld computing devices pack the computational heft of a Cray Titan or IBM Blue Gene/Q.

But the solution may be a little surprising.

Continue reading “Researchers Found a Way to Shrink a Supercomputer to the Size of a Laptop” »

Mar 29, 2016

Neuromorphic supercomputer has 16 million neurons

Posted by in categories: information science, neuroscience, robotics/AI, supercomputing

Today, Lawrence Livermore National Lab (LLNL) and IBM announced the development of a new Scale-up Synaptic Supercomputer (NS16e) that tightly integrates 16 TrueNorth chips in a 4×4 array to deliver 16 million neurons and 256 million synapses. LLNL will also receive an end-to-end software ecosystem that consists of a simulator; a programming language; an integrated programming environment; a library of algorithms as well as applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement.
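The headline neuron count follows directly from the array layout: each TrueNorth chip provides one million digital neurons, and the NS16e tiles sixteen of them. A quick check of the arithmetic:

```python
# NS16e layout: a 4x4 grid of IBM TrueNorth chips,
# each chip providing one million digital neurons.
chips = 4 * 4
neurons_per_chip = 1_000_000

total_neurons = chips * neurons_per_chip
print(total_neurons)  # → 16000000, the 16 million neurons quoted above
```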

The $1 million computer has 16 IBM microprocessors designed to mimic the way the brain works.

IBM says it will be five to seven years before TrueNorth sees widespread commercial use, but the Lawrence Livermore test is a big step in that direction.

Continue reading “Neuromorphic supercomputer has 16 million neurons” »

Mar 28, 2016

IBM wants to accelerate AI learning with new processor tech

Posted by in categories: robotics/AI, supercomputing

Deep neural networks (DNNs) can be taught nearly anything, including how to beat us at our own games. The problem is that training AI systems ties up big-ticket supercomputers or data centers for days at a time. Scientists from IBM’s T.J. Watson Research Center think they can cut the horsepower and learning times drastically using “resistive processing units,” theoretical chips that combine CPU and non-volatile memory. Those could accelerate data speeds exponentially, resulting in systems that can do tasks like “natural speech recognition and translation between all world languages,” according to the team.

So why does it take so much computing power and time to teach AI? The problem is that modern neural networks like Google’s DeepMind or IBM Watson must perform billions of tasks in parallel. That requires numerous CPU memory calls, which quickly add up over billions of cycles. The researchers debated using new storage tech like resistive RAM that can permanently store data at DRAM-like speeds. However, they eventually came up with the idea for a new type of chip called a resistive processing unit (RPU) that puts large amounts of resistive RAM directly onto a CPU.
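The appeal of putting memory on the processor shows up in the core operation this class of hardware targets: a vector-matrix multiply where the weights live in the resistive array itself, so no per-weight fetch from DRAM is needed. A minimal NumPy sketch of the idea (illustrative only, with made-up dimensions; a real RPU would perform this in analog, not in software):

```python
import numpy as np

rng = np.random.default_rng(0)

# Conductances of the resistive crossbar stand in for network weights.
# On an RPU the weights are stored where the compute happens, so the
# whole product is one physical step rather than millions of DRAM reads.
weights = rng.standard_normal((256, 128))   # crossbar conductances
activations = rng.standard_normal(128)      # input voltages

# Ohm's law plus Kirchhoff's current law: each output line sums
# conductance * voltage, i.e. a row-wise dot product.
output = weights @ activations
print(output.shape)  # → (256,)
```

In software this is an ordinary matrix-vector product; the hardware claim is that computing it in place removes the memory traffic that dominates training time.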

Read more
