Archive for the ‘supercomputing’ category: Page 39

Sep 3, 2022

Quantum Matter Is Being Studied At A Temperature 3 Billion Times Colder Than Deep Space

Posted in categories: particle physics, quantum physics, space, supercomputing

A team of Japanese and US physicists has pushed thousands of ytterbium atoms to within a billionth of a degree above absolute zero to understand how matter behaves at these extreme temperatures. The approach treats the atoms as fermions, the class of particles that includes electrons and protons, which cannot condense into the so-called fifth state of matter at those extreme temperatures: a Bose-Einstein condensate.

When fermions are cooled this far, they exhibit quantum properties that cannot be simulated even with the most powerful supercomputer. These extremely cold atoms are placed in an optical lattice, where they simulate a “Hubbard model,” which is used to study the magnetic and superconducting behavior of materials, in particular the collective motion of electrons through them.

The symmetry of these models is described by the special unitary group SU(N), where N depends on the number of possible spin states. In the case of ytterbium, N is 6. Calculating the behavior of even 12 particles in an SU(6) Hubbard model is beyond the reach of classical computers. However, as reported in Nature Physics, the team used laser cooling to reduce the temperature of 300,000 atoms to a value almost three billion times colder than the temperature of outer space.
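
For reference, the model in question is the SU(N) Fermi-Hubbard model. A textbook sketch of its Hamiltonian (our summary, not a formula taken from the paper; t is the tunneling amplitude between neighboring lattice sites and U the on-site interaction) is:

```latex
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} \right)
    + \frac{U}{2} \sum_{i} n_i \left( n_i - 1 \right),
\qquad
n_i = \sum_{\sigma=1}^{N} c_{i\sigma}^{\dagger} c_{i\sigma},
```

where the spin index σ runs over the N states; for fermionic ¹⁷³Yb, the nuclear spin I = 5/2 gives N = 2I + 1 = 6.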

Sep 2, 2022

Revolutionizing image generation through AI: Turning text into images

Posted in categories: information science, robotics/AI, supercomputing

Creating images from text in seconds—and doing so with a conventional graphics card and without supercomputers? As fanciful as it may sound, this is made possible by the new Stable Diffusion AI model. The underlying algorithm was developed by the Machine Vision & Learning Group led by Prof. Björn Ommer (LMU Munich).
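
As an illustration of how accessible this has become, here is a minimal sketch using the open-source Hugging Face diffusers library, which hosts the released Stable Diffusion weights (the library, model ID, and prompt are our own example, not part of the LMU work):

```python
# Minimal text-to-image sketch with Stable Diffusion via the diffusers library.
# Assumes a CUDA-capable consumer graphics card with roughly 8 GB of VRAM.
import torch
from diffusers import StableDiffusionPipeline

# Load the pretrained weights in half precision to fit consumer GPU memory.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint; others work too
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# One call turns the text prompt into a 512x512 image in seconds.
image = pipe("an astronaut riding a horse, oil painting").images[0]
image.save("astronaut.png")
```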

“Even for laypeople not blessed with artistic talent and without special computing know-how, the new model is an effective tool that enables computers to generate images on command. As such, the model removes a barrier for people to express their creativity,” says Ommer. But there are benefits for seasoned artists as well, who can use Stable Diffusion to quickly convert new ideas into a variety of graphic drafts. The researchers are convinced that such AI-based tools will be able to expand the possibilities of creative image generation with paintbrush and Photoshop as fundamentally as computer-based word processing revolutionized writing with pens and typewriters.

In their project, the LMU scientists had the support of the start-up Stability AI, on whose servers the AI model was trained. “This additional computing power and the extra training examples turned our AI model into one of the most powerful image synthesis algorithms,” says the computer scientist.

Aug 31, 2022

Making Computer Chips Act More like Brain Cells

Posted in categories: biological, chemistry, neuroscience, supercomputing

The human brain is an amazing computing machine. Weighing only three pounds or so, it can process information a thousand times faster than the fastest supercomputer, store a thousand times more information than a powerful laptop, and do it all using no more energy than a 20-watt lightbulb.

Researchers are trying to replicate this success using soft, flexible organic materials that can operate like biological neurons and someday might even be able to interconnect with them. Eventually, soft “neuromorphic” computer chips could be implanted directly into the brain, allowing people to control an artificial arm or a computer monitor simply by thinking about it.

Like real neurons — but unlike conventional computer chips — these new devices can send and receive both chemical and electrical signals. “Your brain works with chemicals, with neurotransmitters like dopamine and serotonin. Our materials are able to interact electrochemically with them,” says Alberto Salleo, a materials scientist at Stanford University who wrote about the potential for organic neuromorphic devices in the 2021 Annual Review of Materials Research.

Aug 30, 2022

ROBE Array could let small companies access popular form of AI

Posted in categories: information science, robotics/AI, supercomputing

A breakthrough low-memory technique by Rice University computer scientists could put one of the most resource-intensive forms of artificial intelligence—deep-learning recommendation models (DLRM)—within reach of small companies.

DLRM recommendation systems are a popular form of AI that learns to make suggestions users will find relevant. But with top-of-the-line training models requiring more than a hundred terabytes of memory and supercomputer-scale processing, they’ve only been available to a short list of technology giants with deep pockets.

Rice’s “random offset block embedding,” or ROBE Array, could change that. It’s an algorithmic approach for slashing the size of DLRM memory structures called embedding tables, and it will be presented this week at the Conference on Machine Learning and Systems (MLSys 2022) in Santa Clara, California, where it earned Outstanding Paper honors.
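
The core idea, hashing every embedding row into offsets within one small shared weight array instead of storing a full table per feature, can be sketched in a few lines (a simplified illustration of the general technique, not the authors’ reference implementation; the array size and hash constants below are arbitrary):

```python
import numpy as np

ROBE_SIZE = 1_000_000   # one shared weight array replaces all embedding tables
DIM = 16                # embedding dimension per feature
robe_array = (np.random.randn(ROBE_SIZE) * 0.01).astype(np.float32)

def embed(feature_id: int) -> np.ndarray:
    """Gather a DIM-sized embedding from a hashed block of the shared array."""
    # Hash the feature ID to a pseudo-random start offset, then read a
    # contiguous block of DIM weights (wrapping around circularly).
    # Memory no longer grows with the number of distinct feature IDs,
    # and contiguous reads keep the lookup cache-friendly.
    start = (feature_id * 1_000_003 + 12_345) % ROBE_SIZE
    idx = (start + np.arange(DIM)) % ROBE_SIZE
    return robe_array[idx]

vec = embed(42)  # the same ID always maps to the same shared weights
```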

Aug 28, 2022

Inside Tesla’s Innovative And Homegrown “Dojo” AI Supercomputer

Posted in categories: military, nuclear weapons, robotics/AI, space travel, supercomputing

How expensive and difficult does hyperscale-class AI training have to be for a maker of self-driving electric cars to take a side excursion to spend how many hundreds of millions of dollars to go off and create its own AI supercomputer from scratch? And how egotistical and sure would the company’s founder have to be to put together a team that could do it?

Like many questions, when you ask these precisely, they tend to answer themselves. And what is clear is that Elon Musk, founder of both SpaceX and Tesla as well as a co-founder of the OpenAI consortium, doesn’t have time – or money – to waste on science projects.

Aug 27, 2022

Wickedly Fast Frontier Supercomputer Officially Ushers in the Next Era of Computing

Posted in categories: mathematics, supercomputing

Today, Oak Ridge National Laboratory’s Frontier supercomputer was crowned fastest on the planet in the semiannual Top500 list. Frontier more than doubled the speed of the last titleholder, Japan’s Fugaku supercomputer, and is the first to officially clock speeds over a quintillion calculations a second—a milestone computing has pursued for 14 years.

That’s a big number. So before we go on, it’s worth putting into more human terms.

Imagine giving all 7.9 billion people on the planet a pencil and a list of simple arithmetic or multiplication problems. Now, ask everyone to solve one problem per second for four and a half years. By marshaling the math skills of the Earth’s population for a half-decade, you’ve now solved over a quintillion problems.
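
The arithmetic checks out, as a quick back-of-the-envelope verification shows (taking the 7.9 billion figure from the paragraph above):

```python
people = 7.9e9                       # world population used in the analogy
seconds = 4.5 * 365 * 24 * 3600      # ~1.42e8 seconds in four and a half years
problems = people * seconds          # one problem per person per second
print(f"{problems:.2e}")             # 1.12e+18, just over a quintillion
```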

Aug 25, 2022

Supercomputer Emulator—AI’s New Role in Science

Posted in categories: robotics/AI, science, supercomputing

Bishop: They can still be computationally very expensive. Additionally, emulators learn from data, so they’re typically not more accurate than the data used to train them. Moreover, they may give insufficiently accurate results when presented with scenarios that are markedly different from those on which they’re trained.

“I believe in ‘use-inspired basic research’—[like] the work of Pasteur. He was a consultant for the brewing industry. Why did this beer keep going sour? He basically founded the whole field of microbiology.” —Chris Bishop, Microsoft Research.

Aug 24, 2022

Supercomputing center dataset aims to accelerate AI research into optimizing high-performance computing systems

Posted in categories: biotech/medical, employment, robotics/AI, supercomputing

When the MIT Lincoln Laboratory Supercomputing Center (LLSC) unveiled its TX-GAIA supercomputer in 2019, it provided the MIT community a powerful new resource for applying artificial intelligence to their research. Anyone at MIT can submit a job to the system, which churns through trillions of operations per second to train models for diverse applications, such as spotting tumors in medical images, discovering new drugs, or modeling climate effects. But with this great power comes the great responsibility of managing and operating it in a sustainable manner—and the team is looking for ways to improve.

“We have these powerful computational tools that let researchers build intricate models to solve problems, but they can essentially be used as black boxes. What gets lost in there is whether we are actually using the hardware as effectively as we can,” says Siddharth Samsi, a research scientist in the LLSC.

To gain insight into this challenge, the LLSC has been collecting detailed data on TX-GAIA usage over the past year. More than a million user jobs later, the team has publicly released the dataset to the computing community.
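
For a sense of the kind of analysis such a release enables, here is a sketch of one question the LLSC raises, whether hardware is used as effectively as it could be (the file name and column names below are hypothetical placeholders, not the dataset’s actual schema; consult the dataset documentation for the real fields):

```python
import pandas as pd

# Hypothetical extract of scheduler job logs; real field names will differ.
jobs = pd.read_csv("llsc_job_logs.csv")

# Compare resources requested against resources actually consumed per job:
# large gaps would suggest the hardware is sitting idle while reserved.
jobs["gpu_hours_wasted"] = jobs["gpu_hours_requested"] - jobs["gpu_hours_used"]
print(jobs["gpu_hours_wasted"].describe())
```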

Aug 19, 2022

11 Top Experts: Quantum Top Trends 2023 And 2030

Posted in categories: economics, finance, government, information science, quantum physics, robotics/AI, supercomputing

Quantum Information Science / Quantum Computing (QIS / QC) continues to make substantial progress into 2023. Commercial applications are coming in which difficult practical problems can be solved significantly faster using QC (quantum advantage), and QC is solving seemingly impossible test cases (not practical problems) that would take classical computers such as supercomputers thousands of years, or that lie beyond classical computing capabilities entirely (quantum supremacy). The two terms are often interchanged. Claims of quantum advantage or quantum supremacy can, at times, be challenged by new algorithms on classical computers.

The potential is for hybrid systems, combining quantum computers with classical computers such as supercomputers (and perhaps analog computing in the future), that could operate thousands and potentially millions of times faster, lending more understanding to intractable challenges and problems. Imagine the possibilities and the implications for the benefit of Earth’s ecosystems and humankind, with significant impact across dozens of areas of computational science: big data analytics, weather forecasting, aerospace and novel transportation engineering, new energy paradigms such as renewable energy, healthcare and drug discovery, omics (genomics, transcriptomics, proteomics, metabolomics), economics, AI, large-scale simulations, financial services, new materials, optimization challenges, and more.

The stakes in competitive and strategic advantage are so high that top corporations and governments are investing in and working with QIS / QC. (See my Forbes article: Government Deep Tech 2022 Top Funding Focus Explainable AI, Photonics, Quantum; the BDC Deep Tech Fund it covers invested in QC company Xanadu.) In the US, the National Quantum Initiative Act of 2018 provided USD $1.2 billion, and the related U.S. Department of Energy is providing USD $625 million over five years for five quantum information research hubs led by national laboratories: Argonne, Brookhaven, Fermi, Lawrence Berkeley, and Oak Ridge. In August 2022, the US CHIPS and Science Act provided hundreds of millions in funding as well. Coverage includes: accelerating the discovery of quantum applications; growing a diverse and domestic quantum workforce; development of critical infrastructure and standardization of cutting-edge R&D.

Aug 18, 2022

20 exaFLOP supercomputer proposed for 2025

Posted in categories: robotics/AI, supercomputing

The U.S. Department of Energy (DOE) has published a request for information from computer hardware and software vendors to assist in the planning, design, and commissioning of next-generation supercomputing systems.

The DOE request calls for computing systems in the 2025–2030 timeframe that are five to 10 times faster than those currently available and/or able to perform more complex applications in “data science, artificial intelligence, edge deployments at facilities, and science ecosystem problems, in addition to traditional modelling and simulation applications.”

U.S. and Slovakia-based company Tachyum has now responded with its proposal for a 20 exaFLOP system. This would be based on Prodigy, its flagship product, described as the world’s first “universal” processor. According to Tachyum, the chip integrates 128 64-bit compute cores running at 5.7 GHz, combining the functionality of a CPU, GPU, and TPU in a single device with a homogeneous architecture. This allows Prodigy to deliver up to 4x the performance of the highest-performing x86 processors (for cloud workloads), 3x that of the highest-performing GPU for HPC, and 6x for AI applications.
