DeepMind’s new machine learning algorithm takes less than a minute to make its forecasts and can run on a desktop. But it won’t replace traditional forecasts anytime soon.
How are we so smart? We seem to be able to process data with ease, doing tasks in seconds that take supercomputers much longer. One thought is that we fundamentally take advantage of quantum mechanics to perform calculations, similar to a quantum computer. This would give us a biologically produced quantum speed-up in our brains. Until recently this was just a hypothesis, with no evidence that it was true. Now, however, scientists believe they may have found evidence of quantum interactions in our brains. Even more importantly, they showed that these quantum interactions are related to our consciousness. In this video, I discuss these latest results.
An open-source advanced supercomputer algorithm predicts the patterning and dynamics of living materials, allowing for the exploration of their behaviors across space and time.
Biological materials consist of individual components, including tiny motors that transform fuel into motion. This process creates patterns of movement, leading the material to shape itself through coherent flows driven by constant energy consumption. These perpetually driven materials are called “active matter.”
The mechanics of cells and tissues can be described by active matter theory, a scientific framework to understand the shape, flows, and form of living materials. The active matter theory consists of many challenging mathematical equations.
Scientists from the Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG) in Dresden, the Center for Systems Biology Dresden (CSBD), and the TU Dresden have now developed an algorithm, implemented in an open-source supercomputer code, that can for the first time solve the equations of active matter theory in realistic scenarios.
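The blurb above is qualitative; as a loose illustration of what "active matter" means, here is a minimal Vicsek-style simulation, a standard toy model of self-propelled particles that align with their neighbors and spontaneously develop coherent collective motion. This is not the continuum active matter theory solved by the MPI-CBG code, and all parameters are purely illustrative.

```python
import numpy as np

# Minimal Vicsek model: self-propelled particles align with neighbors
# within radius r, subject to angular noise. At low noise the system
# orders into coherent collective motion -- a hallmark of active matter.
rng = np.random.default_rng(0)
N, L, r, v, eta, steps = 200, 10.0, 1.0, 0.3, 0.2, 100

pos = rng.uniform(0, L, (N, 2))          # particle positions in an L x L box
theta = rng.uniform(-np.pi, np.pi, N)    # heading angles

for _ in range(steps):
    # pairwise displacements with periodic (wraparound) boundaries
    dx = pos[:, None, :] - pos[None, :, :]
    dx -= L * np.round(dx / L)
    neigh = (dx ** 2).sum(-1) < r ** 2   # neighbor matrix (includes self)
    # each particle adopts the mean heading of its neighbors, plus noise
    mean_sin = (neigh * np.sin(theta)).sum(1)
    mean_cos = (neigh * np.cos(theta)).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, N)
    # move forward at constant speed v, wrapping around the box
    pos = (pos + v * np.column_stack([np.cos(theta), np.sin(theta)])) % L

# polar order parameter: 0 = disordered headings, 1 = fully aligned flow
phi = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"polar order parameter: {phi:.2f}")
```

Raising the noise amplitude `eta` toward pi destroys the ordered state, which is the kind of nonequilibrium phase behavior active matter theory describes.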
Undeterred after three decades of looking, and with some assistance from a supercomputer, mathematicians have finally discovered a new example of a special integer called a Dedekind number.
Only the ninth of its kind, D(9), it is calculated to equal 286 386 577 668 298 411 128 469 151 667 598 498 812 366, if you’re updating your own records. This 42-digit monster follows the 23-digit D(8) discovered in 1991.
Grasping the concept of a Dedekind number is difficult for non-mathematicians, let alone working one out. In fact, the calculations involved are so complex and involve such huge numbers that it wasn’t certain D(9) would ever be discovered.
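Concretely, D(n) counts the monotone Boolean functions of n variables. A brute-force sketch makes the definition tangible, though it is only feasible for tiny n and is nothing like the supercomputer-scale search that found D(9):

```python
def dedekind(n):
    """Count the monotone Boolean functions of n variables, i.e. the
    Dedekind number D(n), by brute force over all 2**(2**n) candidate
    functions. Only feasible for small n (n <= 4 runs in seconds)."""
    m = 1 << n  # number of input points {0,1}^n, each encoded as a bitmask
    # x <= y componentwise exactly when x & y == x
    pairs = [(x, y) for x in range(m) for y in range(m) if x & y == x]
    count = 0
    for f in range(1 << m):  # each integer f encodes one Boolean function
        # monotone: f(x) = 1 and x <= y must imply f(y) = 1
        if all(not (f >> x & 1) or (f >> y & 1) for x, y in pairs):
            count += 1
    return count

print([dedekind(n) for n in range(5)])  # [2, 3, 6, 20, 168]
```

The doubly exponential blow-up in candidates is exactly why each new Dedekind number takes decades: already at n = 9 the naive search space has 2^512 functions, so the actual computation relies on heavy symmetry reductions and supercomputer time.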
A US national lab has started training a massive AI brain that could ultimately become the must-have computing resource for scientific researchers.
Argonne National Laboratory (ANL) is creating a generative AI model called AuroraGPT, training it on a vast body of scientific information.
The model is being trained on its Aurora supercomputer, which delivers more than half an exaflop of performance at ANL. The system has Intel’s Ponte Vecchio GPUs, which provide the main computing power.
The world’s most valuable chip maker has announced a next-generation processor for AI and high-performance computing workloads, due for launch in mid-2024. A new exascale supercomputer, designed specifically for large AI models, is also planned.
H200 Tensor Core GPU. Credit: NVIDIA
In recent years, California-based NVIDIA Corporation has played a major role in the progress of artificial intelligence (AI), as well as high-performance computing (HPC) more generally, with its hardware being central to astonishing leaps in algorithmic capability.
In this article we look at several introductions of digital storage related products at the 2023 Supercomputing Conference.
WDC was also showing its hybrid storage JBOD Ultrastar Data102 and Data60 platforms to support disaggregated storage and software-defined storage (SDS). These come in dual-port SAS or single-port SATA configurations. The Data102 has storage capacities up to 2.65PB and the Data60 up to 1.56PB in a 4U enclosure that includes IsoVibe and ArcticFlow technologies for improved performance and reliability. The Data102 and Data60 capacity numbers assume the use of 26TB SMR HDDs.
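The quoted capacities follow directly from the drive-bay counts. A quick sanity check, assuming 26 TB drives and decimal storage units (1 PB = 1000 TB) as vendors use:

```python
# Raw capacity = number of drive bays x drive capacity, assuming the
# 26 TB SMR HDDs mentioned above and decimal units (1 PB = 1000 TB).
drive_tb = 26
for name, bays in [("Data102", 102), ("Data60", 60)]:
    print(f"{name}: {bays * drive_tb / 1000:.2f} PB raw")
# Data102: 2.65 PB raw
# Data60: 1.56 PB raw
```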
WDC was also showing a GPUDirect Storage proof of concept combining the company’s RapidFlex technology with the Ingrasys ES2100 with integrated NVIDIA Spectrum Ethernet switches, as well as NVIDIA GPUs, Magnum IO GPUDirect Storage, BlueField DPUs, and ConnectX SmartNICs. The proof-of-concept demonstration can provide 25GB/s of bandwidth for a single NVIDIA A100 Tensor Core GPU and over 100GB/s for four NVIDIA A100 GPUs.
At SC23, Arcitecta and DDN introduced software-defined storage solutions for AI and cloud applications. WDC was also showing SDS, its OpenFlex NVMe storage, and GPUDirect Storage.
The only AI hardware startup to realize revenue exceeding $100M has finished the first phase of the Condor Galaxy 1 AI supercomputer with partner G42 of the UAE. Other Cerebras customers are sharing their CS-2 results at Supercomputing ‘23, building momentum for the inventor of wafer-scale computing. This company is on a tear.
Four short months ago, Cerebras announced the most significant deal any AI startup has been able to assemble with partner G42 (Group42), an artificial intelligence and cloud computing company. The eventual 256 CS-2 wafer-scale nodes with 36 Exaflops of AI performance will be one of the world’s largest AI supercomputers, if not the largest.
Cerebras has now finished the first data center implementation and started on the second. These two companies are moving fast to capitalize on the $70B (2028) gold rush to stand up Large Language Model services to researchers and enterprises, especially while the supply of NVIDIA H100 remains difficult to obtain, creating an opportunity for Cerebras. In addition, Cerebras has recently announced it has released the largest Arabic Language Model, the Jais30B with Core42 using the CS-2, a platform designed to make the development of massive AI models accessible by eliminating the need to decompose and distribute the problem.