
TACC’s “Horizon” Supercomputer Sets The Pace For Academic Science

As we expected, the “Vista” supercomputer that the Texas Advanced Computing Center installed last year as a bridge between its current “Stampede-3” and “Frontera” production systems and its future “Horizon” system coming next year was indeed a precursor of the architecture that TACC has chosen for the Horizon machine.

What TACC does – and doesn’t do – matters because, as the flagship datacenter for academic supercomputing at the National Science Foundation, the center sets the pace for HPC organizations that need to embrace AI and that have not only large jobs that require an entire system to run (the hallmark of so-called capability-class machines) but also a wide diversity of smaller jobs that need to be stacked up and pushed through the system (making it also a capacity-class system). As the prior six major supercomputers installed at TACC aptly demonstrate, you can have the best of both worlds, although you do have to make different architectural choices (based on technology and economics) to accomplish what is arguably a tougher set of goals.

Some details of the Horizon machine were revealed at the SC25 supercomputing conference last week, which we have been mulling over, but there are still a lot of things that we don’t know. The Horizon that will be fired up in the spring of 2026 is a bit different from what we expected, with the big change being a downshift from an expected 400 petaflops of peak FP64 floating point performance to 300 petaflops. TACC has not explained the difference, but it might have something to do with the increasing costs of GPU-accelerated systems. As far as we know, the budget for the Horizon system, which was set in July 2024 and which includes facilities rental from Sabey Data Centers as well as other operational costs, is still $457 million. (We are attempting to confirm this as we write, but in the wake of SC25 and ahead of the Thanksgiving vacation, it is hard to reach people.)

Polymathic: Simulation is one of the cornerstone tools of modern science and engineering

Using simulation-based techniques, scientists can ask how their ideas, actions, and designs will interact with the physical world. Yet this power is not without costs: cutting-edge simulations can often take months of supercomputer time. Surrogate models and machine learning are promising alternatives for accelerating these workflows, but the data hunger of machine learning has limited their impact to data-rich domains. Over the last few years, researchers have sought to sidestep this data dependence through the use of foundation models: large models pretrained on large amounts of data that can accelerate the learning process by transferring knowledge from similar inputs. This, however, is not without its own challenges.
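For readers unfamiliar with the surrogate-model idea mentioned above, here is a minimal sketch in Python: a toy closed-form function stands in for an expensive solver, and a simple polynomial ridge regression is fit to a few hundred solver runs so that later queries can skip the solver entirely. The toy two-parameter problem, the function names, and the choice of regressor are illustrative assumptions, not Polymathic's actual models or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an expensive solver: maps two input parameters to a
# scalar quantity of interest. A real simulation could take hours to months
# per call; this closed form only illustrates the workflow.
def expensive_simulation(theta):
    x, y = theta
    return np.sin(3.0 * x) * np.exp(-(y ** 2)) + 0.5 * x * y

# 1. Run the real solver on a modest design of experiments.
train_inputs = rng.uniform(-1.0, 1.0, size=(200, 2))
train_outputs = np.array([expensive_simulation(t) for t in train_inputs])

# 2. Fit a cheap surrogate: ridge regression on polynomial features.
def poly_features(x):
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2,
                            x1 ** 2, x2 ** 2, x1 ** 3, x2 ** 3,
                            x1 ** 2 * x2, x1 * x2 ** 2])

A = poly_features(train_inputs)
lam = 1e-3  # ridge regularization strength
coef = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ train_outputs)

# 3. Query the surrogate instead of the solver for new parameter settings.
test_inputs = rng.uniform(-1.0, 1.0, size=(1000, 2))
pred = poly_features(test_inputs) @ coef
truth = np.array([expensive_simulation(t) for t in test_inputs])
print("surrogate RMS error:", np.sqrt(np.mean((pred - truth) ** 2)))
```

In a real workflow the regression would typically be replaced by a neural network or Gaussian process trained on far more costly simulation outputs, which is exactly where the data-hunger problem described above appears.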

A New Bridge Links the Strange Math of Infinity to Computer Science

All of modern mathematics is built on the foundation of set theory, the study of how to organize abstract collections of objects. But in general, research mathematicians don’t need to think about it when they’re solving their problems. They can take it for granted that sets behave the way they’d expect, and carry on with their work.

Descriptive set theorists are an exception. This small community of mathematicians never stopped studying the fundamental nature of sets — particularly the strange infinite ones that other mathematicians ignore.

Their field just got a lot less lonely. In 2023, a mathematician named Anton Bernshteyn published a deep and surprising connection between the remote mathematical frontier of descriptive set theory and modern computer science.

Golden Fractal Jubilee: 50 Years of Bridging Art and Science

We investigate the artistic patterns generated by the pouring technique made famous by Jackson Pollock. To determine if poured patterns can be distinguished based on the artist’s age, we apply computer analysis techniques to paintings created under controlled conditions by children (four to six years old) and adults (18–25 years old) pouring fluid paint onto horizontal sheets of paper. Both groups of paintings display a high visual complexity due to the multi-scaled paint structure generated by the pouring process. However, the two groups demonstrate statistically significant differences when this structure is quantified using both multifractal and lacunarity analysis. Whereas the multifractal analysis probes the scaling characteristics of the patterns, lacunarity quantifies clustering in their spatial distributions. We find that the children’s paintings are characterized by smaller fractal dimensions (indicating a reduced contribution of fine structure) and by larger lacunarity parameters (indicating a larger clustering of this fine structure) compared to the adult paintings. We compare these results to those of two famous poured works by Jackson Pollock and Max Ernst as a preliminary step to investigating the potential origins of the fractal and lacunarity variations across artists, which include motions related to biomechanical balance. Finally, to examine the impact on audiences, we ask observers to rate their perceptions of the paintings. These ratings indicate a rise in interest and pleasantness for paintings with lower fractal dimensions and larger lacunarity.
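As a rough illustration of the two measurements the abstract relies on, the sketch below estimates a box-counting fractal dimension and a gliding-box lacunarity for a binary image. The random test image stands in for a thresholded scan of a painting and the box sizes are arbitrary; this is not the authors' analysis code, only a minimal Python rendering of the standard definitions (occupied-box counts across scales for the dimension, and the ratio of the second moment to the squared first moment of box masses for lacunarity).

```python
import numpy as np

def box_counting_dimension(mask, box_sizes):
    """Estimate the box-counting (fractal) dimension of a binary image."""
    counts = []
    for s in box_sizes:
        # Trim so the image tiles exactly into s x s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        tiles = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(tiles.any(axis=(1, 3)).sum())  # boxes touching paint
    # The dimension estimate is the slope of log N(s) against log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes, dtype=float)),
                          np.log(counts), 1)
    return slope

def gliding_box_lacunarity(mask, r):
    """Gliding-box lacunarity at scale r: <M^2> / <M>^2 over box masses M."""
    # Box masses for every r x r window, via a 2D cumulative-sum table.
    table = np.zeros((mask.shape[0] + 1, mask.shape[1] + 1))
    table[1:, 1:] = np.cumsum(np.cumsum(mask, axis=0), axis=1)
    masses = (table[r:, r:] - table[:-r, r:]
              - table[r:, :-r] + table[:-r, :-r])
    return masses.var() / masses.mean() ** 2 + 1.0

# Random toy pattern standing in for a thresholded scan of a painting.
rng = np.random.default_rng(1)
image = rng.random((512, 512)) < 0.1
print("box-counting dimension:", box_counting_dimension(image, [2, 4, 8, 16, 32, 64]))
print("lacunarity at r=16:", gliding_box_lacunarity(image, 16))
```

Read against the abstract, a lower dimension estimate indicates less fine-scale structure and a higher lacunarity indicates more clustered structure, which is the distinction drawn there between the children's and the adults' paintings.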

The interface between art and science has grown over the past three decades with the advent of statistical analysis of the visual characteristics of art works. Although such studies now encompass a broad range of artistic styles, substantial research has been devoted to paintings generated by pouring paint onto the canvas rather than by using traditional brush contact. A number of twentieth-century artists pursued this technique, including the European Surrealists [1], the Canadian Les Automatistes [2], and the American Abstract Expressionists [3]. The latter featured the most famous proponent of the ‘pouring’ technique, Jackson Pollock [4].

Celebrated as Action Painting, these poured works serve as records of the artists’ encounters with their canvases. In Pollock’s case, this encounter involved him painting in the three-dimensional space above the canvas and then letting gravity condense the fluid paint onto the two-dimensional plane of the canvas laid out across the floor. This dynamic process often unfolded at frantic painting speeds, inviting speculation from art critics and the public alike as to whether it is possible to control the pouring technique. Perhaps all artists are instead destined to generate haphazard records of their encounters with the canvas. This debate has been fueled by the lack of traditional compositional strategies displayed in typical poured works — no center of focus, no left or right, and no up or down [3, 4].

Early experiments in accelerating science with GPT-5

Most strikingly, the paper claims four genuinely new mathematical results, carefully verified by the human mathematicians involved. In a discipline where truth is eternal and progress is measured in decades, an AI contributed novel insights that helped settle previously unsolved problems. The authors stress these contributions are “modest in scope but profound in implication”—not because they’re minor, but because they represent a proof of concept. If GPT-5 can do this now, what comes next?

The paper carries an undercurrent of urgency: many scientists still don’t realize what’s possible. The authors are essentially saying, “Look, this is already working for us—don’t get left behind.” Yet they avoid boosterism, emphasizing the technology’s current limitations as clearly as its strengths.

What we’re learning from collaborations with scientists.

The science of consciousness

Humans know they exist, but how does “knowing” work? Despite all that’s been learned about brain function and the bodily processes it governs, we still don’t understand where the subjective experiences associated with brain functions originate.

A new interdisciplinary project seeks to find answers to these kinds of big questions around consciousness, a fundamental yet elusive phenomenon.

The MIT Consciousness Club is co-led by philosopher Matthias Michel, the Old Dominion Career Development Professor in the Department of Linguistics and Philosophy, and Earl Miller, the Picower Professor of Neuroscience in the Department of Brain and Cognitive Sciences.
