Introduction
When Oregon State University’s Kyle Niemeyer, associate professor of mechanical engineering, arrived at the Salishan Lodge on the Oregon coast this spring, he found himself in a rare position for an academic: one of only a handful of university researchers in a room dominated by the architects of the world’s most powerful supercomputers.
The Salishan Conference on High-Speed Computing is an invitation‑only meeting that has quietly shaped the trajectory of high‑performance computing (HPC) for more than four decades. Founded in 1981 and organized primarily by a committee drawn from the U.S. Department of Energy’s “tri‑labs” — Los Alamos, Lawrence Livermore, and Sandia national laboratories — the conference convenes about 150 experts each year for off‑the‑record, candid discussions about where computing is headed and what comes next.
It is not a conference most faculty ever encounter. In fact, Niemeyer is only the second scholar from an Oregon university to be invited as a speaker in the past decade, a reflection of how closely Salishan is tied to the national‑lab ecosystem and industry giants like Google, Intel, and NVIDIA rather than academia.
“I didn’t even know the meeting existed until they reached out,” Niemeyer said. The invitation came through a web of connections forged by years of research collaborations and participation in Department of Energy-funded programs focused on computational science. Those links eventually landed him on the Salishan program — not to talk about computer hardware, but about what all that computing power can realistically accomplish.
Burning through exascale — and hitting its limits
Niemeyer’s invited talk, “Burning Through Exascale: Reactive Flow Simulation, AI Surrogates, and the Road to Combustion Digital Twins,” tackled a question that sits at the intersection of physics, computing, and hype: What can today’s fastest supercomputers actually do for combustion modeling, and what can’t they do?
Combustion is a notoriously demanding problem for simulation, involving interactions among turbulent fluid flow, chemistry, and heat transfer across many scales. Despite decades of progress and the arrival of exascale machines capable of a billion billion calculations per second, Niemeyer’s message was blunt.
“Even with the most powerful systems we have or will have soon, some of the simulations people would love to run are still impossible,” he said.
Among the simulations that remain out of reach: fully resolving a gas‑turbine combustor with no simplifying assumptions, predicting the rocket engine instabilities that still cause catastrophic failures, and modeling next‑generation internal combustion engines running on alternative fuels. On paper, exascale computing promised breakthroughs. In practice, many of those problems would still take centuries of computing time.
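To make that scale concrete, a rough back-of-the-envelope estimate shows how quickly “centuries” arrives even at a billion billion operations per second. The numbers below are illustrative placeholders, not figures from Niemeyer’s talk:

```python
# Back-of-the-envelope estimate: how long would a hypothetical, fully resolved
# combustion simulation take on an exascale machine? All numbers here are
# illustrative placeholders, not figures from Niemeyer's talk.

PEAK_FLOPS = 1e18            # exascale: a billion billion operations per second
SUSTAINED_FRACTION = 0.1     # assume roughly 10% of peak is achieved in practice
SECONDS_PER_YEAR = 3.15e7

total_operations = 1e28      # hypothetical cost of the full, unsimplified problem

runtime_years = total_operations / (PEAK_FLOPS * SUSTAINED_FRACTION) / SECONDS_PER_YEAR
print(f"Estimated runtime: {runtime_years:,.0f} years")  # roughly 3,000 years
```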
That reality also shaped Niemeyer’s response to one of the most talked‑about topics at Salishan: artificial intelligence.
“Generative AI doesn’t really help you model combustion,” he said. “Language models don’t know physics.”
While AI can be useful inside constrained, physics‑informed frameworks or as part of surrogate modeling approaches, Niemeyer argued that it cannot replace first‑principles simulations. In fact, AI models depend on massive amounts of high‑quality data that only traditional HPC simulations can generate.
“There’s no shortcut,” he said. “AI systems are useless without the data, and those data come from large‑scale physics‑based computation.”
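The surrogate models in his talk title follow a common pattern: train a fast machine-learning model on data generated by expensive physics-based simulations, then use it in place of the costly calculation. The sketch below illustrates the general idea only; the “physics” function is a toy placeholder, and this is not code from Niemeyer’s group or the DOE programs mentioned here:

```python
# Minimal sketch of the surrogate-modeling idea: train a cheap ML model on data
# produced by an expensive physics calculation, then use it as a fast stand-in.
# "expensive_physics" is a toy placeholder, not a real combustion model.
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_physics(temperature, pressure):
    """Toy stand-in for a costly chemistry/flow computation."""
    return np.exp(-1000.0 / temperature) * pressure

# Training data that, in practice, would come from large physics-based HPC runs.
rng = np.random.default_rng(seed=0)
T = rng.uniform(800.0, 2000.0, size=5000)
P = rng.uniform(1.0, 40.0, size=5000)
X = np.column_stack([T, P])
y = expensive_physics(T, P)

# The surrogate: a small neural network that learns the input-to-output map.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X, y)

# Evaluating the trained surrogate is far cheaper than rerunning the physics.
print(surrogate.predict([[1500.0, 10.0]]))
```

The catch, as Niemeyer noted, is the training data itself: without those large simulation campaigns, there is nothing for a surrogate to learn from.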
A candid look at HPC’s future
That mix of realism and technical depth resonated at Salishan, where conversation often extends well beyond prepared talks. Many presentations leaned heavily toward computer systems, architectures, and operations, making Niemeyer’s application‑focused talk stand out.
“I had people asking me to keep talking,” he said. “They wanted to understand what’s actually possible, not just what sounds impressive.”
Beyond combustion, several broader themes emerged. One was the growing diversity of computing architectures. The era of a single, general‑purpose supercomputer is fading, replaced by increasingly heterogeneous systems optimized for specific workloads such as simulation, data analysis, and AI.
Another recurring topic was AI’s effect on the workforce. Discussions were candid, touching on fears that coding agents could erode early‑career pathways for engineers and researchers. The consensus, Niemeyer said, was that these tools amplify expertise rather than replace it, but only in the hands of people who already understand the science.
“Give these tools to someone who doesn’t know what they’re doing, and it actually makes them worse,” he said.
Research and computing at Oregon State
Back in Corvallis, Niemeyer is deeply involved in efforts to strengthen Oregon State’s computational research ecosystem. His own work spans reactive flow modeling, GPU‑accelerated simulation, and open‑source scientific software, often in collaboration with national laboratories and DOE programs. He is a key contributor to CEMeNT and CARRE, OSU‑led, DOE‑funded Predictive Science Academic Alliance Program centers focused on developing exascale‑ready simulation tools and applying high‑performance computing to problems in particle transport and radiation effects, respectively.
He also plays a role in training the next generation of researchers through courses and short summer workshops on research software engineering, a skill that is increasingly essential as science and computation become inseparable.
More broadly, Niemeyer sees OSU sitting at an important inflection point for high‑performance computing.
In recent years, the university has moved toward a more centralized approach to research computing, recognizing that fragmented, underutilized systems are inefficient and limit ambition. That shift is embodied in the forthcoming NVIDIA supercomputer to be installed in the Huang Collaborative Innovation Complex, which is expected to dramatically expand campus‑wide computing capacity.
While the system won’t compete with leadership-class national lab machines, such as El Capitan and Frontier, Niemeyer believes it will be transformative for certain classes of problems and act as a catalyst for community building around computational research.
“We have a lot of faculty doing HPC‑driven work, but historically they’ve had to run it elsewhere,” he said. “This gives us a stronger base to build from.”
Ultimately, Niemeyer’s experience at Salishan underscored both how far high‑performance computing has come and how much remains unresolved.
“The future is going to be messier and more diverse than people expect,” he said. “But if we’re honest about the limits — and invest in people as much as machines — that’s where real progress happens.”