ESRL Quarterly Newsletter - Fall 2009

Speeding Up Science

Cheap, powerful processors designed for life-like video games improve NOAA weather, climate models

ESRL’s Mark Govett felt a bit out of place at a conference in California last fall. He was surrounded by hundreds of video game developers and players, few older than 30. Gaggles of teenagers were in attendance, drawn by a gaming contest involving incredibly realistic electronic simulations of people.

Mark Govett

Govett, 52, has worked for 24 years at NOAA, steadily improving the computer systems used to predict weather and climate. The corners of his eyes wrinkle when he smiles.

Like everyone else at last year’s GPU Technology Conference in San Jose, Govett wanted to learn more about the latest innovations in computer processing. For most attendees, those innovations would lead to better games. Govett and a handful of others were more interested in how GPUs—graphics processing units—might improve science.

“We are getting to the point where parallel processors based on CPUs cannot do what we need,” Govett said. An experimental weather and climate model now in development at ESRL, for example, will eventually require about 200,000 CPU cores to spit out operational forecasts, he estimated. Weather forecasting requires speed: Useful forecast models must run at about 2 percent of real time (a one-day forecast finished in roughly half an hour, for example). ESRL’s fastest machine has about 5,000 cores and occupies about 15 closet-sized racks. The space and energy requirements of such banks of machines are becoming onerous. “They’re going to require small power plants to run our models,” Govett said. “Clearly, we need another way.”

GPUs may not be the only answer, he said: There are other types of fast, co-processing chips. But for now, Govett and colleagues believe GPUs are one of the most promising. The processors are staggeringly powerful and cheap, and they require far less energy and space than CPU-based computing systems. Govett’s small team in the Global Systems Division is now leading NOAA’s investigation into GPUs, following in the footsteps of ESRL Director Sandy MacDonald. MacDonald helped revolutionize NOAA’s computing power 18 years ago, with parallel processors that were many times less expensive—and faster—than Crays and other vector computers of the time.

Results from a sample FIM run, overlaid on Google Earth. Icosahedrons, the computational unit in FIM, are visible. Colors indicate predicted temperature on the 250-km resolution grid shown. Graphic by Jeff Smith and Evan Polster in the Global Systems Division.

Powering the improvements in GPUs is the video game industry. In just one quarter of 2008, GPU manufacturers around the world shipped out more than 110 million units, according to Jon Peddie Research, an industry consulting firm. “Science is riding on the back of this,” Govett said.

A GPU differs from a conventional CPU in the structure of the chip. Rather than a few heavyweight cores, a GPU runs thousands of lightweight threads in small batches across many simple processing cores, drawing on both fast on-chip memory and slower main memory to maximize performance. The programming trick, Govett said, is to take advantage of the fast memory, so computations are never waiting in line.
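
The fast-memory idea is easier to see in code. The CUDA kernel below is a minimal sketch, not ESRL’s model code; the three-point averaging stencil and every name in it are illustrative assumptions. Each thread block copies its slice of a field, plus one halo point on each side, into fast on-chip shared memory before computing, so the stencil’s repeated neighbor reads never have to wait on slow global memory.

    // Illustrative CUDA kernel (not ESRL code): stage data in fast shared memory.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    #define TILE 256   // threads per block

    __global__ void smooth(const float *in, float *out, int n)
    {
        __shared__ float tile[TILE + 2];                    // fast on-chip memory, with halo cells
        int gid = blockIdx.x * blockDim.x + threadIdx.x;    // global grid index
        int lid = threadIdx.x + 1;                          // local index, shifted past left halo

        // One slow global-memory read per thread, staged into the fast tile.
        tile[lid] = (gid < n) ? in[gid] : 0.0f;
        if (threadIdx.x == 0)                               // left halo cell (clamped at the edge)
            tile[0] = (gid > 0) ? in[gid - 1] : in[0];
        if (threadIdx.x == blockDim.x - 1)                  // right halo cell (clamped at the edge)
            tile[lid + 1] = (gid + 1 < n) ? in[gid + 1] : in[n - 1];
        __syncthreads();                                    // wait until the whole tile is loaded

        // The three reads of the averaging stencil now come from fast memory.
        if (gid < n)
            out[gid] = (tile[lid - 1] + tile[lid] + tile[lid + 1]) / 3.0f;
    }

    int main()
    {
        const int n = 1 << 20;                              // about a million grid points
        size_t bytes = n * sizeof(float);
        float *h = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) h[i] = (float)i;

        float *d_in, *d_out;
        cudaMalloc(&d_in, bytes);
        cudaMalloc(&d_out, bytes);
        cudaMemcpy(d_in, h, bytes, cudaMemcpyHostToDevice);

        smooth<<<(n + TILE - 1) / TILE, TILE>>>(d_in, d_out, n);
        cudaMemcpy(h, d_out, bytes, cudaMemcpyDeviceToHost);
        printf("out[1] = %f\n", h[1]);                      // expect 1.0 for this input

        cudaFree(d_in);
        cudaFree(d_out);
        free(h);
        return 0;
    }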

“Vendors have been saying that GPUs have 100 times the performance of CPUs,” Govett said. “We asked, To what extent can we take advantage of that?”

Until recently, it wasn’t feasible to even try. To program GPUs, researchers would have had to write low-level, machine-specific code, which was impractical. “People knew the promise was huge, but we couldn’t work with them yet,” Govett said.

Then in 2007, a major GPU manufacturer made public its programming language, CUDA, which is similar to C. Most scientists still program in Fortran, so for most academic researchers it was too difficult to convert their codes to CUDA—it would take too long to get results. But Govett’s background is in computer languages, and he was able to write translational software that does 95 percent of the work automatically. “That has allowed us to make progress faster than anyone else,” he said.
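
To give a flavor of the translation involved (a hypothetical illustration, not the actual output of Govett’s software; the variable and kernel names are invented), a simple Fortran model loop maps onto a CUDA kernel in which every grid point becomes one lightweight GPU thread:

    // Hypothetical Fortran-to-CUDA translation; all names here are illustrative.
    // The Fortran source loop:
    //
    //     do k = 1, nz
    //       do i = 1, nx
    //         t(i,k) = t(i,k) + dt * tend(i,k)
    //       end do
    //     end do
    //
    // becomes a kernel in which each (i,k) grid point is handled by its own thread:

    __global__ void update_t(float *t, const float *tend, float dt, int nx, int nz)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // horizontal grid index
        int k = blockIdx.y * blockDim.y + threadIdx.y;   // vertical level index
        if (i < nx && k < nz) {
            int idx = k * nx + i;        // i varies fastest, matching Fortran's column-major layout
            t[idx] += dt * tend[idx];    // same arithmetic as the original loop body
        }
    }

    // Host-side launch covering the whole nx-by-nz grid:
    //     dim3 block(32, 8);
    //     dim3 grid((nx + block.x - 1) / block.x, (nz + block.y - 1) / block.y);
    //     update_t<<<grid, block>>>(d_t, d_tend, dt, nx, nz);

The arithmetic is untouched; what a translator automates is the bookkeeping of thread indices, memory placement, and kernel launches.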

His team purchased a 16-node GPU system from NVIDIA last year. Now, Govett, Craig Tierney, Leslie Hart, Tom Henderson, and Jacques Middelcoff are converting model code to CUDA and optimizing it for high performance on the GPU system.

The initial results have been incredibly exciting, Govett said. Running part of FIM—the Flow-following finite-volume Icosahedral Model, ESRL’s experimental weather and climate model being used to improve hurricane prediction—was 15 to 20 times faster on the GPU node than on the CPU, Tierney reported nine months ago.

Now Govett has a large segment of the next-generation NIM model (Non-hydrostatic Icosahedral Model) running on a GPU. NIM is an even more computationally intensive model that is expected to run at resolutions of 3-4 km within two years.

“To date, no global model has been run at such high resolution in real time because the compute requirements are so large,” Govett said. “It may not even be possible to run the NIM at this fine scale in real time without something like a GPU.” In recent experiments, the code has run 25 times faster on the GPU than on the CPU. “I’m just so excited about this that I’m working on it nights and weekends,” Govett said.

He acknowledged that there remain uncertainties in the use of GPUs for high-performance computing. It’s not entirely clear yet, for example, how efficiently GPUs will perform when linked together in parallel. “But we believe we have good strategies for dealing with that issue,” Govett said.
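
The concern is easiest to see in a sketch. In a domain-decomposed model spread across many GPUs, every time step the boundary (“halo”) strips must hop from a GPU to its host machine, across the network, and onto the neighboring GPU. The code below is purely illustrative and is not ESRL’s strategy: the function and variable names are assumptions, and it simply pairs standard MPI calls with CUDA memory copies to show where the extra hops appear.

    // Illustrative halo exchange between neighboring MPI ranks, one GPU per rank.
    #include <mpi.h>
    #include <cuda_runtime.h>

    void exchange_halos(const float *d_send_strip,    // device pointer: boundary strip to send
                        float *d_recv_strip,          // device pointer: ghost region to fill
                        float *h_send, float *h_recv, // host staging buffers
                        int halo_bytes, int left, int right, MPI_Comm comm)
    {
        // Step 1: copy this rank's boundary strip off the GPU into host memory.
        cudaMemcpy(h_send, d_send_strip, halo_bytes, cudaMemcpyDeviceToHost);

        // Step 2: trade strips with the neighboring ranks over the network.
        MPI_Sendrecv(h_send, halo_bytes, MPI_BYTE, right, 0,
                     h_recv, halo_bytes, MPI_BYTE, left,  0,
                     comm, MPI_STATUS_IGNORE);

        // Step 3: push the neighbor's strip back onto this GPU's ghost region.
        cudaMemcpy(d_recv_strip, h_recv, halo_bytes, cudaMemcpyHostToDevice);
    }

Each of those hops is communication time that the GPU’s raw arithmetic speed cannot hide, which is why scaling across many linked GPUs remains the open question.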

He posted his translational software online for free a few months ago, and users around the world have downloaded it. An industry group has offered a similar product, and Govett’s team plans to run tests comparing the efficiency of the two programs.

“GPUs are seen now as the next revolutionary advance in computing, and NOAA is looking to our laboratory to do this kind of research,” Govett said. “In terms of atmospheric science, we’ve come further, faster, with GPUs than anyone else out there. It’s very exciting to be working at ESRL on this.”
