News & Events

  • Man Invents New Language for Turning Graphics Chips Into Supercomputers

    Date: 07/02/2013

    A graphics processing unit, which processes so much more than graphics. Photo: Flickr/Wimox

    GPU stands for graphics processing unit, but these tiny chips can be used for much more than just graphics. Google is using GPUs to model the human brain, and Salesforce leans on them as a way of analyzing data streaming across Twitter feeds. They’re particularly suited to what’s known as parallel processing, where thousands of tasks are executed at the same time.

    The trick is that you have to build new software that’s specifically designed to tap into these chips. But a computer science Ph.D. candidate at Indiana University wants to help with that. He just released a new programming language called Harlan dedicated to building applications that run on GPUs. “GPU programming still requires the programmer to manage a lot of low-level details that often distract them from the core of what they’re trying to do,” says Eric Holk. “We wanted a system that could manage these details for the programmer, letting them be more productive and still getting good performance from the GPU.”

    The vast majority of your computer’s calculations are handled by the central processing unit, or CPU. A CPU handles a single sequence of computations, called a thread, at one time, executing it as quickly as possible. A GPU is designed to process multiple threads at once. Those threads are executed more slowly, but a program can be designed to take advantage of parallelism to actually run faster than a program that executes one thread at a time — much like a supercomputer.
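    That execution model can be sketched in ordinary Python, with worker threads standing in for GPU threads. This is only an analogy (the `scale` function and the eight-element input are illustrative, and Python threads are nothing like GPU hardware), but it captures the key property: each piece of work is independent of the rest.

```python
from concurrent.futures import ThreadPoolExecutor

def scale(x):
    # Each call is independent: no thread needs another thread's result.
    # That independence is what lets a GPU run thousands of these at once.
    return 2 * x

data = list(range(8))

# CPU-style: a single thread walks the sequence one element at a time.
serial = [scale(x) for x in data]

# GPU-style: each element is handed to its own worker thread.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(scale, data))
```

    Both versions produce [0, 2, 4, 6, 8, 10, 12, 14]; the difference lies entirely in how the work is scheduled.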

    Although CPUs — such as the multicore processors popular today — can do parallelism, they are still generally optimized for running single-thread operations, Holk explains.

    The term GPU wasn’t coined until 1999, but the earliest video processing chips were introduced in the 1970s and 1980s, according to a paper on the history of GPU architecture by Chris McClanahan of Georgia Tech. Those early chips still relied heavily on the CPU for graphics processing, offloading only part of the job, but graphics cards became more popular and powerful in the 1990s with the advent of 3-D graphics.

    “The evolution of GPU hardware architecture has gone from a specific single core, fixed function hardware pipeline implementation made solely for graphics, to a set of highly parallel and programmable cores for more general purpose computation,” McClanahan wrote. “The trend in GPU technology has no doubt been to keep adding more programmability and parallelism to a GPU core architecture that is ever evolving towards a general purpose more CPU-like core.”

    He argues that the CPU and GPU will eventually merge. In the meantime, developers are taking advantage of increasingly powerful and flexible GPUs for a variety of applications, from modeling physical systems to beefing up smartphones. Companies ranging from music startup Shazam to online image processing outfit Ingix rely on them as well. Amazon even offers GPU processing as a cloud service.

    “GPUs also have much higher memory bandwidth than CPUs, so they work better for doing relatively simple computations on large amounts of data,” Holk explains.
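    A rough back-of-the-envelope calculation shows why bandwidth dominates for simple operations. The numbers below describe a generic double-precision vector addition, not any particular chip:

```python
# Vector addition of double-precision floats: per element, the hardware
# reads two 8-byte operands, writes one 8-byte result, and performs a
# single addition.
bytes_per_element = 3 * 8   # 24 bytes of memory traffic
flops_per_element = 1

# Arithmetic intensity: useful operations per byte moved. At roughly
# 0.04 flops/byte, such a loop spends nearly all of its time waiting on
# memory, so the chip with the wider memory bus wins.
intensity = flops_per_element / bytes_per_element
```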

    There are other languages for GPU programming, including CUDA and OpenCL; in fact, Harlan compiles to OpenCL. But unlike those languages, Harlan provides programming abstractions more commonly associated with higher-level languages such as Python and Ruby.
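    The practical difference can be sketched as follows. The `gpu_map` helper here is hypothetical (it is not Harlan, CUDA, or OpenCL API), and it simply runs on the CPU; the point is what a higher-level language hides from the programmer.

```python
# What a low-level GPU API makes the programmer spell out, step by step
# (shown as comments rather than real OpenCL calls):
#   1. allocate a device buffer and copy the input to it
#   2. compile the kernel source and bind its arguments
#   3. pick a launch configuration and enqueue the kernel
#   4. copy the result buffer back to the host

def gpu_map(f, xs):
    # Hypothetical stand-in for a high-level construct: a real
    # implementation would hide steps 1-4 behind this one call.
    return [f(x) for x in xs]

# With the bookkeeping hidden, the computation reads as one expression,
# much as it would in Python or Ruby.
doubled = gpu_map(lambda x: 2 * x, [1, 2, 3, 4])
```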

    “Another goal of Harlan was to answer the question ‘What would we do if we started from scratch on a language and designed it from the beginning to support GPU programming?’” he says. “Most of the systems so far embed GPU programming in an existing language, which means you have to handle all the quirks of the host language. Harlan lets us make the best decisions for our target hardware and applications.”

    Harlan’s syntax is based on Scheme, a dialect of the influential programming language Lisp, which was created by artificial intelligence researcher John McCarthy in 1958. “It’s the ancestor for every good language,” Yukihiro “Matz” Matsumoto, creator of the Ruby programming language, once told SiliconAngle.

    “[Indiana University] has a rich tradition of using Scheme for its programming language work, and so we had a lot of experience writing compilers with Scheme,” Holk says. “Originally, we imagined a more C-like language, but given that we were doing so much in Scheme anyway, it made sense to evolve Harlan to be more Scheme-like.”

    But for those looking for a more “normal” programming language for doing GPU work, Holk has also been working on Rust, a programming language created by Mozilla, specifically designed for developing systems that operate at a low level, near the hardware layer. Earlier this year, he published a paper on using Rust for GPU processing.

    “Rust is concerned with making sure programmers have a sense of how their program maps to the underlying hardware,” Holk explains. Harlan, by contrast, is concerned with transforming the code a programmer writes into the most efficient code possible.

    “Harlan could potentially generate better GPU code, although the code that actually runs may not have as much resemblance to what the programmer wrote,” he says. “Harlan is about pushing the limits of what’s possible, while the Rust on GPUs work is about applying those ideas in a more practical language.”

    The original article is available here.