In trying to figure out what GPUs can be used for in the future, it seems instructive to start by looking at what they currently are being used for.  What follows is a pretty boring and dry, but I believe important, overview of the types of things GPUs are being used for (beyond graphics). My source data is the NVIDIA CUDA site.

Disclaimer: NVIDIA isn't the only GPU vendor that supports general-purpose processing, but it seems to have the greatest current traction. Also, the user-submitted content on the NVIDIA site certainly has its biases, but it has info on over 500 applications, so it's an interesting source for real-world examples.

High-Level Categorization

Below is a rough categorization of different types of applications. It isn't comprehensive, or even accurately categorized, but should give you a high-level feel for what is going on.

  • Numerical / Scientific computation
    • Computational Fluid Dynamics
    • Signal Processing
    • Computational Chemistry
    • Neural Networks
    • Cryptography
    • Genetic Programming
    • Algorithms
      • Linear algebra
      • Linear optimization
      • Sparse matrix vector product
      • Gaussian mixture models
      • Stochastic differential equations
      • Fourier transforms
      • k Nearest Neighbor
      • 3D Particle Boltzmann solver
      • Parallel sorting
      • List ranking
      • Traveling salesman problem
  • Imaging
    • Medical Imaging
      • Image reconstruction
      • Image compression
    • Other
      • Ray tracing
      • Holography
  • Oil & Gas exploration
  • Finance
  • Hybrid physics / visualization
  • Gaming

Examples

The above list may give you a general notion of the types of issues, but let me dig in to a few to give you some deeper insight.

Scientific Computation: Computational Fluid Dynamics

A great way to get a sense for what scientific computing is about is to look at the Wikipedia entry for CFD.  Below is an extended excerpt. Skimming it should give you a good flavor.

Computational fluid dynamics (CFD) is one of the branches of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the millions of calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. Even with high-speed supercomputers only approximate solutions can be achieved in many cases.

...

The most fundamental consideration in CFD is how one treats a continuous fluid in a discretized fashion on a computer. One method is to discretize the spatial domain into small cells to form a volume mesh or grid, and then apply a suitable algorithm to solve the equations of motion (Euler equations for inviscid, and Navier-Stokes equations for viscous flow). In addition, such a mesh can be either irregular (for instance consisting of triangles in 2D, or pyramidal solids in 3D) or regular; the distinguishing characteristic of the former is that each cell must be stored separately in memory. Where shocks or discontinuities are present, high resolution schemes such as Total Variation Diminishing (TVD), Flux Corrected Transport (FCT), Essentially Non-Oscillatory (ENO), or MUSCL schemes are needed to avoid spurious oscillations (Gibbs phenomenon) in the solution.

If one chooses not to proceed with a mesh-based method, a number of alternatives exist, notably:

It is possible to directly solve the Navier-Stokes equations for laminar flows and for turbulent flows when all of the relevant length scales can be resolved by the grid (a Direct numerical simulation). In general however, the range of length scales appropriate to the problem is larger than even today's massively parallel computers can model. In these cases, turbulent flow simulations require the introduction of a turbulence model. Large eddy simulations (LES) and the Reynolds-averaged Navier-Stokes equations (RANS) formulation, with the k-ε model or the Reynolds stress model, are two techniques for dealing with these scales.

In many instances, other equations are solved simultaneously with the Navier-Stokes equations. These other equations can include those describing species concentration (mass transfer), chemical reactions, heat transfer, etc. More advanced codes allow the simulation of more complex cases involving multi-phase flows (e.g. liquid/gas, solid/gas, liquid/solid), non-Newtonian fluids (such as blood), or chemically reacting flows (such as combustion).
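To make the excerpt's "discretize the spatial domain into small cells" idea concrete, here is a minimal sketch of a regular-grid, explicit finite-difference update. It is in Python rather than CUDA, and it solves the 1-D heat equation rather than the full Navier-Stokes equations; the grid size, time step, and diffusion coefficient are made-up illustration values.

```python
def step(u, alpha, dx, dt):
    """One explicit finite-difference update of the 1-D heat equation
    du/dt = alpha * d2u/dx2 on a regular grid with fixed endpoints."""
    r = alpha * dt / dx**2          # stability requires r <= 0.5
    new = u[:]                      # copy; the endpoints stay fixed
    for i in range(1, len(u) - 1):
        new[i] = u[i] + r * (u[i-1] - 2*u[i] + u[i+1])
    return new

# A hot spike in the middle of a cold rod diffuses outward.
u = [0.0] * 11
u[5] = 100.0
for _ in range(50):
    u = step(u, alpha=1.0, dx=1.0, dt=0.25)
```

Note that each cell's new value depends only on its immediate neighbors. That kind of independent, data-parallel work is exactly what maps well onto GPUs: on a real grid, one GPU thread can update each cell.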

Imaging: Tomographic Reconstruction

"Tomography is imaging by sections or sectioning." For instance, when you take a CT-scan, you are taking lots of individual slices of a picture, and then you need to put the data together. "Reconstruction" is the process of putting these different slices together.

Here's what's cool. There is a trade-off between how much data you capture (the number and detail of the slices) and the computation required to reconstruct an image. You can capture less data (and hence have the patient spend less time strapped into a CT scanner, or process more patients), but then it might take days to process that data. With GPU computing you get the best of both worlds: fast scanning and fast reconstruction. Thus, GPGPU is significantly changing what is possible.
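To show what "putting the slices together" means computationally, here is a toy sketch of unfiltered backprojection, the core "smearing" step of reconstruction. It uses only two orthogonal projection angles (row sums and column sums) of a 2x2 image; a real CT scanner uses hundreds of angles plus a filtering step, so this is purely illustrative:

```python
def backproject(row_sums, col_sums):
    """Naive (unfiltered) backprojection: smear each projection value
    back across the line it was measured along, and sum the smears.
    Every output pixel is computed independently, which is why this
    step parallelizes so well on GPUs."""
    n = len(row_sums)
    return [[row_sums[r] + col_sums[c] for c in range(n)]
            for r in range(n)]

# Two orthogonal "scans" of a tiny 2x2 image.
image = [[1, 0],
         [2, 3]]
row_sums = [sum(row) for row in image]                             # [1, 5]
col_sums = [sum(image[r][c] for r in range(2)) for c in range(2)]  # [3, 3]
recon = backproject(row_sums, col_sums)   # [[4, 4], [8, 8]]
```

Even this blurry toy reconstruction puts more weight on the bottom row, where the original image was brighter. With many angles, a filtering step, and one GPU thread per pixel, the same pattern of independent per-pixel sums recovers the image quickly.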

There is a great video that talks about one example of this: http://fastra.ua.ac.be/en/index.html

Video Enhancement / Cleanup

This is such a no-brainer. Someday this will be standard.  Check it out: http://www.vreveal.com/video_demos

Performance Improvement

I'll be shocked if anyone's made it this far. I know I'd have quit. But at the risk of burying the lead, here's the cool part.

The performance improvement demonstrated with some of these CUDA applications (i.e., GPGPU apps) is pretty remarkable. Some show modest improvements of 3x to 10x. Not bad, but not revolutionary. Many, however, show speedups in the 30x to 40x range. And these are compared to apps often already optimized for CPUs. And some algorithms or apps show speedups in the 100x to 300x range. That's obviously amazing.  (Though, per Amdahl's Law, if the accelerated algorithm is only a small portion of the total computation time, that isn't that helpful.)
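That Amdahl's Law caveat is worth quantifying. Here is a quick calculation (the fractions and speedups are illustrative, not taken from any of the NVIDIA-listed apps):

```python
def overall_speedup(parallel_fraction, kernel_speedup):
    """Amdahl's Law: the overall speedup when only `parallel_fraction`
    of the original runtime benefits from a `kernel_speedup`x boost."""
    return 1.0 / ((1.0 - parallel_fraction)
                  + parallel_fraction / kernel_speedup)

# A 300x kernel speedup applied to half the runtime: barely 2x overall.
half = overall_speedup(0.5, 300)    # ~1.99
# The same 300x applied to 95% of the runtime: ~18.8x overall.
most = overall_speedup(0.95, 300)
```

So a 300x algorithm speedup only translates into a revolutionary application speedup when the accelerated portion dominates the total runtime.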

Conclusion

Not surprisingly, the majority of the effort in GPGPU to date has been in hard-core scientific and mathematical computations. These are the areas that lend themselves to parallel computing of floating-point operations, the problems have been studied and worked on for years, and the jump to GPUs is obvious (though difficult). Yet the performance improvements can be remarkable.

I still believe my original thesis: that this sort of massive computing power will have impact for general business applications, and not just be relegated to traditional HPC-type problems.

