Craig C. Douglas
Yale University: Computer Science
on sabbatical at Texas A&M: Mathematics, Computer Science,
and the Institute for Scientific Computation
Wednesday, February 20, 2008
Mesa Laboratory, Main Seminar Room
The Long and Winding Road from Simple Algorithm Analysis to Large-Scale Data-Driven Multidisciplinary Research or
How to Use an Infinite Number of Cores (Processors)
Once upon a time, in a bygone era a long, long time ago, I developed algorithms for computers with only one central processing unit that filled a room (circa 1970s-1980s). Theorems could be proven showing the convergence of the algorithms, and work estimates based on the number of floating point multiplies actually gave a good estimate of the run time of computer programs. Multigrid algorithms, clever preconditioner tricks that reused partial calculations, and domain decomposition methods for problems better solved in pieces than as a whole were typical research topics of the era. That was then.
In the meantime, computers became far more complicated. Memory speeds did not keep pace with CPU clock increases, so the run time of many algorithms became proportional to the cost of a cache line miss in a hierarchical memory system. Worse, cache misses are not repeatable in general, so the run time of an algorithm is much harder to predict. I developed a number of hardware-assisted multigrid and iterative method algorithms that take advantage of the cache memories of computers and that work for partial differential equation based problems in any number of dimensions with either structured or unstructured meshes. A spectral element ocean model provided much of the motivation for this research.
In 2000 I decided that networks of sensors, data assimilation, and self-correcting long-running codes on as many processors as can be powered up and connected through a Grid or placed in a large warehouse (i.e., a forest of processors) could solve some long-standing problems of interest and provide interesting mathematical and computational science areas to work on for the rest of my career. Some examples include:
- Reservoir simulation
- Detecting blowup conditions in wells before a blowup occurs
- Contaminant backtracking to identify polluters
- Disaster management, e.g., wildland fires
- Pharmaceutical manufacturing problems
- Smart houses
- Rescheduling flights for very frequent fliers
I will briefly touch on some of these topics during my lecture.
Two variations of this talk will be given: one at NCAR on 2/20/2008 and another at the University of Wyoming on 2/21/2008.
This talk has an enormous number of co-authors, whom I will try to identify, area by area. The research has been supported in part by the National Science Foundation, the Department of Energy, CERFACS and the European Union, the IBM Research Division, my universities, www.MGNet.org, and www.DDDAS.org.