Proceedings of GIS/LIS'97
Parallel processing hardware is becoming increasingly affordable, with 4-processor, Pentium-based servers near the $10,000 price point. Prior research has shown that a number of routine GIS spatial analysis tasks can be performed in parallel with high efficiency. The purpose of this paper is to investigate one such application, the parallel computation of interpolated surfaces using the neighborhood-based, inverse-distance-weighted Clarke algorithm, and to examine the effects of different distributions of control points on compute time. These data distributions include: evenly spaced, random, loosely clustered, and tightly clustered (as measured by the nearest-neighbor statistic). Following a review of the non-parallel performance characteristics of the Clarke algorithm, the programming scheme used to achieve parallelism is outlined, and two strategies for subdividing the spatial interpolation task among the parallel processes are described. Tests carried out on a Silicon Graphics Power Challenge computer show that while parallel interpolation is effective at reducing run time, the choice of domain decomposition strategy significantly affects the amount of reduction when processing clustered data. Our results point to a generic strategy that can be used to apportion spatially inhomogeneous tasks in multiprocessing environments.
Journal Article Version: Version of Record
Published Article/Book Citation
Cramer, B. and Armstrong, M.P. Interpolation of spatially inhomogeneous data sets: An evaluation of parallel computation approaches. Proceedings of GIS/LIS'97. Bethesda, MD: American Congress on Surveying and Mapping.
Copyright © 1997 Barton E. Cramer and Marc P. Armstrong