Chunking of Large Multidimensional Arrays
Rotem, Doron
Otoo, Ekow J.
Seshadri, Sridhar
United States. Department of Energy. Office of Advanced Scientific Computing Research.
Lawrence Berkeley National Laboratory
2007-02-28
English
Data-intensive scientific computations, as well as on-line analytical processing applications, are performed on very large datasets that are modeled as k-dimensional arrays. Such arrays are stored on disk by partitioning the large global array into fixed-size hyper-rectangular sub-arrays called chunks or tiles, which form the units of data transfer between disk and memory. Typical queries retrieve sub-arrays and must access every chunk that overlaps the query result. An important metric of storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is: "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest-descent and geometric-programming methods. Experimental results, using synthetic workloads on real-life data sets, show that our chunking is much more efficient than the existing approximate solutions.
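The expected-retrieval metric the abstract describes can be sketched concretely. Under the common continuous approximation, a range query of extent q_i along dimension i overlaps about q_i/c_i + 1 chunks of extent c_i, so the expected count for a workload is a probability-weighted sum of products over dimensions. The function names, the workload format, and the brute-force search below are illustrative assumptions, not the paper's method; the paper derives exact solutions analytically via steepest descent and geometric programming rather than by enumeration.

```python
import itertools
import math

def expected_chunks(chunk_shape, workload):
    """Expected number of chunks a random range query overlaps.

    Continuous approximation: a query of extent q along a dimension
    with chunk extent c touches about q/c + 1 chunks.
    `workload` is a list of (probability, query_shape) pairs.
    """
    total = 0.0
    for prob, q in workload:
        touched = 1.0
        for qi, ci in zip(q, chunk_shape):
            touched *= qi / ci + 1.0
        total += prob * touched
    return total

def best_chunk_shape(capacity, dims, workload):
    """Brute-force the chunk shape of fixed volume `capacity`
    (using only divisors of `capacity` per side) that minimizes the
    expected number of chunks touched. Illustrative only."""
    divisors = [d for d in range(1, capacity + 1) if capacity % d == 0]
    best, best_cost = None, math.inf
    for shape in itertools.product(divisors, repeat=dims):
        if math.prod(shape) != capacity:
            continue  # keep only shapes with the required chunk volume
        cost = expected_chunks(shape, workload)
        if cost < best_cost:
            best, best_cost = shape, cost
    return best, best_cost
```

For example, a 2-D workload consisting only of 32x2 queries with a chunk capacity of 64 cells is best served by 32x2 chunks, i.e., chunks elongated the same way as the queries, matching the intuition that optimal chunk shapes should track the query workload.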
Efficiency
Processing
Programming
Mathematical Models
Multi-Dimensional Arrays; Algorithm; Array Chunking
99
Exact Solutions
Storage; Multi-Dimensional Arrays; Algorithm; Array Chunking
Metrics
Report
Text
rep-no: LBNL-63230
grantno: DE-AC02-05CH11231
doi: 10.2172/927033
osti: 927033
https://digital.library.unt.edu/ark:/67531/metadc896932/
ark: ark:/67531/metadc896932