5.2. Chunk Size Selection
The chunk size you select for each dimension plays an important role in how efficiently you can query your data. A chunk size that is too large or too small will degrade performance.
To optimize performance of your SciDB array, you want each chunk to contain roughly 10 to 20 MB of data. So, for example, if your data set consists entirely of double-precision numbers, you would want each chunk to contain somewhere between 1.25 and 2.5 million elements (assuming 8 bytes for every double-precision number).
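As an illustrative sketch (the array and attribute names here are invented, not from the manual), the following schema defines a two-dimensional array of doubles whose chunks each span 1,000 values of i and 2,000 values of j. That gives 1,000 x 2,000 = 2,000,000 cells per chunk, or about 16 MB at 8 bytes per value, within the 10 to 20 MB guideline:

    AQL% CREATE ARRAY example_doubles
             <value : double>
             [ i=0:99999,1000,0, j=0:199999,2000,0 ];

Each dimension declaration gives the dimension range, the chunk length, and the chunk overlap (zero here), so the 100,000 x 200,000 array is divided into 100 x 100 chunks.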
When SciDB stores a multi-attribute array, each attribute is stored in its own set of chunks, a process known as vertical partitioning. Keep this in mind when you are choosing a chunk size: the size of an individual cell, or the number of attributes per cell, does not determine the total chunk size. Rather, the number of cells per chunk is the figure to use when determining chunk size. For arrays where every dimension has a fixed number of cells and every cell has a value, you can do a straightforward calculation to find the correct chunk size, as the worked example below shows.
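For instance, consider a hypothetical two-attribute array (again, the names are illustrative) holding a double and a float. Because the attributes are vertically partitioned, each attribute occupies its own chunks, so the same 2,000,000-cell chunk produces a different physical chunk size for each attribute:

    AQL% CREATE ARRAY example_pair
             <reading : double, quality : float>
             [ i=0:99999,1000,0, j=0:199999,2000,0 ];

    Per chunk: 1,000 x 2,000 = 2,000,000 cells
      reading (double, 8 bytes): 2,000,000 x 8 = 16 MB per chunk
      quality (float, 4 bytes):  2,000,000 x 4 =  8 MB per chunk

A practical rule of thumb that follows from this: size the cell count so that the chunks of your widest attribute land in the 10 to 20 MB range.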
When the density of the data in a data set is highly skewed, that is, when the data is not evenly distributed along array dimensions, the calculation of chunk size becomes more difficult, particularly when you do not know at array creation time how skewed the data is. In this case, you may want to use the repartitioning functionality of SciDB to change the chunk size as necessary, as sketched below. Repartitioning an array is explained in Chapter 10, Changing Array Schemas.
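As a brief sketch only (see Chapter 10 for the authoritative syntax), the AFL repart() operator rewrites an array with new chunk lengths. Here the hypothetical example_doubles array from above is repartitioned to 500 x 1,000 chunks, quartering the cells per chunk, which could be appropriate if the data turned out to be much sparser than expected:

    AFL% repart(example_doubles,
                <value : double>
                [ i=0:99999,500,0, j=0:199999,1000,0 ]);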