@@ -131,7 +131,13 @@ lot better if this were integrated into `io.fits` directly. For example, somethi
 
 ``` python
 hdulist = fits.open(filepath, use_dask=True)
-hdulist[0].data
+hdulist
+```
+```
+[<astropy.io.fits.hdu.image.DaskPrimaryHDU object at 0x7f1824914c10>, <astropy.io.fits.hdu.table.DaskCompImageHDU object at 0x7f1824914ee0>]
+```
+``` python
+hdulist[1].data
 ```
 ```
 dask.array<reshape, shape=(4, 490, 1000, 2560), dtype=float64, chunksize=(1, 1, 1000, 2560), chunktype=numpy.ndarray>
@@ -148,3 +154,22 @@ cases integrated into `io.fits`, and in the process to document any future
 improvements that could be made to enhance performance.
 
 ### Approximate Budget
+
+For part one, we would request a total of 150h @ $150/hour broken up into:
+
+* 100h for removing the cfitsio dependency of `CompImageHDU`. This includes
+  reimplementing or adapting the C implementation of the tile (de)compression
+  algorithms, and modifying `CompImageHDU` to use the new astropy versions of
+  these algorithms rather than passing all the data to `cfitsio` for loading.
+* 50h for implementing a new lazy-loading property on `CompImageHDU` which
+  allows users to load parts of the whole compressed image array without having
+  to load and decompress the whole array.
+
+For part two, we request another 100h @ $150/hour to implement prototype loaders
+for FITS Image HDUs making use of Dask. At the end of part two we expect to have
+at least one approach to efficiently loading FITS data into Dask, suitable for
+large-scale parallelisation, and a plan for how these prototypes could be
+improved upon and merged into astropy core.
+
+Our minimal budget is the 150h requested for part one, which would mean leaving
+all the Dask work to another proposal.
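The lazy-loading property proposed for `CompImageHDU` can be illustrated with a small, self-contained sketch. Everything here is hypothetical, not the proposed astropy API: `zlib` stands in for the FITS tile-compression codecs, and `compress_tiles`/`lazy_cutout` are made-up helper names. The point is the tile-intersection logic, which decompresses only the tiles that overlap a requested cutout:

``` python
import zlib

import numpy as np

TILE = 4  # tile edge length, kept tiny for illustration

def compress_tiles(image):
    """Split `image` into TILE x TILE tiles and zlib-compress each one."""
    tiles = {}
    for r in range(0, image.shape[0], TILE):
        for c in range(0, image.shape[1], TILE):
            tiles[(r, c)] = zlib.compress(image[r:r + TILE, c:c + TILE].tobytes())
    return tiles

def lazy_cutout(tiles, rows, cols):
    """Return image[rows, cols], decompressing only the tiles it touches."""
    out = np.empty((rows.stop - rows.start, cols.stop - cols.start))
    for r in range(rows.start - rows.start % TILE, rows.stop, TILE):
        for c in range(cols.start - cols.start % TILE, cols.stop, TILE):
            tile = np.frombuffer(zlib.decompress(tiles[(r, c)])).reshape(TILE, TILE)
            # intersection of this tile with the requested window
            row0, col0 = max(rows.start, r), max(cols.start, c)
            row1, col1 = min(rows.stop, r + TILE), min(cols.stop, c + TILE)
            out[row0 - rows.start:row1 - rows.start,
                col0 - cols.start:col1 - cols.start] = \
                tile[row0 - r:row1 - r, col0 - c:col1 - c]
    return out

image = np.arange(64.0).reshape(8, 8)  # toy float64 image
tiles = compress_tiles(image)          # stored form: four compressed tiles
cutout = lazy_cutout(tiles, slice(1, 3), slice(1, 3))
assert np.array_equal(cutout, image[1:3, 1:3])  # only 1 of the 4 tiles is decompressed
```

The same per-tile access pattern is what would let a Dask-backed `CompImageHDU` serve individual chunks without ever materialising the full decompressed array.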