chunked reading of files #34

@GMoncrieff

Description

Currently, reading files with emit_xarray from emit_tools.py loads data into a numpy-backed xr.Dataset. An option to read into a chunked dask.array-backed xr.Dataset would help prevent out-of-memory errors when reading on machines with limited memory (loading failed on an 8 GB SMCE machine) and could also speed up downstream operations via dask.

Adding chunks='auto' to

```python
ds = xr.open_dataset(filepath, engine=engine)
```

works when ortho=False but not when ortho=True.
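To illustrate the behavior being requested, here is a minimal self-contained sketch. It builds a small toy dataset (a hypothetical stand-in for an EMIT granule; the dimension names and a temporary NetCDF file are assumptions, not the real emit_tools workflow), writes it to disk, and reopens it with chunks="auto" so the variables come back as lazy dask arrays instead of in-memory numpy arrays:

```python
import os
import tempfile

import numpy as np
import xarray as xr

# Hypothetical small stand-in for an EMIT granule; in the real workflow the
# file would be opened via emit_xarray from emit_tools.py.
ds = xr.Dataset(
    {
        "reflectance": (
            ("downtrack", "crosstrack", "bands"),
            np.random.rand(10, 10, 5).astype("float32"),
        )
    }
)

# Round-trip through a temporary NetCDF file so open_dataset has a real
# file to read (requires a NetCDF backend such as netcdf4 or scipy).
path = os.path.join(tempfile.mkdtemp(), "demo.nc")
ds.to_netcdf(path)

# chunks="auto" asks xarray to wrap each variable in a dask array with
# automatically chosen chunk sizes, so data is only loaded on compute.
lazy = xr.open_dataset(path, chunks="auto")

# The variable's backing array should now be a dask array, not numpy.
print(type(lazy["reflectance"].data))
```

This is the numpy-vs-dask distinction the issue describes: with chunks="auto" nothing is read into memory until a computation forces it, which is what fails today on the ortho=True code path.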
