@@ -43,7 +43,7 @@ __ http://www.unidata.ucar.edu/software/netcdf/
.. _netCDF FAQ: http://www.unidata.ucar.edu/software/netcdf/docs/faq.html#What-Is-netCDF

Reading and writing netCDF files with xarray requires scipy or the
- `netCDF4-Python`__ library to be installed (the later is required to
+ `netCDF4-Python`__ library to be installed (the latter is required to
read/write netCDF V4 files and use the compression options described below).
__ https://github.com/Unidata/netcdf4-python
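
A minimal round-trip sketch (the file name is a placeholder)::

    import xarray as xr

    ds = xr.Dataset({"t": ("x", [1.0, 2.0, 3.0])})
    ds.to_netcdf("saved_on_disk.nc")  # netCDF4 is used when available, scipy otherwise
    ds_disk = xr.open_dataset("saved_on_disk.nc")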
@@ -241,7 +241,7 @@ See its docstring for more details.
.. note::

A common use-case involves a dataset distributed across a large number of files with
- each file containing a large number of variables. Commonly a few of these variables
+ each file containing a large number of variables. Commonly, a few of these variables
need to be concatenated along a dimension (say ``"time"``), while the rest are equal
across the datasets (ignoring floating point differences). The following command
with suitable modifications (such as ``parallel=True``) works well with such datasets::
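
    # a sketch with placeholder paths and dimension names; see the docstring
    # of open_mfdataset for the full set of options
    xr.open_mfdataset(
        "my/files/*.nc",
        concat_dim="time",
        data_vars="minimal",
        coords="minimal",
        compat="override",
    )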
@@ -298,8 +298,8 @@ library::
combined = read_netcdfs('/all/my/files/*.nc', dim='time')
This function will work in many cases, but it's not very robust. First, it
- never closes files, which means it will fail one you need to load more than
- a few thousands file. Second, it assumes that you want all the data from each
+ never closes files, which means it will fail if you need to load more than
+ a few thousand files. Second, it assumes that you want all the data from each
file and that it can all fit into memory. In many situations, you only need
a small subset or an aggregated summary of the data from each file.
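
One way to address both problems is a sketch along these lines, which closes each
file after loading it and lets the caller subset or aggregate before concatenation
(the ``transform_func`` hook and the paths are illustrative)::

    from glob import glob

    import xarray as xr


    def read_netcdfs(files, dim, transform_func=None):
        def process_one_path(path):
            # use a context manager, so the file is closed after reading
            with xr.open_dataset(path) as ds:
                # transform_func can subset or aggregate each dataset
                if transform_func is not None:
                    ds = transform_func(ds)
                # load all remaining values, since the file will be closed
                ds.load()
                return ds

        paths = sorted(glob(files))
        datasets = [process_one_path(p) for p in paths]
        return xr.concat(datasets, dim)


    # e.g. reduce each file to a mean before combining
    combined = read_netcdfs(
        "/all/my/files/*.nc", dim="time", transform_func=lambda ds: ds.mean()
    )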
@@ -351,7 +351,7 @@ default encoding, or the options in the ``encoding`` attribute, if set.
This works perfectly fine in most cases, but encoding can be useful for
additional control, especially for enabling compression.
- In the file on disk, these encodings as saved as attributes on each variable, which
+ In the file on disk, these encodings are saved as attributes on each variable, which
allow xarray and other CF-compliant tools that work with netCDF files to correctly
read the data.
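
For example, the applied encoding can be inspected after reading a file back in
(a sketch, assuming a previously written file)::

    ds_disk = xr.open_dataset("saved_on_disk.nc")
    ds_disk["t"].encoding  # e.g. {'dtype': dtype('float64'), '_FillValue': nan, ...}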
@@ -364,7 +364,7 @@ These encoding options work on any version of the netCDF file format:
or ``'float32'``. This controls the type of the data written on disk.
- ``_FillValue``: Values of ``NaN`` in xarray variables are remapped to this value when
saved on disk. This is important when converting floating point with missing values
- to integers on disk, because ``NaN`` is not a valid value for integer dtypes. As a
+ to integers on disk, because ``NaN`` is not a valid value for integer dtypes. By
default, variables with float types are attributed a ``_FillValue`` of ``NaN`` in the
output file, unless explicitly disabled with an encoding ``{'_FillValue': None}``.
- ``scale_factor`` and ``add_offset``: Used to convert from encoded data on disk to
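
A sketch of supplying these options at write time (the variable name and packing
values are placeholders)::

    ds.to_netcdf(
        "packed.nc",
        encoding={
            "t": {
                "dtype": "int16",
                "scale_factor": 0.1,
                "add_offset": 10.0,
                "_FillValue": -9999,
            }
        },
    )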
@@ -406,8 +406,8 @@ If character arrays are used:
by setting the ``_Encoding`` field in ``encoding``. But
`we don't recommend it <http://utf8everywhere.org/>`_.
- The character dimension name can be specified by the ``char_dim_name`` field of a variable's
- ``encoding``. If this is not specified the default name for the character dimension is
- ``'string%s' % data.shape[-1]``. When decoding character arrays from existing files, the
+ ``encoding``. If the name of the character dimension is not specified, the default is
+ ``f'string{data.shape[-1]}'``. When decoding character arrays from existing files, the
``char_dim_name`` is added to the variable's ``encoding`` so that it is preserved if the data is encoded again, but
the field can be edited by the user.
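
A sketch of choosing the character dimension name at write time (the variable,
file, and dimension names are placeholders)::

    ds = xr.Dataset({"station": ("x", ["WMO12345", "WMO67890"])})
    ds.to_netcdf(
        "stations.nc",
        encoding={"station": {"dtype": "S1", "char_dim_name": "name_strlen"}},
    )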
@@ -506,7 +506,7 @@
The Iris_ tool allows easy reading of common meteorological and climate model formats
(including GRIB and UK MetOffice PP files) into ``Cube`` objects which are in many ways very
similar to ``DataArray`` objects, while enforcing a CF-compliant data model. If iris is
- installed xarray can convert a ``DataArray`` into a ``Cube`` using
+ installed, xarray can convert a ``DataArray`` into a ``Cube`` using
:py:meth:`DataArray.to_iris`:
.. ipython:: python
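
    # a sketch, assuming iris is installed: round-trip a DataArray through a Cube
    da = xr.DataArray([0, 1, 2], dims="x", name="example")
    cube = da.to_iris()
    da_roundtrip = xr.DataArray.from_iris(cube)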
@@ -716,7 +716,7 @@ require external libraries and dicts can easily be pickled, or converted to
json, or geojson. All the values are converted to lists, so dicts might
be quite large.
- To export just the dataset schema, without the data itself, use the
+ To export just the dataset schema without the data itself, use the
``data=False`` option:
.. ipython:: python
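
    # a sketch: keep only the schema (dims, attrs, dtype and shape), not the values
    ds_dict = ds.to_dict(data=False)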
@@ -772,7 +772,7 @@ for an example of how to convert these to longitudes and latitudes.
.. warning::
This feature has been added in xarray v0.9.6 and should still be
- considered as being experimental. Please report any bug you may find
+ considered experimental. Please report any bugs you may find
on xarray's GitHub repository.
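
A minimal sketch of opening a GeoTIFF (the path is a placeholder)::

    rds = xr.open_rasterio("path/to/image.tif")
    rds.coords  # x, y and band coordinates parsed from the file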
@@ -828,7 +828,7 @@ GDAL readable raster data using `rasterio`_ as well as for exporting to a geoTIF
Zarr
----
- `Zarr`_ is a Python package providing an implementation of chunked, compressed,
+ `Zarr`_ is a Python package that provides an implementation of chunked, compressed,
N-dimensional arrays.
Zarr has the ability to store arrays in a range of ways, including in memory,
in files, and in cloud-based object storage such as `Amazon S3`_ and
@@ -846,7 +846,7 @@ At this time, xarray can only open zarr datasets that have been written by
xarray. For implementation details, see :ref:`zarr_encoding`.
To write a dataset with zarr, we use the :py:attr:`Dataset.to_zarr` method.
- To write to a local directory, we pass a path to a directory
+ To write to a local directory, we pass a path to a directory:
.. ipython:: python
:suppress:
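
    # a sketch: write to a local directory store (the path is a placeholder)
    ds.to_zarr("path/to/directory.zarr")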
@@ -1045,7 +1045,7 @@ formats supported by PseudoNetCDF_, if PseudoNetCDF is installed.
PseudoNetCDF can also provide Climate Forecasting Conventions to
CMAQ files. In addition, PseudoNetCDF can automatically register custom
readers that subclass PseudoNetCDF.PseudoNetCDFFile. PseudoNetCDF can
- identify readers heuristically, or format can be specified via a key in
+ identify readers either heuristically, or by a format specified via a key in
`backend_kwargs`.
To use PseudoNetCDF to read such files, supply ``engine='pseudonetcdf'`` to :py:func:`open_dataset`.
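
A sketch of such a call (the file name and ``format`` value are placeholders)::

    camx = xr.open_dataset(
        "example.uamiv", engine="pseudonetcdf", backend_kwargs={"format": "uamiv"}
    )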