
Commit af71c84

Fix grammar and typos in the Reading and writing files guide (#4553)
* Fix grammar and typos in io.rst
* Add whats-new entry
1 parent ac77b24 commit af71c84

2 files changed (+16, -14 lines)

doc/io.rst

Lines changed: 14 additions & 14 deletions
@@ -43,7 +43,7 @@ __ http://www.unidata.ucar.edu/software/netcdf/
 .. _netCDF FAQ: http://www.unidata.ucar.edu/software/netcdf/docs/faq.html#What-Is-netCDF
 
 Reading and writing netCDF files with xarray requires scipy or the
-`netCDF4-Python`__ library to be installed (the later is required to
+`netCDF4-Python`__ library to be installed (the latter is required to
 read/write netCDF V4 files and use the compression options described below).
 
 __ https://github.com/Unidata/netcdf4-python
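
For context, the passage above covers basic netCDF reading and writing. A minimal sketch of the round trip, assuming a hypothetical file ``example.nc``:

    import xarray as xr

    # scipy handles netCDF3; the netCDF4 engine is required for netCDF V4
    # features such as the compression options described later in the guide.
    ds = xr.open_dataset("example.nc", engine="netcdf4")
    ds.to_netcdf("copy.nc")  # write the dataset back out
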
@@ -241,7 +241,7 @@ See its docstring for more details.
 .. note::
 
     A common use-case involves a dataset distributed across a large number of files with
-    each file containing a large number of variables. Commonly a few of these variables
+    each file containing a large number of variables. Commonly, a few of these variables
     need to be concatenated along a dimension (say ``"time"``), while the rest are equal
     across the datasets (ignoring floating point differences). The following command
     with suitable modifications (such as ``parallel=True``) works well with such datasets::
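
The command the note alludes to combines several ``open_mfdataset`` options. A sketch under the same assumptions (hypothetical glob pattern, ``"time"`` as the concatenated dimension):

    import xarray as xr

    # Variables that do not vary along "time" are read once ("minimal")
    # and taken from the first file ("override") rather than compared.
    combined = xr.open_mfdataset(
        "my/files/*.nc",
        concat_dim="time",
        combine="nested",
        data_vars="minimal",
        coords="minimal",
        compat="override",
        parallel=True,  # open files in parallel using dask
    )
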
@@ -298,8 +298,8 @@ library::
     combined = read_netcdfs('/all/my/files/*.nc', dim='time')
 
 This function will work in many cases, but it's not very robust. First, it
-never closes files, which means it will fail one you need to load more than
-a few thousands file. Second, it assumes that you want all the data from each
+never closes files, which means it will fail if you need to load more than
+a few thousand files. Second, it assumes that you want all the data from each
 file and that it can all fit into memory. In many situations, you only need
 a small subset or an aggregated summary of the data from each file.
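
The guide goes on to fix both problems. A sketch of the more robust pattern, which closes each file with a context manager and optionally reduces the data before concatenating:

    from glob import glob

    import xarray as xr


    def read_netcdfs(files, dim, transform_func=None):
        def process_one_path(path):
            # use a context manager so each file is closed after use
            with xr.open_dataset(path) as ds:
                # optionally select a subset or aggregate first
                if transform_func is not None:
                    ds = transform_func(ds)
                # load the (reduced) data so it outlives the closed file
                ds.load()
                return ds

        paths = sorted(glob(files))
        datasets = [process_one_path(p) for p in paths]
        return xr.concat(datasets, dim)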

@@ -351,7 +351,7 @@ default encoding, or the options in the ``encoding`` attribute, if set.
 This works perfectly fine in most cases, but encoding can be useful for
 additional control, especially for enabling compression.
 
-In the file on disk, these encodings as saved as attributes on each variable, which
+In the file on disk, these encodings are saved as attributes on each variable, which
 allow xarray and other CF-compliant tools for working with netCDF files to correctly
 read the data.
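
To illustrate, compression can be requested through ``encoding`` at write time (the variable name here is hypothetical):

    import xarray as xr

    ds = xr.open_dataset("example.nc")
    # zlib/complevel enable compression; this requires the netCDF4 engine
    ds.to_netcdf(
        "compressed.nc",
        encoding={"temperature": {"zlib": True, "complevel": 4}},
    )
    # after a round trip, the options appear in the variable's encoding
    print(xr.open_dataset("compressed.nc")["temperature"].encoding)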

@@ -364,7 +364,7 @@ These encoding options work on any version of the netCDF file format:
   or ``'float32'``. This controls the type of the data written on disk.
 - ``_FillValue``: Values of ``NaN`` in xarray variables are remapped to this value when
   saved on disk. This is important when converting floating point with missing values
-  to integers on disk, because ``NaN`` is not a valid value for integer dtypes. As a
+  to integers on disk, because ``NaN`` is not a valid value for integer dtypes. By
   default, variables with float types are attributed a ``_FillValue`` of ``NaN`` in the
   output file, unless explicitly disabled with an encoding ``{'_FillValue': None}``.
 - ``scale_factor`` and ``add_offset``: Used to convert from encoded data on disk to
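
A sketch of how these options combine to pack floats into integers on disk, assuming ``ds`` holds a float ``temperature`` variable (names and values hypothetical). On decode, xarray computes ``decoded = scale_factor * stored + add_offset`` and maps ``_FillValue`` back to ``NaN``:

    encoding = {
        "temperature": {
            "dtype": "int16",      # store 16-bit integers on disk
            "scale_factor": 0.01,  # decoded = 0.01 * stored + 273.15
            "add_offset": 273.15,
            "_FillValue": -9999,   # NaN placeholder; ints have no NaN
        }
    }
    ds.to_netcdf("packed.nc", encoding=encoding)
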
@@ -406,8 +406,8 @@ If character arrays are used:
   by setting the ``_Encoding`` field in ``encoding``. But
   `we don't recommend it <http://utf8everywhere.org/>`_.
 - The character dimension name can be specifed by the ``char_dim_name`` field of a variable's
-  ``encoding``. If this is not specified the default name for the character dimension is
-  ``'string%s' % data.shape[-1]``. When decoding character arrays from existing files, the
+  ``encoding``. If the name of the character dimension is not specified, the default is
+  ``f'string{data.shape[-1]}'``. When decoding character arrays from existing files, the
   ``char_dim_name`` is added to the variables ``encoding`` to preserve if encoding happens, but
   the field can be edited by the user.
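
For illustration, a hypothetical override of the character dimension name; without ``char_dim_name``, the 4-byte strings below would produce a dimension named ``string4``:

    import numpy as np

    import xarray as xr

    ds = xr.Dataset({"name": ("x", np.array([b"foo", b"barb"], dtype="S4"))})
    # dtype "S1" requests a character-array representation on disk
    ds.to_netcdf(
        "chars.nc",
        encoding={"name": {"dtype": "S1", "char_dim_name": "letters"}},
    )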

@@ -506,7 +506,7 @@ Iris
 The Iris_ tool allows easy reading of common meteorological and climate model formats
 (including GRIB and UK MetOffice PP files) into ``Cube`` objects which are in many ways very
 similar to ``DataArray`` objects, while enforcing a CF-compliant data model. If iris is
-installed xarray can convert a ``DataArray`` into a ``Cube`` using
+installed, xarray can convert a ``DataArray`` into a ``Cube`` using
 :py:meth:`DataArray.to_iris`:
 
 .. ipython:: python
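
A sketch of the conversion in both directions, assuming iris is installed:

    import numpy as np

    import xarray as xr

    da = xr.DataArray(
        np.random.rand(4, 3),
        dims=("x", "y"),
        coords={"x": [10, 20, 30, 40]},
        name="example",
    )
    cube = da.to_iris()                          # DataArray -> iris Cube
    da_roundtrip = xr.DataArray.from_iris(cube)  # Cube -> DataArray
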
@@ -716,7 +716,7 @@ require external libraries and dicts can easily be pickled, or converted to
 json, or geojson. All the values are converted to lists, so dicts might
 be quite large.
 
-To export just the dataset schema, without the data itself, use the
+To export just the dataset schema without the data itself, use the
 ``data=False`` option:
 
 .. ipython:: python
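
For example (with a small hypothetical dataset):

    import xarray as xr

    ds = xr.Dataset({"t": ("x", [1.0, 2.0, 3.0])}, attrs={"title": "demo"})
    d_full = ds.to_dict()              # values become (possibly large) lists
    d_schema = ds.to_dict(data=False)  # dims, attrs, dtypes and shapes only
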
@@ -772,7 +772,7 @@ for an example of how to convert these to longitudes and latitudes.
 .. warning::
 
     This feature has been added in xarray v0.9.6 and should still be
-    considered as being experimental. Please report any bug you may find
+    considered experimental. Please report any bugs you may find
     on xarray's github repository.
 

@@ -828,7 +828,7 @@ GDAL readable raster data using `rasterio`_ as well as for exporting to a geoTIF
 Zarr
 ----
 
-`Zarr`_ is a Python package providing an implementation of chunked, compressed,
+`Zarr`_ is a Python package that provides an implementation of chunked, compressed,
 N-dimensional arrays.
 Zarr has the ability to store arrays in a range of ways, including in memory,
 in files, and in cloud-based object storage such as `Amazon S3`_ and
@@ -846,7 +846,7 @@ At this time, xarray can only open zarr datasets that have been written by
 xarray. For implementation details, see :ref:`zarr_encoding`.
 
 To write a dataset with zarr, we use the :py:attr:`Dataset.to_zarr` method.
-To write to a local directory, we pass a path to a directory
+To write to a local directory, we pass a path to a directory:
 
 .. ipython:: python
     :suppress:
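
A sketch of the round trip with a hypothetical store name:

    import xarray as xr

    ds = xr.Dataset({"t": ("x", [1.0, 2.0, 3.0])})
    ds.to_zarr("example.zarr")              # write to a local directory store
    ds_back = xr.open_zarr("example.zarr")  # re-open the store lazily
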
@@ -1045,7 +1045,7 @@ formats supported by PseudoNetCDF_, if PseudoNetCDF is installed.
 PseudoNetCDF can also provide Climate Forecasting Conventions to
 CMAQ files. In addition, PseudoNetCDF can automatically register custom
 readers that subclass PseudoNetCDF.PseudoNetCDFFile. PseudoNetCDF can
-identify readers heuristically, or format can be specified via a key in
+identify readers either heuristically, or by a format specified via a key in
 `backend_kwargs`.
 
 To use PseudoNetCDF to read such files, supply
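
As a sketch (file name hypothetical), the format can be pinned explicitly instead of relying on the heuristics:

    import xarray as xr

    # the "uamiv" key selects PseudoNetCDF's CAMx/UAM-IV reader
    ds = xr.open_dataset(
        "example.uamiv",
        engine="pseudonetcdf",
        backend_kwargs={"format": "uamiv"},
    )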

doc/whats-new.rst

Lines changed: 2 additions & 0 deletions
@@ -70,6 +70,8 @@ Documentation
   By `Pieter Gijsbers <https://github.com/pgijsbers>`_.
 - Fix grammar and typos in the :doc:`contributing` guide (:pull:`4545`).
   By `Sahid Velji <https://github.com/sahidvelji>`_.
+- Fix grammar and typos in the :doc:`io` guide (:pull:`4553`).
+  By `Sahid Velji <https://github.com/sahidvelji>`_.
 
 Internal Changes
 ~~~~~~~~~~~~~~~~
