Merged
2 changes: 1 addition & 1 deletion docs/changelog_fragments/68.feat.rst
@@ -3,4 +3,4 @@ The :class:`ncdata.NcData` objects can be indexed with the ``[]`` operation, or
specified dimensions with the :meth:`~ncdata.NcData.slicer` method.
This is based on the new :meth:`~ncdata.utils.index_by_dimensions()` utility method
and :class:`~ncdata.utils.Slicer` class.
See: :ref:`indexing_overview`
See: :ref:`utils_indexing`
8 changes: 7 additions & 1 deletion docs/conf.py
@@ -93,7 +93,13 @@
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
exclude_patterns = [
"_build",
"Thumbs.db",
".DS_Store",
"changelog_fragments",
"details/api/modules.rst",
]


# -- Options for HTML output -------------------------------------------------
39 changes: 27 additions & 12 deletions docs/details/developer_notes.rst
@@ -11,12 +11,21 @@ A new change-note fragment file should be included in each PR, but is normally c
with a ``towncrier`` command-line command:

* in short form, with ``towncrier create --content "mynotes..." <ISSUE-num>.<category>.rst``
* ... or for longer forms, use ``towncrier create --edit``.
* Here, "<category>" is one of feat/doc/bug/dev/misc. Which are: user features;
bug fixes; documentation changes; general developer-relevant changes;
or "miscellaneous".

... or, for longer content, use ``towncrier create --edit``.

* Here, "<category>" is one of:

* "feat": user features
* "doc": documentation changes
* "bug": bug fixes
* "dev": general developer-relevant changes
* "misc": miscellaneous

(For reference, these categories are configured in ``pyproject.toml``).

* the fragment files are stored in ``docs/changelog_fragments``.

* N.B. for this to work well, every change should be identified with a matching github issue.
If there are multiple associated PRs, they should all be linked to the issue.
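
As an illustration of the naming rule, a hypothetical helper (not part of ncdata or towncrier) could validate a fragment filename of the form ``<ISSUE-num>.<category>.rst``:

```python
# Sketch: validate a changelog fragment filename of the form
# "<ISSUE-num>.<category>.rst" (categories as listed above).
# This helper is purely illustrative -- no such function exists in the repo.
import re

CATEGORIES = {"feat", "doc", "bug", "dev", "misc"}

def is_valid_fragment_name(filename):
    m = re.fullmatch(r"(\d+)\.(\w+)\.rst", filename)
    return bool(m) and m.group(2) in CATEGORIES

print(is_valid_fragment_name("68.feat.rst"))  # True
print(is_valid_fragment_name("68.def.rst"))   # False -- unknown category
```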

@@ -26,33 +35,36 @@ Documentation build

For a full docs-build:

* a simple ``$ make html`` will do for now
* The most useful way is simply ``$ cd docs`` and ``$ make html-keeplog``.
* Note: the plainer ``$ make html`` performs the same build, but "-keeplog" additionally
preserves the changelog fragments **and** reverts ``change_log.rst`` after the html build:
this stops you accidentally including a "built" changelog in further commits.
* The ``docs/Makefile`` wipes the API docs and invokes sphinx-apidoc for a full rebuild
* It also calls towncrier to clear out the changelog fragments + update ``docs/change_log.rst``.
This should be reverted before pushing your PR -- i.e. leave changenotes in the fragments.
* the results is then available at ``docs/_build/html/index.html``.
* (*assuming "-keeplog"*: fragments and ``change_log.rst`` are then reverted, undoing the towncrier build).
* the result is then available at ``docs/_build/html/index.html``.

.. note::

* the above is just for *local testing*, if required.
* the above is just for **local testing**, when required.
* For PRs (and releases), we also provide *automatic* builds on GitHub,
via `ReadTheDocs <https://readthedocs.org/projects/ncdata/>`_
via ReadTheDocs_.


Release actions
---------------

#. Update the :ref:`change-log page <change_log>` in the details section

#. ensure all major changes + PRs are referenced in the :ref:`change_notes` section.
#. start with ``$ towncrier build``

* The starting point for this is now just: ``$ towncrier build``.
#. ensure all major changes + PRs are referenced in the :ref:`change_notes` section.

#. update the "latest version" stated in the :ref:`development_status` section

#. Cut a release on GitHub

* this triggers a new docs version on `ReadTheDocs <https://readthedocs.org/projects/ncdata>`_.
* this triggers a new docs version on ReadTheDocs_.

#. Build the distribution

@@ -109,3 +121,6 @@

* wait a few hours...
* check that the new version appears in the output of ``$ conda search ncdata``


.. _ReadTheDocs: https://readthedocs.org/projects/ncdata
50 changes: 45 additions & 5 deletions docs/userdocs/user_guide/common_operations.rst
@@ -55,6 +55,19 @@ Example :

>>> dataset.variables["x"].avals["units"] = "m s-1"


There is also an :meth:`~ncdata.NameMap.addall` method, which adds multiple content
objects in one operation.

.. doctest:: python

>>> vars = [NcVariable(name) for name in ("a", "b", "c")]
>>> dataset.variables.addall(vars)
>>> list(dataset.variables)
['x', 'a', 'b', 'c']

.. _operations_rename:

Rename
------
A component can be renamed with the :meth:`~ncdata.NameMap.rename` method. This changes
@@ -67,6 +80,18 @@ Example :

>>> dataset.variables.rename("x", "y")

result:

.. doctest:: python

>>> print(dataset.variables.get("x"))
None
>>> print(dataset.variables.get("y"))
<NcVariable(<no-dtype>): y()
y:units = 'm s-1'
>


.. warning::
Renaming a dimension will not rename references to it (i.e. in variables), which
obviously may cause problems.
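
The pitfall can be sketched with plain dicts standing in for the ncdata objects (this is NOT the real ncdata API, just a schematic illustration):

```python
# Schematic stand-in for a dataset: dimensions and variables keyed by name,
# with each variable holding a list of dimension *names* it references.
dimensions = {"x": 7}
variables = {"v": {"dimensions": ["x"], "data": list(range(7))}}

# Renaming the dimension only changes the key in the dimensions map...
dimensions["y"] = dimensions.pop("x")

# ...so the variable still refers to the old name, which no longer exists.
dangling = [d for d in variables["v"]["dimensions"] if d not in dimensions]
print(dangling)  # ['x'] -- a reference to a dimension that no longer exists
```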
@@ -123,14 +148,29 @@ Equality Testing
----------------
We implement equality operations ``==`` / ``!=`` for all the core data objects.

However, simple equality testing on :class:`@ncdata.NcData` and :class:`@ncdata.NcVariable`
objects can be very costly if it requires comparing large data arrays.
.. doctest::

>>> vA = dataset.variables["a"]
>>> vB = dataset.variables["b"]
>>> vA == vB
False

.. doctest::

>>> dataset == dataset.copy()
True

.. warning::
Equality testing for :class:`~ncdata.NcData` and :class:`~ncdata.NcVariable` actually
calls the :func:`ncdata.utils.dataset_differences` and
:func:`ncdata.utils.variable_differences` utilities.

This can be very costly if it needs to compare large data arrays.

If you need to avoid comparing large (and possibly lazy) arrays then you can use the
:func:`ncdata.utils.dataset_differences` and
:func:`ncdata.utils.variable_differences` utility functions.
These functions also provide multiple options to enable more tolerant comparison,
such as allowing variables to have a different ordering.
:func:`ncdata.utils.variable_differences` utility functions directly instead.
These provide a ``check_var_data=False`` option, to ignore differences in data content.

See: :ref:`utils_equality`
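
The idea behind ``check_var_data=False`` can be sketched with plain dicts standing in for variable objects (a simplified illustration, not the ncdata implementation):

```python
# Sketch: compare two "variables" (plain dicts here, not real NcVariable
# objects), optionally skipping the potentially expensive data comparison.
def variable_differences(v1, v2, check_var_data=True):
    diffs = []
    if v1["name"] != v2["name"]:
        diffs.append(f"names differ: {v1['name']} != {v2['name']}")
    if check_var_data and v1["data"] != v2["data"]:
        diffs.append("data arrays differ")
    return diffs

a = {"name": "x", "data": [1, 2, 3]}
b = {"name": "x", "data": [9, 9, 9]}
print(variable_differences(a, b))                        # ['data arrays differ']
print(variable_differences(a, b, check_var_data=False))  # []
```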

34 changes: 18 additions & 16 deletions docs/userdocs/user_guide/data_objects.rst
@@ -186,7 +186,9 @@ However, for most operations on attributes, it is much easier to use the ``.aval
property instead. This accesses *the same attributes*, but in the form of a simple
"name: value" dictionary.

Thus for example, to fetch an attribute you would usually write just :
Get attribute value
^^^^^^^^^^^^^^^^^^^
For example, to fetch an attribute you would usually write just :

.. testsetup::

@@ -205,23 +207,15 @@ and **not** :

.. doctest:: python

>>> # WRONG: this reads an NcAttribute, not its value
>>> # WRONG: this gets the NcAttribute object, not its value
>>> unit = dataset.variables["x"].attributes["units"]

or:

.. doctest:: python

>>> # WRONG: this gets NcAttribute.value as a character array, not a string
>>> # WRONG: this returns a character array, not a string
>>> unit = dataset.variables["x"].attributes["units"].value

or even (which is at least correct):

.. doctest:: python

>>> unit = dataset.variables["x"].attributes["units"].as_python_value()


Set attribute value
^^^^^^^^^^^^^^^^^^^
Likewise, to **set** a value, you would normally just

.. doctest:: python
@@ -236,9 +230,11 @@ and **not**
>>> dataset.variables["x"].attributes["units"].value = "K"


Note also, that as the ``.avals`` is a dictionary, you can use standard dictionary
methods such as ``update`` and ``get`` to perform other operations in a relatively
natural, Pythonic way.
``.avals`` as a dictionary
^^^^^^^^^^^^^^^^^^^^^^^^^^
Note also that, as ``.avals`` is a dictionary, you can use standard dictionary
methods such as ``pop``, ``update`` and ``get`` to perform other operations in a
relatively natural, Pythonic way.

.. doctest:: python

@@ -247,6 +243,12 @@

>>> dataset.attributes.update({"experiment": "A407", "expt_run": 704})
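
Since ``.avals`` is just a "name: value" dict, the standard dict methods behave as usual; here a plain dict stands in for a real ``.avals`` mapping (illustration only):

```python
# A plain dict standing in for an ``.avals`` attribute mapping.
avals = {"units": "m s-1", "experiment": "A407"}

avals.update({"expt_run": 704})        # add/overwrite several at once
units = avals.get("units", "unknown")  # fetch with a default
removed = avals.pop("experiment")      # remove and return a value

print(units, removed, sorted(avals))   # m s-1 A407 ['expt_run', 'units']
```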

.. note::
The new ``.avals`` property effectively replaces the old
:meth:`~ncdata.NcData.get_attrval` and :meth:`~ncdata.NcData.set_attrval` methods,
which are now deprecated and will eventually be removed.


.. _data-constructors:

Core Object Constructors
52 changes: 36 additions & 16 deletions docs/userdocs/user_guide/howtos.rst
@@ -377,8 +377,8 @@ See: :ref:`copy_notes`

Extract a subsection by indexing
--------------------------------
The nicest way is usually just to use the :meth:`~ncdata.Ncdata.slicer` method to specify
dimensions to index, and then index the result.
The nicest way is usually to use the NcData :meth:`~ncdata.NcData.slicer` method to
specify dimensions to index, and then index the result.

.. testsetup::

@@ -388,22 +388,22 @@
>>> for nn, dim in full_data.dimensions.items():
... full_data.variables.add(NcVariable(nn, dimensions=[nn], data=np.arange(dim.size)))

.. doctest::

>>> for dimname in full_data.dimensions:
... print(dimname, ':', full_data.variables[dimname].data)
x : [0 1 2 3 4 5 6]
y : [0 1 2 3 4 5]

.. doctest::

>>> data_region = full_data.slicer("y", "x")[3, 1::2]

effect:

.. doctest::

>>> for dimname in full_data.dimensions:
... print("(original)", dimname, ':', full_data.variables[dimname].data)
(original) x : [0 1 2 3 4 5 6]
(original) y : [0 1 2 3 4 5]

>>> for dimname in data_region.dimensions:
... print(dimname, ':', data_region.variables[dimname].data)
x : [1 3 5]
... print("(new)", dimname, ':', data_region.variables[dimname].data)
(new) x : [1 3 5]

You can also slice data directly, which simply acts on the dimensions in order:

@@ -413,7 +413,7 @@
>>> data_region_2 == data_region
True

See: :ref:`indexing_overview`
See: :ref:`utils_indexing`


Read data from a NetCDF file
@@ -454,8 +454,8 @@ Use the ``dim_chunks`` argument in the :func:`ncdata.netcdf4.from_nc4` function

>>> from ncdata.netcdf4 import from_nc4
>>> ds = from_nc4(filepath, dim_chunks={"time": 3})
>>> print(ds.variables["time"].data.chunksize)
(3,)
>>> print(ds.variables["time"].data.chunks)
((3, 3, 3, 1),)
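
The chunk shape follows from splitting the dimension length into fixed-size pieces plus a remainder; assuming a ``time`` dimension of length 10 (as the output above implies), the split can be reproduced with a small sketch:

```python
# Sketch: split a dimension of a given length into chunks of a fixed size,
# reproducing the ((3, 3, 3, 1),) chunking shown above for length 10.
def chunk_sizes(length, chunk):
    sizes = []
    while length > 0:
        sizes.append(min(chunk, length))
        length -= chunk
    return tuple(sizes)

print(chunk_sizes(10, 3))  # (3, 3, 3, 1)
```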


Save data to a new file
@@ -531,8 +531,28 @@ Use :func:`ncdata.xarray.to_xarray` and :func:`ncdata.xarray.from_xarray`.
>>> from ncdata.xarray import from_xarray, to_xarray
>>> dataset = xarray.open_dataset(filepath)
>>> ncdata = from_xarray(dataset)
>>>

>>> print(ncdata)
<NcData: <'no-name'>
variables:
<NcVariable(float64): vx()
vx:units = 'm.s-1'
vx:q = 4.2
vx:_FillValue = nan
>
<BLANKLINE>
global attributes:
:experiment = 'A301.7'
>

>>> ds2 = to_xarray(ncdata)
>>> print(ds2)
<xarray.Dataset> Size: 8B
Dimensions: ()
Data variables:
vx float64 8B nan
Attributes:
experiment: A301.7

Note that:

@@ -573,7 +593,7 @@ passed using specific dictionary keywords, e.g.
... iris_load_kwargs={'constraints': 'air_temperature'},
... xr_save_kwargs={'unlimited_dims': ('time',)},
... )
...


Combine data from different input files into one output
-------------------------------------------------------