
Commit 86ab1ea

Multiple docs improvements: fix warnings + increase cross-referencing. (#160)
* Generalise testing + extend to Slicer tests.
* Multiple docs improvements: fix warnings + increase cross-referencing.
* Improve docs-build account in developer notes.
* Small fixes.
* Further fixes and tweaks.
* More improvements.
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
1 parent d24e7b9 commit 86ab1ea

File tree

12 files changed: +247 -94 lines changed

docs/changelog_fragments/68.feat.rst

Lines changed: 1 addition & 1 deletion

@@ -3,4 +3,4 @@ The :class:`ncdata.NcData` objects can be indexed with the ``[]`` operation, or
 specifed dimensions with the :meth:`~ncdata.NcData.slicer` method.
 This is based on the new :meth:`~ncdata.utils.index_by_dimensions()` utility method
 and :class:`~ncdata.utils.Slicer` class.
-See: :ref:`indexing_overview`
+See: :ref:`utils_indexing`

docs/conf.py

Lines changed: 7 additions & 1 deletion
@@ -93,7 +93,13 @@
 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
 # This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
+exclude_patterns = [
+    "_build",
+    "Thumbs.db",
+    ".DS_Store",
+    "changelog_fragments",
+    "details/api/modules.rst",
+]


 # -- Options for HTML output -------------------------------------------------
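For reference, Sphinx ``exclude_patterns`` entries are glob-style paths relative to the source directory, and a bare directory name also excludes everything beneath it. A rough stdlib sketch of that matching follows — ``is_excluded`` is an illustrative approximation, not Sphinx's actual matcher:

```python
from fnmatch import fnmatch

# The patterns set in this commit (glob-style, relative to the docs source dir).
exclude_patterns = [
    "_build",
    "Thumbs.db",
    ".DS_Store",
    "changelog_fragments",
    "details/api/modules.rst",
]

def is_excluded(relpath: str) -> bool:
    """Approximate Sphinx's exclusion check: a path is skipped if it matches
    a pattern directly, or lives under a directory that matches one."""
    parts = relpath.split("/")
    for pattern in exclude_patterns:
        if fnmatch(relpath, pattern):
            return True
        # Treat a matched directory name as excluding everything beneath it.
        if any(fnmatch(part, pattern) for part in parts[:-1]):
            return True
    return False

print(is_excluded("details/api/modules.rst"))          # True
print(is_excluded("changelog_fragments/68.feat.rst"))  # True
print(is_excluded("details/api/ncdata.rst"))           # False
```

This shows why the two new entries silence the build warnings: the fragment files and the generated ``modules.rst`` are no longer treated as orphan documents.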

docs/details/developer_notes.rst

Lines changed: 27 additions & 12 deletions
@@ -11,12 +11,21 @@ A new change-note fragment file should be included in each PR, but is normally c
 with a ``towncrier`` command-line command:

 * shortly, with ``towncrier create --content "mynotes..." <ISSUE-num>.<category>.rst``
-* ... or for longer forms, use ``towncrier create --edit``.
-* Here, "<category>" is one of feat/doc/bug/dev/misc. Which are: user features;
-  bug fixes; documentation changes; general developer-relevant changes;
-  or "miscellaneous".
+
+  ... or, for longer content, use ``towncrier create --edit``.
+
+* Here, "<category>" is one of:
+
+  * "feat": user features
+  * "doc": documentation changes
+  * "bug": bug fixes
+  * "dev": general developer-relevant changes
+  * "misc": miscellaneous
+
 (For reference, these categories are configured in ``pyproject.toml``).
+
 * the fragment files are stored in ``docs/changelog_fragments``.
+
 * N.B. for this to work well, every change should be identified with a matching github issue.
   If there are multiple associated PRs, they should all be linked to the issue.

@@ -26,33 +35,36 @@ Documentation build

 For a full docs-build:

-* a simple ``$ make html`` will do for now
+* The most useful way is simply ``$ cd docs`` and ``$ make html-keeplog``.
+* Note: the plainer ``$ make html`` is the same, but "-keeplog", in addition, preserves the
+  changelog fragments **and** reverts the change_log.rst after the html build:
+  This stops you accidentally including a "built" changelog when making further commits.
 * The ``docs/Makefile`` wipes the API docs and invokes sphinx-apidoc for a full rebuild
 * It also calls towncrier to clear out the changelog fragments + update ``docs/change_log.rst``.
-  This should be reverted before pushing your PR -- i.e. leave changenotes in the fragments.
-* the results is then available at ``docs/_build/html/index.html``.
+* ( *assuming "-keeplog"*: fragments and change_notes.rst are then reverted, undoing the towncrier build ).
+* the result is then available at ``docs/_build/html/index.html``.

 .. note::

-   * the above is just for *local testing*, if required.
+   * the above is just for **local testing**, when required.
    * For PRs (and releases), we also provide *automatic* builds on GitHub,
-     via `ReadTheDocs <https://readthedocs.org/projects/ncdata/>`_
+     via ReadTheDocs_.


 Release actions
 ---------------

 #. Update the :ref:`change-log page <change_log>` in the details section

-#. ensure all major changes + PRs are referenced in the :ref:`change_notes` section.
+#. start with ``$ towncrier build``

-   * The starting point for this is now just : ``$ towncrier build``.
+#. ensure all major changes + PRs are referenced in the :ref:`change_notes` section.

 #. update the "latest version" stated in the :ref:`development_status` section

 #. Cut a release on GitHub

-   * this triggers a new docs version on `ReadTheDocs <https://readthedocs.org/projects/ncdata>`_.
+   * this triggers a new docs version on ReadTheDocs_.

 #. Build the distribution

@@ -109,3 +121,6 @@ Release actions

 * wait a few hours..
 * check that the new version appears in the output of ``$ conda search ncdata``
+
+
+.. _ReadTheDocs: https://readthedocs.org/projects/ncdata
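The fragment-naming convention described above (``<ISSUE-num>.<category>.rst``, with the categories listed in the notes) can be sketched as a small validator. ``check_fragment_name`` is a hypothetical helper for illustration only — it is not part of ncdata or towncrier:

```python
from pathlib import PurePath

# Categories as described in the developer notes (configured in pyproject.toml).
VALID_CATEGORIES = {"feat", "doc", "bug", "dev", "misc"}

def check_fragment_name(filename: str) -> bool:
    """Return True if a changelog fragment follows <ISSUE-num>.<category>.rst."""
    parts = PurePath(filename).name.split(".")
    if len(parts) != 3:
        return False
    issue, category, suffix = parts
    return issue.isdigit() and category in VALID_CATEGORIES and suffix == "rst"

print(check_fragment_name("68.feat.rst"))   # True
print(check_fragment_name("160.doc.rst"))   # True
print(check_fragment_name("notes.txt"))     # False
```

Linking each fragment name to a GitHub issue number is what lets towncrier cross-reference the issue when it assembles ``change_log.rst``.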

docs/userdocs/user_guide/common_operations.rst

Lines changed: 45 additions & 5 deletions
@@ -55,6 +55,19 @@ Example :

 >>> dataset.variables["x"].avals["units"] = "m s-1"

+
+There is also an :meth:`~ncdata.NameMap.addall` method, which adds multiple content
+objects in one operation.
+
+.. doctest:: python
+
+    >>> vars = [NcVariable(name) for name in ("a", "b", "c")]
+    >>> dataset.variables.addall(vars)
+    >>> list(dataset.variables)
+    ['x', 'a', 'b', 'c']
+
+.. _operations_rename:
+
 Rename
 ------
 A component can be renamed with the :meth:`~ncdata.NameMap.rename` method. This changes

@@ -67,6 +80,18 @@ Example :

 >>> dataset.variables.rename("x", "y")

+result:
+
+.. doctest:: python
+
+    >>> print(dataset.variables.get("x"))
+    None
+    >>> print(dataset.variables.get("y"))
+    <NcVariable(<no-dtype>): y()
+        y:units = 'm s-1'
+    >
+
+
 .. warning::
     Renaming a dimension will not rename references to it (i.e. in variables), which
     obviously may cause problems.

@@ -123,14 +148,29 @@ Equality Testing
 ----------------
 We implement equality operations ``==`` / ``!=`` for all the core data objects.

-However, simple equality testing on :class:`@ncdata.NcData` and :class:`@ncdata.NcVariable`
-objects can be very costly if it requires comparing large data arrays.
+.. doctest::
+
+    >>> vA = dataset.variables["a"]
+    >>> vB = dataset.variables["b"]
+    >>> vA == vB
+    False
+
+.. doctest::
+
+    >>> dataset == dataset.copy()
+    True
+
+.. warning::
+    Equality testing for :class:`~ncdata.NcData` and :class:`~ncdata.NcVariable` actually
+    calls the :func:`ncdata.utils.dataset_differences` and
+    :func:`ncdata.utils.variable_differences` utilities.
+
+    This can be very costly if it needs to compare large data arrays.

 If you need to avoid comparing large (and possibly lazy) arrays then you can use the
 :func:`ncdata.utils.dataset_differences` and
-:func:`ncdata.utils.variable_differences` utility functions.
-These functions also provide multiple options to enable more tolerant comparison,
-such as allowing variables to have a different ordering.
+:func:`ncdata.utils.variable_differences` utility functions directly instead.
+These provide a ``check_var_data=False`` option, to ignore differences in data content.

 See: :ref:`utils_equality`
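The ``addall`` and ``rename`` semantics shown in the new doctests can be mimicked with a minimal dict-based sketch. ``MiniNameMap`` and ``Var`` here are toy illustration classes, not ncdata's actual ``NameMap``:

```python
class MiniNameMap(dict):
    """Toy stand-in for an ncdata-style name map: keys track item names."""

    def add(self, item):
        self[item.name] = item

    def addall(self, items):
        # Add multiple content objects in one operation.
        for item in items:
            self.add(item)

    def rename(self, old_name, new_name):
        # Re-key the map AND update the item's own .name, keeping them in sync.
        item = self.pop(old_name)
        item.name = new_name
        self[new_name] = item


class Var:
    """Minimal named content object."""
    def __init__(self, name):
        self.name = name


vars_map = MiniNameMap()
vars_map.addall([Var(n) for n in ("a", "b", "c")])
print(list(vars_map))       # ['a', 'b', 'c']

vars_map.rename("a", "x")
print(vars_map.get("a"))    # None
print(vars_map["x"].name)   # prints: x
```

The key design point mirrored here is that renaming must update both the map key and the object's own name attribute, which is why the docs recommend ``rename`` over re-keying by hand.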

docs/userdocs/user_guide/data_objects.rst

Lines changed: 18 additions & 16 deletions
@@ -186,7 +186,9 @@ However, for most operations on attributes, it is much easier to use the ``.aval
 property instead. This accesses *the same attributes*, but in the form of a simple
 "name: value" dictionary.

-Thus for example, to fetch an attribute you would usually write just :
+Get attribute value
+^^^^^^^^^^^^^^^^^^^
+For example, to fetch an attribute you would usually write just :

 .. testsetup::

@@ -205,23 +207,15 @@ and **not** :

 .. doctest:: python

-    >>> # WRONG: this reads an NcAttribute, not its value
+    >>> # WRONG: this gets the NcAttribute object, not its value
     >>> unit = dataset.variables["x"].attributes["units"]

-or:
-
-.. doctest:: python
-
-    >>> # WRONG: this gets NcAttribute.value as a character array, not a string
+    >>> # WRONG: this returns a character array, not a string
     >>> unit = dataset.variables["x"].attributes["units"].value

-or even (which is at least correct):
-
-.. doctest:: python
-
-    >>> unit = dataset.variables["x"].attributes["units"].as_python_value()
-

+Set attribute value
+^^^^^^^^^^^^^^^^^^^
 Likewise, to **set** a value, you would normally just

 .. doctest:: python

@@ -236,9 +230,11 @@ and **not**
     >>> dataset.variables["x"].attributes["units"].value = "K"


-Note also, that as the ``.avals`` is a dictionary, you can use standard dictionary
-methods such as ``update`` and ``get`` to perform other operations in a relatively
-natural, Pythonic way.
+``.avals`` as a dictionary
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+Note also, that as ``.avals`` is a dictionary, you can use standard dictionary
+methods such as ``pop``, ``update`` and ``get`` to perform other operations in a
+relatively natural, Pythonic way.

 .. doctest:: python

@@ -247,6 +243,12 @@ natural, Pythonic way.

     >>> dataset.attributes.update({"experiment": "A407", "expt_run": 704})

+.. note::
+    The new ``.avals`` property effectively replaces the old
+    :meth:`~ncdata.NcData.get_attrval` and :meth:`~ncdata.NcData.set_attrval` methods,
+    which are now deprecated and will eventually be removed.
+
+
 .. _data-constructors:

 Core Object Constructors
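The ``.avals`` behaviour documented above — attribute *values* exposed as a plain "name: value" dictionary that writes through to the underlying attribute objects — can be sketched roughly. ``NcAttributeSketch`` and ``AttrVals`` are hypothetical illustration classes, not ncdata's actual implementation:

```python
class NcAttributeSketch:
    """Toy stand-in for an attribute object: a name plus a raw value."""

    def __init__(self, name, value):
        self.name = name
        self.value = value


class AttrVals(dict):
    """A plain "name: value" dict view over a dict of attribute objects."""

    def __init__(self, attributes):
        # Populate the view with each attribute's *value*, not the object.
        super().__init__({name: attr.value for name, attr in attributes.items()})
        self._attributes = attributes

    def __setitem__(self, name, value):
        # Writing through the view creates/updates the underlying attribute.
        self._attributes[name] = NcAttributeSketch(name, value)
        super().__setitem__(name, value)


attributes = {"units": NcAttributeSketch("units", "K")}
avals = AttrVals(attributes)
print(avals["units"])             # prints: K   (the value, not the object)

avals["units"] = "m s-1"          # assignment writes back to the attribute
print(attributes["units"].value)  # prints: m s-1
```

This illustrates why ``dataset.variables["x"].avals["units"]`` is the recommended spelling: indexing ``attributes`` directly yields the attribute object, while the ``.avals`` view deals only in values.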

docs/userdocs/user_guide/howtos.rst

Lines changed: 36 additions & 16 deletions
@@ -377,8 +377,8 @@ See: :ref:`copy_notes`

 Extract a subsection by indexing
 --------------------------------
-The nicest way is usually just to use the :meth:`~ncdata.NcData.slicer` method to specify
-dimensions to index, and then index the result.
+The nicest way is usually to use the NcData :meth:`~ncdata.NcData.slicer` method to
+specify dimensions to index, and then index the result.

 .. testsetup::

@@ -388,22 +388,22 @@ dimensions to index, and then index the result.
    >>> for nn, dim in full_data.dimensions.items():
    ...     full_data.variables.add(NcVariable(nn, dimensions=[nn], data=np.arange(dim.size)))

-.. doctest::
-
-    >>> for dimname in full_data.dimensions:
-    ...     print(dimname, ':', full_data.variables[dimname].data)
-    x : [0 1 2 3 4 5 6]
-    y : [0 1 2 3 4 5]
-
 .. doctest::

     >>> data_region = full_data.slicer("y", "x")[3, 1::2]

+effect:
+
 .. doctest::

+    >>> for dimname in full_data.dimensions:
+    ...     print("(original)", dimname, ':', full_data.variables[dimname].data)
+    (original) x : [0 1 2 3 4 5 6]
+    (original) y : [0 1 2 3 4 5]
+
     >>> for dimname in data_region.dimensions:
-    ...     print(dimname, ':', data_region.variables[dimname].data)
-    x : [1 3 5]
+    ...     print("(new)", dimname, ':', data_region.variables[dimname].data)
+    (new) x : [1 3 5]

 You can also slice data directly, which simply acts on the dimensions in order:

@@ -413,7 +413,7 @@ You can also slice data directly, which simply acts on the dimensions in order:
    >>> data_region_2 == data_region
    True

-See: :ref:`indexing_overview`
+See: :ref:`utils_indexing`


 Read data from a NetCDF file

@@ -454,8 +454,8 @@ Use the ``dim_chunks`` argument in the :func:`ncdata.netcdf4.from_nc4` function

    >>> from ncdata.netcdf4 import from_nc4
    >>> ds = from_nc4(filepath, dim_chunks={"time": 3})
-   >>> print(ds.variables["time"].data.chunksize)
-   (3,)
+   >>> print(ds.variables["time"].data.chunks)
+   ((3, 3, 3, 1),)


 Save data to a new file

@@ -531,8 +531,28 @@ Use :func:`ncdata.xarray.to_xarray` and :func:`ncdata.xarray.from_xarray`.
    >>> from ncdata.xarray import from_xarray, to_xarray
    >>> dataset = xarray.open_dataset(filepath)
    >>> ncdata = from_xarray(dataset)
-   >>>
+
+   >>> print(ncdata)
+   <NcData: <'no-name'>
+       variables:
+           <NcVariable(float64): vx()
+               vx:units = 'm.s-1'
+               vx:q = 4.2
+               vx:_FillValue = nan
+           >
+   <BLANKLINE>
+       global attributes:
+           :experiment = 'A301.7'
+   >
+
    >>> ds2 = to_xarray(ncdata)
+   >>> print(ds2)
+   <xarray.Dataset> Size: 8B
+   Dimensions:  ()
+   Data variables:
+       vx       float64 8B nan
+   Attributes:
+       experiment:  A301.7

 Note that:

@@ -573,7 +593,7 @@ passed using specific dictionary keywords, e.g.
    ...     iris_load_kwargs={'constraints': 'air_temperature'},
    ...     xr_save_kwargs={'unlimited_dims': ('time',)},
    ...     )
-   ...
+

 Combine data from different input files into one output
 -------------------------------------------------------
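The corrected doctest output ``((3, 3, 3, 1),)`` above is a dask-style chunks tuple: as the output implies, a 10-element ``time`` dimension split into chunks of 3. How a single chunk size maps to that tuple can be sketched in plain Python — ``chunk_shape`` is a hypothetical helper for illustration; real chunking is handled by dask:

```python
def chunk_shape(dim_size: int, chunk_size: int) -> tuple:
    """Split a dimension of `dim_size` into dask-style chunk lengths."""
    full, remainder = divmod(dim_size, chunk_size)
    chunks = (chunk_size,) * full
    if remainder:
        chunks += (remainder,)  # a short final chunk holds the leftover
    return chunks

# dim_chunks={"time": 3} on a 10-element time dimension, one tuple per dim:
print((chunk_shape(10, 3),))   # ((3, 3, 3, 1),)
```

This also shows why the old doctest was wrong: ``.chunks`` reports *all* chunk lengths per dimension (including the short final chunk), not a single uniform ``chunksize``.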
