22 changes: 13 additions & 9 deletions xarray/backends/api.py
@@ -1437,7 +1437,7 @@ def open_mfdataset(
Whether ``xarray.combine_by_coords`` or ``xarray.combine_nested`` is used to
combine all the data. Default is to use ``xarray.combine_by_coords``.
compat : {"identical", "equals", "broadcast_equals", \
"no_conflicts", "override"}, default: "no_conflicts"
"no_conflicts", "override"}, default: "override"
String indicating how to compare variables of the same name for
potential conflicts when merging:

@@ -1449,7 +1449,7 @@ def open_mfdataset(
* "no_conflicts": only values which are not null in both datasets
must be equal. The returned dataset then contains the combination
of all non-null values.
* "override": skip comparing and pick variable from first dataset
* "override" (default): skip comparing and pick variable from first dataset

preprocess : callable, optional
If provided, call this function on each dataset prior to concatenation.
@@ -1463,7 +1463,8 @@ def open_mfdataset(
"netcdf4" over "h5netcdf" over "scipy" (customizable via
``netcdf_engine_order`` in ``xarray.set_options()``). A custom backend
class (a subclass of ``BackendEntrypoint``) can also be used.
data_vars : {"minimal", "different", "all"} or list of str, default: "all"
data_vars : {"minimal", "different", "all", None} or list of str, \
default: None
These data variables will be concatenated together:
* "minimal": Only data variables in which the dimension already
appears are included.
@@ -1473,12 +1474,15 @@ class (a subclass of ``BackendEntrypoint``) can also be used.
load the data payload of data variables into memory if they are not
already loaded.
* "all": All data variables will be concatenated.
* None (default): Means ``"all"`` if ``concat_dim`` is not present
in any of the ``objs``, and ``"minimal"`` if ``concat_dim`` is
present in any of ``objs``.
* list of str: The listed data variables will be concatenated, in
addition to the "minimal" data variables.
coords : {"minimal", "different", "all"} or list of str, optional
coords : {"minimal", "different", "all"} or list of str, default: "minimal"
These coordinate variables will be concatenated together:
* "minimal": Only coordinates in which the dimension already appears
are included.
* "minimal" (default): Only coordinates in which the dimension already
appears are included.
* "different": Coordinates which are not equal (ignoring attributes)
across all datasets are also concatenated (as well as all for which
dimension already appears). Beware: this option may load the data
@@ -1491,16 +1495,16 @@ class (a subclass of ``BackendEntrypoint``) can also be used.
parallel : bool, default: False
If True, the open and preprocess steps of this function will be
performed in parallel using ``dask.delayed``. Default is False.
join : {"outer", "inner", "left", "right", "exact", "override"}, default: "outer"
join : {"outer", "inner", "left", "right", "exact", "override"}, default: "exact"
String indicating how to combine differing indexes
(excluding concat_dim) in objects

- "outer": use the union of object indexes
- "inner": use the intersection of object indexes
- "left": use indexes from the first object with each dimension
- "right": use indexes from the last object with each dimension
- "exact": instead of aligning, raise `ValueError` when indexes to be
aligned are not equal
- "exact" (default): instead of aligning, raise `ValueError` when
indexes to be aligned are not equal
- "override": if indexes are of same size, rewrite indexes to be
those of the first object with that dimension. Indexes for the same
dimension must have the same size in all objects.
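The `open_mfdataset` changes above switch the documented defaults to `compat="override"`, `data_vars=None`, `coords="minimal"`, and `join="exact"`. A minimal sketch of what those defaults do in practice, assuming an xarray build carrying this change plus dask and a netCDF backend (e.g. netcdf4 or scipy) installed; the file names, variable names, and values are invented for illustration:

```python
import os
import tempfile

import numpy as np
import xarray as xr

# Write two small files that share a scalar "station" coordinate and
# differ only along "time" (hypothetical example data).
tmpdir = tempfile.mkdtemp()
paths = []
for i, times in enumerate([[0, 1], [2, 3]]):
    ds = xr.Dataset(
        {"air": ("time", np.random.rand(2))},
        coords={"time": times, "station": 42},
    )
    path = os.path.join(tmpdir, f"part{i}.nc")
    ds.to_netcdf(path)
    paths.append(path)

# Under the new defaults: data_vars=None resolves to "minimal" because
# "time" already exists in the inputs, coords="minimal" leaves the scalar
# "station" coordinate un-concatenated, compat="override" takes it from
# the first file without comparing, and join="exact" raises instead of
# outer-joining any non-concatenated indexes.
combined = xr.open_mfdataset(paths, combine="nested", concat_dim="time")
print(combined)
```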
8 changes: 4 additions & 4 deletions xarray/core/dataset.py
@@ -5659,7 +5659,7 @@ def merge(
If provided, update variables of these name(s) without checking for
conflicts in this dataset.
compat : {"identical", "equals", "broadcast_equals", \
"no_conflicts", "override", "minimal"}, default: "no_conflicts"
"no_conflicts", "override", "minimal"}, default: "override"
String indicating how to compare variables of the same name for
potential conflicts:

@@ -5671,18 +5671,18 @@ def merge(
- 'no_conflicts': only values which are not null in both datasets
must be equal. The returned dataset then contains the combination
of all non-null values.
- 'override': skip comparing and pick variable from first dataset
- 'override' (default): skip comparing and pick variable from first dataset
- 'minimal': drop conflicting coordinates

join : {"outer", "inner", "left", "right", "exact", "override"}, \
default: "outer"
default: "exact"
Method for joining ``self`` and ``other`` along shared dimensions:

- 'outer': use the union of the indexes
- 'inner': use the intersection of the indexes
- 'left': use indexes from ``self``
- 'right': use indexes from ``other``
- 'exact': error instead of aligning non-equal indexes
- 'exact' (default): error instead of aligning non-equal indexes
- 'override': use indexes from ``self`` that are the same size
as those of ``other`` in that dimension

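A short sketch of how the new `Dataset.merge` defaults documented above (`compat="override"`, `join="exact"`) would behave; the datasets and variable names are made up for illustration:

```python
import xarray as xr

a = xr.Dataset({"temp": ("x", [1.0, 2.0])}, coords={"x": [10, 20]})
b = xr.Dataset(
    {"temp": ("x", [9.0, 9.0]), "precip": ("x", [0.1, 0.2])},
    coords={"x": [10, 20]},
)

# join="exact": the shared "x" index must already be equal, otherwise a
# ValueError is raised instead of silently aligning.
# compat="override": the conflicting "temp" values are not compared; the
# variable from the calling dataset `a` is kept.
merged = a.merge(b)
print(merged["temp"].values)  # [1. 2.] -- taken from `a`
```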
4 changes: 2 additions & 2 deletions xarray/core/options.py
@@ -93,7 +93,7 @@ class T_Options(TypedDict):
"warn_for_unclosed_files": False,
"use_bottleneck": True,
"use_flox": True,
"use_new_combine_kwarg_defaults": False,
"use_new_combine_kwarg_defaults": True,
"use_numbagg": True,
"use_opt_einsum": True,
}
@@ -271,7 +271,7 @@ class set_options:
use_flox : bool, default: True
Whether to use ``numpy_groupies`` and `flox`` to
accelerate groupby and resampling reductions.
use_new_combine_kwarg_defaults : bool, default False
use_new_combine_kwarg_defaults : bool, default True
Whether to use new kwarg default values for combine functions:
:py:func:`~xarray.concat`, :py:func:`~xarray.merge`,
:py:func:`~xarray.open_mfdataset`. New values are:
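Since flipping `use_new_combine_kwarg_defaults` to `True` changes behaviour globally, here is a hedged sketch of opting legacy code back into the old defaults, assuming the option works as the `set_options` docstring above describes:

```python
import xarray as xr

a = xr.Dataset(coords={"x": [0, 1]})
b = xr.Dataset(coords={"x": [1, 2]})

# With the new defaults, merge uses join="exact" and would raise here
# because the "x" indexes differ. Inside this block the old defaults
# (e.g. join="outer", compat="no_conflicts") apply, so the indexes are
# unioned instead.
with xr.set_options(use_new_combine_kwarg_defaults=False):
    merged = xr.merge([a, b])
print(merged.x.values)  # [0 1 2]
```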
58 changes: 36 additions & 22 deletions xarray/structure/combine.py
@@ -509,7 +509,7 @@ def combine_nested(
Must be the same length as the depth of the list passed to
``datasets``.
compat : {"identical", "equals", "broadcast_equals", \
"no_conflicts", "override"}, optional
"no_conflicts", "override"}, default: "override"
String indicating how to compare variables of the same name for
potential merge conflicts:

@@ -521,8 +521,8 @@ def combine_nested(
- "no_conflicts": only values which are not null in both datasets
must be equal. The returned dataset then contains the combination
of all non-null values.
- "override": skip comparing and pick variable from first dataset
data_vars : {"minimal", "different", "all" or list of str}, optional
- "override" (default): skip comparing and pick variable from first dataset
data_vars : {"minimal", "different", "all", None} or list of str, default: None
These data variables will be concatenated together:
* "minimal": Only data variables in which the dimension already
appears are included.
@@ -532,15 +532,16 @@ def combine_nested(
load the data payload of data variables into memory if they are not
already loaded.
* "all": All data variables will be concatenated.
* None: Means ``"all"`` if ``dim`` is not present in any of the ``objs``,
and ``"minimal"`` if ``dim`` is present in any of ``objs``.
* None (default): Means ``"all"`` if ``concat_dim`` is not present in
any of the ``objs``, and ``"minimal"`` if ``concat_dim`` is present
in any of ``objs``.
* list of dims: The listed data variables will be concatenated, in
addition to the "minimal" data variables.

coords : {"minimal", "different", "all" or list of str}, optional
coords : {"minimal", "different", "all"} or list of str, default: "minimal"
These coordinate variables will be concatenated together:
* "minimal": Only coordinates in which the dimension already appears
are included. If concatenating over a dimension _not_
* "minimal" (default): Only coordinates in which the dimension already
appears are included. If concatenating over a dimension _not_
present in any of the objects, then all data variables will
be concatenated along that new dimension.
* "different": Coordinates which are not equal (ignoring attributes)
@@ -557,16 +558,16 @@ def combine_nested(
Value to use for newly missing values. If a dict-like, maps
variable names to fill values. Use a data array's name to
refer to its values.
join : {"outer", "inner", "left", "right", "exact"}, optional
join : {"outer", "inner", "left", "right", "exact"}, default: "exact"
String indicating how to combine differing indexes
(excluding concat_dim) in objects

- "outer": use the union of object indexes
- "inner": use the intersection of object indexes
- "left": use indexes from the first object with each dimension
- "right": use indexes from the last object with each dimension
- "exact": instead of aligning, raise `ValueError` when indexes to be
aligned are not equal
- "exact" (default): instead of aligning, raise `ValueError` when
indexes to be aligned are not equal
- "override": if indexes are of same size, rewrite indexes to be
those of the first object with that dimension. Indexes for the same
dimension must have the same size in all objects.
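A small sketch of the `data_vars=None` default documented above for `combine_nested`: it acts like `"all"` when `concat_dim` is new and like `"minimal"` when the dimension already exists in the inputs. The example datasets are invented:

```python
import xarray as xr

d0 = xr.Dataset({"a": ("x", [1, 2]), "scalar": 0.5})
d1 = xr.Dataset({"a": ("x", [3, 4]), "scalar": 0.5})

# "x" already exists, so data_vars=None behaves like "minimal": only "a"
# is concatenated, while "scalar" is merged and (under compat="override")
# simply taken from the first dataset.
along_x = xr.combine_nested([d0, d1], concat_dim="x")
print(along_x["scalar"].dims)  # ()

# "run" is a brand-new dimension, so data_vars=None behaves like "all":
# every data variable, including "scalar", gains the "run" dimension.
along_run = xr.combine_nested([d0, d1], concat_dim="run")
print(along_run["scalar"].dims)  # ('run',)
```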
@@ -836,7 +837,8 @@ def combine_by_coords(
data_objects : Iterable of Datasets or DataArrays
Data objects to combine.

compat : {"identical", "equals", "broadcast_equals", "no_conflicts", "override"}, optional
compat : {"identical", "equals", "broadcast_equals", "no_conflicts", "override"}, \
default: "override"
String indicating how to compare variables of the same name for
potential conflicts:

@@ -848,11 +850,10 @@ def combine_by_coords(
- "no_conflicts": only values which are not null in both datasets
must be equal. The returned dataset then contains the combination
of all non-null values.
- "override": skip comparing and pick variable from first dataset
- "override" (default): skip comparing and pick variable from first dataset

data_vars : {"minimal", "different", "all" or list of str}, optional
data_vars : {"minimal", "different", "all", None} or list of str, default: None
These data variables will be concatenated together:

- "minimal": Only data variables in which the dimension already
appears are included.
- "different": Data variables which are not equal (ignoring
@@ -861,26 +862,39 @@ def combine_by_coords(
load the data payload of data variables into memory if they are not
already loaded.
- "all": All data variables will be concatenated.
- None (default): Means ``"all"`` if ``concat_dim`` is not present in any of the
``objs``, and ``"minimal"`` if ``concat_dim`` is present in any of ``objs``.
- list of str: The listed data variables will be concatenated, in
addition to the "minimal" data variables.

If objects are DataArrays, `data_vars` must be "all".
coords : {"minimal", "different", "all"} or list of str, optional
As per the "data_vars" kwarg, but for coordinate variables.
coords : {"minimal", "different", "all"} or list of str, default: "minimal"
These coordinate variables will be concatenated together:
- "minimal" (default): Only coordinates in which the dimension already
appears are included. If concatenating over a dimension _not_
present in any of the objects, then all data variables will
be concatenated along that new dimension.
- "different": Coordinates which are not equal (ignoring attributes)
across all datasets are also concatenated (as well as all for which
dimension already appears). Beware: this option may load the data
payload of coordinate variables into memory if they are not already
loaded.
- "all": All coordinate variables will be concatenated, except
those corresponding to other dimensions.
- list of Hashable: The listed coordinate variables will be concatenated,
in addition to the "minimal" coordinates.
fill_value : scalar or dict-like, optional
Value to use for newly missing values. If a dict-like, maps
variable names to fill values. Use a data array's name to
refer to its values. If None, raises a ValueError if
the passed Datasets do not create a complete hypercube.
join : {"outer", "inner", "left", "right", "exact"}, optional
join : {"outer", "inner", "left", "right", "exact"}, default: "exact"
String indicating how to combine differing indexes in objects

- "outer": use the union of object indexes
- "inner": use the intersection of object indexes
- "left": use indexes from the first object with each dimension
- "right": use indexes from the last object with each dimension
- "exact": instead of aligning, raise `ValueError` when indexes to be
aligned are not equal
- "exact" (default): instead of aligning, raise `ValueError` when
indexes to be aligned are not equal
- "override": if indexes are of same size, rewrite indexes to be
those of the first object with that dimension. Indexes for the same
dimension must have the same size in all objects.
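A minimal sketch of `combine_by_coords` under the new defaults described above; the datasets are invented and intentionally passed out of order to show that ordering still comes from the coordinate values:

```python
import xarray as xr

part1 = xr.Dataset({"q": ("t", [1.0, 2.0]), "offset": 2.5}, coords={"t": [0, 1]})
part2 = xr.Dataset({"q": ("t", [3.0, 4.0]), "offset": 2.5}, coords={"t": [2, 3]})

# Ordering along "t" is inferred from the coordinate values. With
# data_vars=None ("minimal" here, since "t" already exists) only "q" is
# concatenated, and compat="override" takes the dimensionless "offset"
# from the first dataset instead of comparing it across inputs.
combined = xr.combine_by_coords([part2, part1])
print(combined.t.values)  # [0 1 2 3]
```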
23 changes: 12 additions & 11 deletions xarray/structure/concat.py
@@ -114,7 +114,7 @@ def concat(
unchanged. If dimension is provided as a Variable, DataArray or Index, its name
is used as the dimension to concatenate along and the values are added
as a coordinate.
data_vars : {"minimal", "different", "all", None} or list of Hashable, optional
data_vars : {"minimal", "different", "all", None} or list of Hashable, default: None
These data variables will be concatenated together:
* "minimal": Only data variables in which the dimension already
appears are included.
@@ -124,15 +124,15 @@ def concat(
load the data payload of data variables into memory if they are not
already loaded.
* "all": All data variables will be concatenated.
* None: Means ``"all"`` if ``dim`` is not present in any of the ``objs``,
and ``"minimal"`` if ``dim`` is present in any of ``objs``.
* None (default): Means ``"all"`` if ``dim`` is not present in any of the
``objs``, and ``"minimal"`` if ``dim`` is present in any of ``objs``.
* list of dims: The listed data variables will be concatenated, in
addition to the "minimal" data variables.

If objects are DataArrays, data_vars must be "all".
coords : {"minimal", "different", "all"} or list of Hashable, optional
If objects are DataArrays, data_vars must be "all" or None.
coords : {"minimal", "different", "all"} or list of Hashable, default: "minimal"
These coordinate variables will be concatenated together:
* "minimal": Only coordinates in which the dimension already appears
* "minimal" (default): Only coordinates in which the dimension already appears
are included.
* "different": Coordinates which are not equal (ignoring attributes)
across all datasets are also concatenated (as well as all for which
@@ -143,7 +143,8 @@ def concat(
those corresponding to other dimensions.
* list of Hashable: The listed coordinate variables will be concatenated,
in addition to the "minimal" coordinates.
compat : {"identical", "equals", "broadcast_equals", "no_conflicts", "override"}, optional
compat : {"identical", "equals", "broadcast_equals", "no_conflicts", "override"}, \
default: "override"
String indicating how to compare non-concatenated variables of the same name for
potential conflicts. This is passed down to merge.

@@ -155,7 +156,7 @@ def concat(
- "no_conflicts": only values which are not null in both datasets
must be equal. The returned dataset then contains the combination
of all non-null values.
- "override": skip comparing and pick variable from first dataset
- "override" (default): skip comparing and pick variable from first dataset
positions : None or list of integer arrays, optional
List of integer arrays which specifies the integer positions to which
to assign each dataset along the concatenated dimension. If not
@@ -164,16 +165,16 @@ def concat(
Value to use for newly missing values. If a dict-like, maps
variable names to fill values. Use a data array's name to
refer to its values.
join : {"outer", "inner", "left", "right", "exact"}, optional
join : {"outer", "inner", "left", "right", "exact"}, default: "exact"
String indicating how to combine differing indexes
(excluding dim) in objects

- "outer": use the union of object indexes
- "inner": use the intersection of object indexes
- "left": use indexes from the first object with each dimension
- "right": use indexes from the last object with each dimension
- "exact": instead of aligning, raise `ValueError` when indexes to be
aligned are not equal
- "exact" (default): instead of aligning, raise `ValueError` when indexes
to be aligned are not equal
- "override": if indexes are of same size, rewrite indexes to be
those of the first object with that dimension. Indexes for the same
dimension must have the same size in all objects.
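Finally, a sketch of `xr.concat` with the new defaults documented above (`data_vars=None`, `coords="minimal"`, `compat="override"`, `join="exact"`); the variable names are made up:

```python
import xarray as xr

a = xr.Dataset({"v": ("y", [1, 2]), "const": 10}, coords={"y": [0, 1]})
b = xr.Dataset({"v": ("y", [3, 4]), "const": 99}, coords={"y": [2, 3]})

# "y" already exists in both inputs, so data_vars=None acts like
# "minimal": "v" is concatenated along "y", while "const" (which lacks
# the "y" dimension) is not compared (compat="override") and is taken
# from the first dataset.
out = xr.concat([a, b], dim="y")
print(out["v"].values)    # [1 2 3 4]
print(int(out["const"]))  # 10
```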