@@ -43,7 +43,6 @@ new dimension by stacking lower dimensional arrays together:
 
 .. ipython:: python
 
-    da.sel(x="a")
     xr.concat([da.isel(x=0), da.isel(x=1)], "x")
 
 If the second argument to ``concat`` is a new dimension name, the arrays will
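As a self-contained sketch of concatenating along an existing dimension, the snippet below builds a small stand-in for the ``da`` defined earlier on this docs page (the array's shape and coordinates here are our assumption, not the page's exact values):

```python
import numpy as np
import xarray as xr

# Stand-in for the ``da`` defined earlier on the docs page.
da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    dims=["x", "y"],
    coords={"x": ["a", "b"], "y": [10, 20, 30]},
)

# Concatenating the slices back along the existing "x" dimension
# reassembles the original array.
roundtrip = xr.concat([da.isel(x=0), da.isel(x=1)], "x")
```

Because ``"x"`` already exists on the inputs, ``concat`` stitches them back together rather than creating a new dimension.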
@@ -52,15 +51,18 @@ dimension:
 
 .. ipython:: python
 
-    xr.concat([da.isel(x=0), da.isel(x=1)], "new_dim")
+    da0 = da.isel(x=0).drop_vars("x")
+    da1 = da.isel(x=1).drop_vars("x")
+
+    xr.concat([da0, da1], "new_dim")
 
 The second argument to ``concat`` can also be an :py:class:`~pandas.Index` or
 :py:class:`~xarray.DataArray` object as well as a string, in which case it is
 used to label the values along the new dimension:
 
 .. ipython:: python
 
-    xr.concat([da.isel(x=0), da.isel(x=1)], pd.Index([-90, -100], name="new_dim"))
+    xr.concat([da0, da1], pd.Index([-90, -100], name="new_dim"))
 
 Of course, ``concat`` also works on ``Dataset`` objects:
 
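The new pattern in this hunk (dropping the old scalar coordinate before stacking along a new dimension) can be sketched end to end; the ``da`` below is a stand-in we construct for illustration:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Stand-in for the ``da`` defined earlier on the docs page.
da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    dims=["x", "y"],
    coords={"x": ["a", "b"], "y": [10, 20, 30]},
)

# Dropping the old scalar "x" coordinate first avoids carrying
# conflicting coordinate values into the concatenated result.
da0 = da.isel(x=0).drop_vars("x")
da1 = da.isel(x=1).drop_vars("x")

# A pandas.Index both names the new dimension and labels its values.
stacked = xr.concat([da0, da1], pd.Index([-90, -100], name="new_dim"))
```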
@@ -75,6 +77,12 @@ between datasets. With the default parameters, xarray will load some coordinate
 variables into memory to compare them between datasets. This may be prohibitively
 expensive if you are manipulating your dataset lazily using :ref:`dask`.
 
+.. note::
+
+    The default values for many of these options will be changing in a future
+    version of xarray. You can opt into the new default values early using
+    ``xr.set_options(use_new_combine_kwarg_defaults=True)``.
+
 .. _merge:
 
 Merge
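The opt-in named in the note can be applied globally or as a context manager. Since ``use_new_combine_kwarg_defaults`` only exists in xarray versions that ship this change, the sketch below guards the call so it also runs on older installs (the guard is our addition, not part of the upstream docs):

```python
import xarray as xr

# set_options raises ValueError for option names it does not know, so a
# try/except lets the snippet run on xarray versions that predate the
# new combine-kwarg defaults.
try:
    with xr.set_options(use_new_combine_kwarg_defaults=True):
        # combine/merge/concat calls in this block see the new defaults
        new_defaults_available = True
except ValueError:
    new_defaults_available = False
```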
@@ -94,10 +102,18 @@ If you merge another dataset (or a dictionary including data array objects), by
 default the resulting dataset will be aligned on the **union** of all index
 coordinates:
 
+.. note::
+
+    The default values for ``join`` and ``compat`` will be changing in a future
+    version of xarray. This change will mean that the resulting dataset will
+    not be aligned. You can opt into the new default values early using
+    ``xr.set_options(use_new_combine_kwarg_defaults=True)``, or explicitly set
+    ``join='outer'`` to preserve the old behavior.
+
 .. ipython:: python
 
     other = xr.Dataset({"bar": ("x", [1, 2, 3, 4]), "x": list("abcd")})
-    xr.merge([ds, other])
+    xr.merge([ds, other], join="outer")
 
 This ensures that ``merge`` is non-destructive. ``xarray.MergeError`` is raised
 if you attempt to merge two variables with the same name but different values:
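To make the union alignment concrete, here is a self-contained sketch; the ``ds`` below is a minimal stand-in for the dataset built earlier on the docs page, while ``other`` mirrors the example in this hunk:

```python
import numpy as np
import xarray as xr

# Minimal stand-in for the ``ds`` used on this docs page.
ds = xr.Dataset({"foo": ("x", [1.0, 2.0])}, coords={"x": ["a", "b"]})
other = xr.Dataset({"bar": ("x", [1, 2, 3, 4]), "x": list("abcd")})

# join="outer" aligns both inputs on the union of the "x" index;
# "foo" is padded with NaN at the labels it does not cover ("c", "d").
merged = xr.merge([ds, other], join="outer")
```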
@@ -114,6 +130,16 @@ if you attempt to merge two variables with the same name but different values:
     array([[ 1.4691123 ,  0.71713666, -0.5090585 ],
            [-0.13563237,  2.21211203,  0.82678535]])
 
+.. note::
+
+    In the future the default value for ``compat`` will change from
+    ``compat='no_conflicts'`` to ``compat='override'``. In this scenario the
+    values in the first object override the values in all other objects.
+
+    .. ipython:: python
+
+        xr.merge([ds, ds + 1], compat="override")
+
 The same non-destructive merging between ``DataArray`` index coordinates is
 used in the :py:class:`~xarray.Dataset` constructor:
 
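The ``compat="override"`` behavior from the note can be verified with a tiny stand-in dataset (our construction, not the page's ``ds``):

```python
import xarray as xr

# Minimal stand-in for the ``ds`` used on this docs page.
ds = xr.Dataset({"a": ("x", [1, 2, 3])}, coords={"x": [10, 20, 30]})

# With compat="override", no comparison is performed: conflicting values
# are simply taken from the first object in the list, so merging ds with
# ds + 1 keeps ds's values for "a".
merged = xr.merge([ds, ds + 1], compat="override")
```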
@@ -144,6 +170,11 @@ For datasets, ``ds0.combine_first(ds1)`` works similarly to
 there are conflicting values in variables to be merged, whereas
 ``.combine_first`` defaults to the calling object's values.
 
+.. note::
+
+    In a future version of xarray the default options for ``xr.merge`` will
+    change such that the behavior matches ``combine_first``.
+
 .. _update:
 
 Update
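A short sketch of the ``combine_first`` preference described above, using two hypothetical datasets of our own construction:

```python
import numpy as np
import xarray as xr

ds0 = xr.Dataset({"a": ("x", [1.0, np.nan, 3.0])}, coords={"x": [0, 1, 2]})
ds1 = xr.Dataset({"a": ("x", [10.0, 20.0, np.nan])}, coords={"x": [0, 1, 2]})

# combine_first keeps the calling object's values and only fills its
# missing entries from the argument.
filled = ds0.combine_first(ds1)
```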
@@ -236,7 +267,7 @@ coordinates as long as any non-missing values agree or are disjoint:
 
     ds1 = xr.Dataset({"a": ("x", [10, 20, 30, np.nan])}, {"x": [1, 2, 3, 4]})
     ds2 = xr.Dataset({"a": ("x", [np.nan, 30, 40, 50])}, {"x": [2, 3, 4, 5]})
-    xr.merge([ds1, ds2], compat="no_conflicts")
+    xr.merge([ds1, ds2], join="outer", compat="no_conflicts")
 
 Note that due to the underlying representation of missing values as floating
 point numbers (``NaN``), variable data type is not always preserved when merging
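The dtype caveat can be demonstrated directly: outer-joining integer data on disjoint indexes forces NaN fill values, which promotes the result to floating point (a sketch with datasets of our own construction):

```python
import numpy as np
import xarray as xr

ds1 = xr.Dataset({"a": ("x", [10, 20])}, coords={"x": [1, 2]})
ds2 = xr.Dataset({"a": ("x", [30, 40])}, coords={"x": [3, 4]})

# Aligning on the union index [1, 2, 3, 4] pads each input with NaN,
# so the integer "a" becomes float64 in the merged result.
merged = xr.merge([ds1, ds2], join="outer", compat="no_conflicts")
```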
@@ -295,13 +326,12 @@ they are concatenated in order based on the values in their dimension
 coordinates, not on their position in the list passed to ``combine_by_coords``.
 
 .. ipython:: python
-    :okwarning:
 
     x1 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [0, 1, 2])])
     x2 = xr.DataArray(name="foo", data=np.random.randn(3), coords=[("x", [3, 4, 5])])
     xr.combine_by_coords([x2, x1])
 
-These functions can be used by :py:func:`~xarray.open_mfdataset` to open many
+These functions are used by :py:func:`~xarray.open_mfdataset` to open many
 files as one dataset. The particular function used is specified by setting the
 argument ``'combine'`` to ``'by_coords'`` or ``'nested'``. This is useful for
 situations where your data is split across many files in multiple locations,
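The coordinate-based ordering claim in this hunk can be checked with deterministic data (constant arrays instead of ``np.random.randn``, so the assertion below is reproducible):

```python
import numpy as np
import xarray as xr

x1 = xr.DataArray(name="foo", data=np.zeros(3), coords=[("x", [0, 1, 2])])
x2 = xr.DataArray(name="foo", data=np.ones(3), coords=[("x", [3, 4, 5])])

# Passing [x2, x1] out of order still yields a result sorted by the
# "x" coordinate values, not by list position.
combined = xr.combine_by_coords([x2, x1])
```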