
Commit 1b02145

Merge branch 'main' into periodindex-to_datetime_inconsistent_with_its_docstring
2 parents a43ade7 + b1c2ba7

File tree: 30 files changed, +360 and -56 lines

.github/workflows/wheels.yml

Lines changed: 1 addition & 1 deletion

@@ -152,7 +152,7 @@ jobs:
         run: echo "sdist_name=$(cd ./dist && ls -d */)" >> "$GITHUB_ENV"

       - name: Build wheels
-        uses: pypa/cibuildwheel@v2.21.3
+        uses: pypa/cibuildwheel@v2.22.0
         with:
           package-dir: ./dist/${{ startsWith(matrix.buildplat[1], 'macosx') && env.sdist_name || needs.build_sdist.outputs.sdist_file }}
         env:

ci/code_checks.sh

Lines changed: 0 additions & 13 deletions

@@ -73,7 +73,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
         -i "pandas.Period.freq GL08" \
         -i "pandas.Period.ordinal GL08" \
         -i "pandas.RangeIndex.from_range PR01,SA01" \
-        -i "pandas.Series.dt.freq GL08" \
         -i "pandas.Series.dt.unit GL08" \
         -i "pandas.Series.pad PR01,SA01" \
         -i "pandas.Timedelta.max PR02" \
@@ -92,15 +91,11 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
         -i "pandas.core.groupby.DataFrameGroupBy.boxplot PR07,RT03,SA01" \
         -i "pandas.core.groupby.DataFrameGroupBy.get_group RT03,SA01" \
         -i "pandas.core.groupby.DataFrameGroupBy.indices SA01" \
-        -i "pandas.core.groupby.DataFrameGroupBy.nth PR02" \
         -i "pandas.core.groupby.DataFrameGroupBy.nunique SA01" \
         -i "pandas.core.groupby.DataFrameGroupBy.plot PR02" \
         -i "pandas.core.groupby.DataFrameGroupBy.sem SA01" \
         -i "pandas.core.groupby.SeriesGroupBy.get_group RT03,SA01" \
         -i "pandas.core.groupby.SeriesGroupBy.indices SA01" \
-        -i "pandas.core.groupby.SeriesGroupBy.is_monotonic_decreasing SA01" \
-        -i "pandas.core.groupby.SeriesGroupBy.is_monotonic_increasing SA01" \
-        -i "pandas.core.groupby.SeriesGroupBy.nth PR02" \
         -i "pandas.core.groupby.SeriesGroupBy.plot PR02" \
         -i "pandas.core.groupby.SeriesGroupBy.sem SA01" \
         -i "pandas.core.resample.Resampler.get_group RT03,SA01" \
@@ -114,19 +109,11 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
         -i "pandas.core.resample.Resampler.std SA01" \
         -i "pandas.core.resample.Resampler.transform PR01,RT03,SA01" \
         -i "pandas.core.resample.Resampler.var SA01" \
-        -i "pandas.errors.AttributeConflictWarning SA01" \
-        -i "pandas.errors.ChainedAssignmentError SA01" \
-        -i "pandas.errors.DuplicateLabelError SA01" \
         -i "pandas.errors.IntCastingNaNError SA01" \
-        -i "pandas.errors.InvalidIndexError SA01" \
         -i "pandas.errors.NullFrequencyError SA01" \
-        -i "pandas.errors.NumExprClobberingError SA01" \
         -i "pandas.errors.NumbaUtilError SA01" \
-        -i "pandas.errors.OutOfBoundsTimedelta SA01" \
         -i "pandas.errors.PerformanceWarning SA01" \
-        -i "pandas.errors.PossibleDataLossError SA01" \
         -i "pandas.errors.UndefinedVariableError PR01,SA01" \
-        -i "pandas.errors.UnsortedIndexError SA01" \
         -i "pandas.errors.ValueLabelTypeMismatch SA01" \
         -i "pandas.infer_freq SA01" \
         -i "pandas.io.json.build_table_schema PR07,RT03,SA01" \

doc/source/user_guide/window.rst

Lines changed: 3 additions & 3 deletions

@@ -567,9 +567,9 @@ One must have :math:`0 < \alpha \leq 1`, and while it is possible to pass

     \alpha =
     \begin{cases}
-    \frac{2}{s + 1},               & \text{for span}\ s \geq 1\\
-    \frac{1}{1 + c},               & \text{for center of mass}\ c \geq 0\\
-    1 - \exp^{\frac{\log 0.5}{h}}, & \text{for half-life}\ h > 0
+        \frac{2}{s + 1},            & \text{for span}\ s \geq 1\\
+        \frac{1}{1 + c},            & \text{for center of mass}\ c \geq 0\\
+        1 - e^{\frac{\log 0.5}{h}}, & \text{for half-life}\ h > 0
     \end{cases}

 One must specify precisely one of **span**, **center of mass**, **half-life**
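
The corrected cases all describe the same smoothing factor. A minimal check, using only public pandas and NumPy APIs, that the span, center-of-mass, and half-life parameterizations of ``ewm`` agree with the formula:

    import numpy as np
    import pandas as pd

    s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

    # span s maps to alpha = 2 / (s + 1)
    alpha_span = 2 / (10 + 1)
    assert np.allclose(s.ewm(span=10).mean(), s.ewm(alpha=alpha_span).mean())

    # center of mass c maps to alpha = 1 / (1 + c)
    alpha_com = 1 / (1 + 4)
    assert np.allclose(s.ewm(com=4).mean(), s.ewm(alpha=alpha_com).mean())

    # half-life h maps to alpha = 1 - e**(log(0.5) / h), i.e. 1 - 0.5 ** (1 / h)
    alpha_halflife = 1 - np.exp(np.log(0.5) / 3)
    assert np.allclose(s.ewm(halflife=3).mean(), s.ewm(alpha=alpha_halflife).mean())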

doc/source/whatsnew/v3.0.0.rst

Lines changed: 5 additions & 1 deletion

@@ -54,6 +54,7 @@ Other enhancements
 - :meth:`Series.cummin` and :meth:`Series.cummax` now supports :class:`CategoricalDtype` (:issue:`52335`)
 - :meth:`Series.plot` now correctly handle the ``ylabel`` parameter for pie charts, allowing for explicit control over the y-axis label (:issue:`58239`)
 - :meth:`DataFrame.plot.scatter` argument ``c`` now accepts a column of strings, where rows with the same string are colored identically (:issue:`16827` and :issue:`16485`)
+- :func:`read_parquet` accepts ``to_pandas_kwargs`` which are forwarded to :meth:`pyarrow.Table.to_pandas` which enables passing additional keywords to customize the conversion to pandas, such as ``maps_as_pydicts`` to read the Parquet map data type as python dictionaries (:issue:`56842`)
 - :meth:`DataFrameGroupBy.transform`, :meth:`SeriesGroupBy.transform`, :meth:`DataFrameGroupBy.agg`, :meth:`SeriesGroupBy.agg`, :meth:`RollingGroupby.apply`, :meth:`ExpandingGroupby.apply`, :meth:`Rolling.apply`, :meth:`Expanding.apply`, :meth:`DataFrame.apply` with ``engine="numba"`` now supports positional arguments passed as kwargs (:issue:`58995`)
 - :meth:`Series.map` can now accept kwargs to pass on to func (:issue:`59814`)
 - :meth:`pandas.concat` will raise a ``ValueError`` when ``ignore_index=True`` and ``keys`` is not ``None`` (:issue:`59274`)
@@ -626,6 +627,7 @@ Datetimelike
 - Bug in :meth:`Series.dt.microsecond` producing incorrect results for pyarrow backed :class:`Series`. (:issue:`59154`)
 - Bug in :meth:`to_datetime` not respecting dayfirst if an uncommon date string was passed. (:issue:`58859`)
 - Bug in :meth:`to_datetime` reports incorrect index in case of any failure scenario. (:issue:`58298`)
+- Bug in :meth:`to_datetime` wrongly converts when ``arg`` is a ``np.datetime64`` object with unit of ``ps``. (:issue:`60341`)
 - Bug in setting scalar values with mismatched resolution into arrays with non-nanosecond ``datetime64``, ``timedelta64`` or :class:`DatetimeTZDtype` incorrectly truncating those scalars (:issue:`56410`)

 Timedelta
@@ -688,6 +690,7 @@ I/O
 - Bug in :meth:`DataFrame.from_records` where ``columns`` parameter with numpy structured array was not reordering and filtering out the columns (:issue:`59717`)
 - Bug in :meth:`DataFrame.to_dict` raises unnecessary ``UserWarning`` when columns are not unique and ``orient='tight'``. (:issue:`58281`)
 - Bug in :meth:`DataFrame.to_excel` when writing empty :class:`DataFrame` with :class:`MultiIndex` on both axes (:issue:`57696`)
+- Bug in :meth:`DataFrame.to_excel` where the :class:`MultiIndex` index with a period level was not a date (:issue:`60099`)
 - Bug in :meth:`DataFrame.to_stata` when writing :class:`DataFrame` and ``byteorder=`big```. (:issue:`58969`)
 - Bug in :meth:`DataFrame.to_stata` when writing more than 32,000 value labels. (:issue:`60107`)
 - Bug in :meth:`DataFrame.to_string` that raised ``StopIteration`` with nested DataFrames. (:issue:`16098`)
@@ -763,7 +766,7 @@ ExtensionArray

 Styler
 ^^^^^^
--
+- Bug in :meth:`Styler.to_latex` where styling column headers when combined with a hidden index or hidden index-levels is fixed.

 Other
 ^^^^^
@@ -787,6 +790,7 @@ Other
 - Bug in :meth:`Series.dt` methods in :class:`ArrowDtype` that were returning incorrect values. (:issue:`57355`)
 - Bug in :meth:`Series.rank` that doesn't preserve missing values for nullable integers when ``na_option='keep'``. (:issue:`56976`)
 - Bug in :meth:`Series.replace` and :meth:`DataFrame.replace` inconsistently replacing matching instances when ``regex=True`` and missing values are present. (:issue:`56599`)
+- Bug in :meth:`Series.to_string` when series contains complex floats with exponents (:issue:`60405`)
 - Bug in :meth:`read_csv` where chained fsspec TAR file and ``compression="infer"`` fails with ``tarfile.ReadError`` (:issue:`60028`)
 - Bug in Dataframe Interchange Protocol implementation was returning incorrect results for data buffers' associated dtype, for string and datetime columns (:issue:`54781`)
 - Bug in ``Series.list`` methods not preserving the original :class:`Index`. (:issue:`58425`)
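
Of the entries added above, the ``to_pandas_kwargs`` pass-through for ``read_parquet`` is the most API-visible. A hedged sketch of how it could be used; the file name and the presence of a Parquet map column are hypothetical:

    import pandas as pd

    # ``to_pandas_kwargs`` is forwarded as-is to pyarrow.Table.to_pandas.
    df = pd.read_parquet(
        "events_with_map_column.parquet",                # hypothetical file
        engine="pyarrow",
        to_pandas_kwargs={"maps_as_pydicts": "strict"},  # read map values as Python dicts
    )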

pandas/_libs/src/vendored/numpy/datetime/np_datetime.c

Lines changed: 6 additions & 5 deletions

@@ -660,11 +660,12 @@ void pandas_datetime_to_datetimestruct(npy_datetime dt, NPY_DATETIMEUNIT base,
     perday = 24LL * 60 * 60 * 1000 * 1000 * 1000 * 1000;

     set_datetimestruct_days(extract_unit(&dt, perday), out);
-    out->hour = (npy_int32)extract_unit(&dt, 1000LL * 1000 * 1000 * 60 * 60);
-    out->min = (npy_int32)extract_unit(&dt, 1000LL * 1000 * 1000 * 60);
-    out->sec = (npy_int32)extract_unit(&dt, 1000LL * 1000 * 1000);
-    out->us = (npy_int32)extract_unit(&dt, 1000LL);
-    out->ps = (npy_int32)(dt * 1000);
+    out->hour =
+        (npy_int32)extract_unit(&dt, 1000LL * 1000 * 1000 * 1000 * 60 * 60);
+    out->min = (npy_int32)extract_unit(&dt, 1000LL * 1000 * 1000 * 1000 * 60);
+    out->sec = (npy_int32)extract_unit(&dt, 1000LL * 1000 * 1000 * 1000);
+    out->us = (npy_int32)extract_unit(&dt, 1000LL * 1000);
+    out->ps = (npy_int32)(dt);
     break;

   case NPY_FR_fs:
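
This is the picosecond branch of the conversion: after the days are extracted, the remainder is in picoseconds, so the hour/minute/second/microsecond divisors must be ps-scaled and the residue stored directly in ``out->ps``. The ``ps`` whatsnew entry above (:issue:`60341`) describes the user-visible effect. A minimal sketch of the corrected behavior, assuming a build containing this fix; the timestamp is near the epoch because ``datetime64[ps]`` spans only about ±106 days:

    import numpy as np
    import pandas as pd

    # The sub-second part should survive the conversion unchanged.
    value = np.datetime64("1970-01-05T03:04:05.123456", "ps")
    print(pd.to_datetime(value))  # expected: 1970-01-05 03:04:05.123456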

pandas/_libs/tslibs/np_datetime.pyx

Lines changed: 4 additions & 0 deletions

@@ -201,6 +201,10 @@ class OutOfBoundsTimedelta(ValueError):

     Representation should be within a timedelta64[ns].

+    See Also
+    --------
+    date_range : Return a fixed frequency DatetimeIndex.
+
     Examples
     --------
     >>> pd.date_range(start="1/1/1700", freq="B", periods=100000)
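
The new See Also section matches the ``pandas.errors.OutOfBoundsTimedelta SA01`` ignore dropped from ci/code_checks.sh above. For context, a minimal sketch that triggers the error using only public API:

    import pandas as pd

    try:
        # ~10**9 days is far beyond the ~292-year span of timedelta64[ns]
        pd.Timedelta(days=10**9)
    except pd.errors.OutOfBoundsTimedelta as exc:
        print(type(exc).__name__, exc)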

pandas/core/arrays/interval.py

Lines changed: 3 additions & 1 deletion

@@ -1055,7 +1055,9 @@ def shift(self, periods: int = 1, fill_value: object = None) -> IntervalArray:
             from pandas import Index

             fill_value = Index(self._left, copy=False)._na_value
-            empty = IntervalArray.from_breaks([fill_value] * (empty_len + 1))
+            empty = IntervalArray.from_breaks(
+                [fill_value] * (empty_len + 1), closed=self.closed
+            )
         else:
             empty = self._from_sequence([fill_value] * empty_len, dtype=self.dtype)
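
Passing ``closed=self.closed`` keeps the NA padding compatible with the array being shifted. A small sketch of the intended behavior; the values below are illustrative only:

    import pandas as pd

    arr = pd.arrays.IntervalArray.from_breaks([0, 1, 2, 3], closed="left")
    shifted = arr.shift(1)  # the first slot becomes NA padding
    print(shifted.closed)   # expected after the fix: "left", matching the original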

pandas/core/frame.py

Lines changed: 2 additions & 1 deletion

@@ -4742,7 +4742,8 @@ def eval(self, expr: str, *, inplace: bool = False, **kwargs) -> Any | None:
        3   4   4  7  8  0
        4   5   2  6  7  3

-        For columns with spaces in their name, you can use backtick quoting.
+        For columns with spaces or other disallowed characters in their name, you can
+        use backtick quoting.

        >>> df.eval("B * `C&C`")
        0    100
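
A brief illustration of the backtick quoting the reworded sentence describes; the frame and column names below are made up:

    import pandas as pd

    df = pd.DataFrame({"B": [2, 3], "C&C": [10, 20], "col name": [1, 2]})
    print(df.eval("B * `C&C`"))       # column name with a disallowed character
    print(df.eval("B + `col name`"))  # column name with a space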

pandas/core/groupby/generic.py

Lines changed: 10 additions & 0 deletions

@@ -1443,6 +1443,11 @@ def is_monotonic_increasing(self) -> Series:
        -------
        Series

+        See Also
+        --------
+        SeriesGroupBy.is_monotonic_decreasing : Return whether each group's values
+            are monotonically decreasing.
+
        Examples
        --------
        >>> s = pd.Series([2, 1, 3, 4], index=["Falcon", "Falcon", "Parrot", "Parrot"])
@@ -1462,6 +1467,11 @@ def is_monotonic_decreasing(self) -> Series:
        -------
        Series

+        See Also
+        --------
+        SeriesGroupBy.is_monotonic_increasing : Return whether each group's values
+            are monotonically increasing.
+
        Examples
        --------
        >>> s = pd.Series([2, 1, 3, 4], index=["Falcon", "Falcon", "Parrot", "Parrot"])
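
These cross-references pair with the two SA01 ignores removed from ci/code_checks.sh above. For reference, a short example exercising both grouped properties, reusing the series from the existing docstring examples:

    import pandas as pd

    s = pd.Series([2, 1, 3, 4], index=["Falcon", "Falcon", "Parrot", "Parrot"])
    print(s.groupby(level=0).is_monotonic_increasing)  # Falcon: False, Parrot: True
    print(s.groupby(level=0).is_monotonic_decreasing)  # Falcon: True,  Parrot: False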

pandas/core/groupby/groupby.py

Lines changed: 0 additions & 13 deletions

@@ -3983,19 +3983,6 @@ def nth(self) -> GroupByNthSelector:
        'all' or 'any'; this is equivalent to calling dropna(how=dropna)
        before the groupby.

-        Parameters
-        ----------
-        n : int, slice or list of ints and slices
-            A single nth value for the row or a list of nth values or slices.
-
-            .. versionchanged:: 1.4.0
-                Added slice and lists containing slices.
-                Added index notation.
-
-        dropna : {'any', 'all', None}, default None
-            Apply the specified dropna operation before counting which row is
-            the nth row. Only supported if n is an int.
-
        Returns
        -------
        Series or DataFrame
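
The removed block documented parameters that the ``nth`` property itself does not take, which appears to be why the matching PR02 ignores were also dropped from ci/code_checks.sh above. A brief usage sketch with made-up data:

    import pandas as pd

    df = pd.DataFrame({"A": [1, 1, 2, 2], "B": [10, 20, 30, 40]})
    print(df.groupby("A").nth(1))        # the second row of each group
    print(df.groupby("A").nth([0, -1]))  # the first and last row of each group
    print(df.groupby("A").nth[0])        # index notation is also supported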
