Merged
2 changes: 2 additions & 0 deletions doc/source/conf.py
@@ -114,6 +114,8 @@
     ):
         exclude_patterns.append(rel_fname)
     elif single_doc and rel_fname != pattern:
+        if "\\" in rel_fname:
+            rel_fname = rel_fname.replace("\\", "/")
         exclude_patterns.append(rel_fname)

 with open(os.path.join(source_path, "index.rst.template"), encoding="utf-8") as f:
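The two added lines normalize Windows-style path separators before the path is used as a Sphinx exclude pattern, since Sphinx matches patterns with forward slashes while `os.path`-derived relative paths use backslashes on Windows. A minimal sketch of the idea; `normalize_for_sphinx` is a hypothetical helper, not part of the PR:

```python
# Hypothetical helper illustrating the conf.py change: on Windows,
# os.path.relpath returns backslash-separated paths, but Sphinx
# exclude_patterns are matched against slash-separated paths, so a raw
# Windows path would silently fail to exclude anything.
def normalize_for_sphinx(rel_fname: str) -> str:
    """Convert a backslash-separated relative path to the slash form Sphinx expects."""
    if "\\" in rel_fname:
        rel_fname = rel_fname.replace("\\", "/")
    return rel_fname

exclude_patterns = []
exclude_patterns.append(normalize_for_sphinx("user_guide\\enhancingperf.rst"))  # Windows-style input
exclude_patterns.append(normalize_for_sphinx("user_guide/indexing.rst"))        # already normalized
print(exclude_patterns)  # ['user_guide/enhancingperf.rst', 'user_guide/indexing.rst']
```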
12 changes: 6 additions & 6 deletions doc/source/user_guide/enhancingperf.rst
@@ -50,7 +50,7 @@ We have a :class:`DataFrame` to which we want to apply a function row-wise.
         {
             "a": np.random.randn(1000),
             "b": np.random.randn(1000),
-            "N": np.random.randint(100, 1000, (1000)),
+            "N": np.random.randint(100, 1000, (1000), dtype="int64"),
             "x": "x",
         }
     )
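The explicit ``dtype="int64"`` in this hunk matters because ``np.random.randint`` defaults to the platform's native integer type, which is 32-bit on Windows and 64-bit on most Linux/macOS builds, so code downstream (such as typed Cython signatures) can behave differently per platform. A minimal demonstration in plain NumPy, independent of the docs build:

```python
import numpy as np

# Without an explicit dtype, randint uses the platform default integer
# (int32 on Windows, int64 on most Linux/macOS builds); pinning the dtype
# makes the column's type reproducible everywhere.
n_default = np.random.randint(100, 1000, 1000)
n_pinned = np.random.randint(100, 1000, 1000, dtype="int64")

print(n_default.dtype)  # platform dependent: int32 on Windows, int64 elsewhere
print(n_pinned.dtype)   # always int64
```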
@@ -83,7 +83,7 @@ using the `prun ipython magic function <https://ipython.readthedocs.io/en/stable
 .. ipython:: python

     # most time consuming 4 calls
-    %prun -l 4 df.apply(lambda x: integrate_f(x["a"], x["b"], x["N"]), axis=1)  # noqa E999
+    %prun -l 4 df.apply(lambda x: integrate_f(x['a'], x['b'], x['N']), axis=1)

 By far the majority of time is spent inside either ``integrate_f`` or ``f``,
 hence we'll concentrate our efforts on cythonizing these two functions.
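The functions being profiled are defined earlier in ``enhancingperf.rst`` and are not shown in this diff; a rough reconstruction for context (a sketch, not part of the PR):

```python
# Sketch of the pure-Python functions that the %prun call above profiles,
# reconstructed from the surrounding pandas user guide: a left Riemann-sum
# integration of f(x) = x * (x - 1) over [a, b] with N steps.
def f(x):
    return x * (x - 1)

def integrate_f(a, b, N):
    s = 0
    dx = (b - a) / N
    for i in range(N):
        s += f(a + i * dx)
    return s * dx

print(integrate_f(0.0, 1.0, 1000))  # ≈ -0.1667, since the exact integral is -1/6
```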
@@ -164,7 +164,7 @@ can be improved by passing an ``np.ndarray``.

 .. ipython:: python

-    %prun -l 4 df.apply(lambda x: integrate_f_typed(x["a"], x["b"], x["N"]), axis=1)
+    %prun -l 4 df.apply(lambda x: integrate_f_typed(x['a'], x['b'], x['N']), axis=1)

 .. ipython::

@@ -204,7 +204,7 @@ calls are needed to utilize this function.

 .. ipython:: python

-    %timeit apply_integrate_f(df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy())
+    %timeit apply_integrate_f(df['a'].to_numpy(), df['b'].to_numpy(), df['N'].to_numpy())

 Performance has improved from the prior implementation by almost ten times.

@@ -218,7 +218,7 @@ and ``wraparound`` checks can yield more performance.

 .. ipython:: python

-    %prun -l 4 apply_integrate_f(df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy())
+    %prun -l 4 apply_integrate_f(df['a'].to_numpy(), df['b'].to_numpy(), df['N'].to_numpy())

 .. ipython::

@@ -253,7 +253,7 @@ and ``wraparound`` checks can yield more performance.

 .. ipython:: python

-    %timeit apply_integrate_f_wrap(df["a"].to_numpy(), df["b"].to_numpy(), df["N"].to_numpy())
+    %timeit apply_integrate_f_wrap(df['a'].to_numpy(), df['b'].to_numpy(), df['N'].to_numpy())

 However, a loop indexer ``i`` accessing an invalid location in an array would cause a segfault because memory access isn't checked.
 For more about ``boundscheck`` and ``wraparound``, see the Cython docs on
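To make concrete what the ``wraparound`` and ``boundscheck`` directives disable: in ordinary Python/NumPy, a negative index wraps to the end of the array and an out-of-range index raises ``IndexError``. A small sketch of the checked behavior that Cython gives up for speed:

```python
import numpy as np

# With Cython's wraparound(False), a negative index no longer means
# "count from the end"; with boundscheck(False), an out-of-range index
# is no longer caught. Both then read arbitrary memory (a possible
# segfault) instead of the safe behavior shown here.
a = np.arange(5)
print(a[-1])  # 4: wraparound-checked indexing treats -1 as the last element

try:
    a[10]
except IndexError:
    print("bounds check caught the bad index")  # disabled checks would not catch this
```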