Commit a7d5163

Fix doctest issues
1 parent f923b24

3 files changed (+16, -16 lines)

advanced/mathematical_optimization/index.rst

Lines changed: 14 additions & 14 deletions
@@ -613,16 +613,16 @@ are also supported by L-BFGS-B::

     >>> def jacobian(x):
     ...    return np.array((-2*.5*(1 - x[0]) - 4*x[0]*(x[1] - x[0]**2), 2*(x[1] - x[0]**2)))
     >>> sp.optimize.minimize(f, [2, 2], method="L-BFGS-B", jac=jacobian)
-      message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
-      success: True
-       status: 0
-          fun: 1.4417677473...e-15
-            x: [ 1.000e+00  1.000e+00]
-          nit: 16
-          jac: [ 1.023e-07 -2.593e-08]
-         nfev: 17
-         njev: 17
-     hess_inv: <2x2 LbfgsInvHessProduct with dtype=float64>
+      message: CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL
+      success: True
+       status: 0
+          fun: 1.4417677473...e-15
+            x: [ 1.000e+00  1.000e+00]
+          nit: 16
+          jac: [ 1.023e-07 -2.593e-08]
+         nfev: 17
+         njev: 17
+     hess_inv: <2x2 LbfgsInvHessProduct with dtype=float64>

     Gradient-less methods
     ----------------------
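What broke this doctest is the wording of the L-BFGS-B convergence message, which differs between SciPy releases. A minimal sketch of a version-tolerant check that reads the OptimizeResult attributes instead of matching the printed repr; the objective f is assumed from the surrounding tutorial text (it is not part of this hunk), chosen to be consistent with the jacobian above:

    import numpy as np
    import scipy as sp
    import scipy.optimize  # explicit import, independent of SciPy's lazy submodule loading

    # Objective assumed from the tutorial context, consistent with the jacobian above.
    def f(x):
        return 0.5 * (1 - x[0])**2 + (x[1] - x[0]**2)**2

    def jacobian(x):
        return np.array((-2 * 0.5 * (1 - x[0]) - 4 * x[0] * (x[1] - x[0]**2),
                         2 * (x[1] - x[0]**2)))

    res = sp.optimize.minimize(f, [2, 2], method="L-BFGS-B", jac=jacobian)
    # Attribute access does not depend on the repr wording that changed above.
    print(res.success, res.status)  # True 0
    print(res.x)                    # approximately [1. 1.]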
@@ -886,8 +886,8 @@ Lets try to minimize the norm of the following vectorial function::

     >>> x0 = np.zeros(10)
     >>> sp.optimize.leastsq(f, x0)
-    (array([0.        , 0.11111111, 0.22222222, 0.33333333, 0.44444444,
-           0.55555556, 0.66666667, 0.77777778, 0.88888889, 1.        ]), 2)
+    (array([0.        , 0.11111111, 0.22222222, 0.33333333, 0.44444444,
+           0.55555556, 0.66666667, 0.77777778, 0.88888889, 1.        ]), 2)

     This took 67 function evaluations (check it with 'full_output=1'). What
     if we compute the norm ourselves and use a good generic optimizer
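The context above suggests checking the evaluation count with 'full_output=1'. A short sketch of that call; the residual f used here is a hypothetical stand-in whose least-squares solution matches the output above (the tutorial's actual f is defined earlier in the file and is not part of this hunk):

    import numpy as np
    import scipy as sp
    import scipy.optimize  # explicit import, independent of SciPy's lazy submodule loading

    # Hypothetical residual: its least-squares solution is np.linspace(0, 1, 10).
    def f(x):
        return x - np.linspace(0, 1, 10)

    x0 = np.zeros(10)
    x, cov_x, infodict, mesg, ier = sp.optimize.leastsq(f, x0, full_output=True)
    print(infodict["nfev"])  # number of function evaluations
    print(x)                 # approximately [0., 0.111..., ..., 1.]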
@@ -958,7 +958,7 @@ support bound constraints with the parameter ``bounds``::
     >>> def f(x):
     ...     return np.sqrt((x[0] - 3)**2 + (x[1] - 2)**2)
     >>> sp.optimize.minimize(f, np.array([0, 0]), bounds=((-1.5, 1.5), (-1.5, 1.5)))
-      message: CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL
+      message: CONVERGENCE: NORM OF PROJECTED GRADIENT <=_PGTOL
      success: True
       status: 0
          fun: 1.5811388300841898
@@ -967,7 +967,7 @@ support bound constraints with the parameter ``bounds``::
          jac: [-9.487e-01 -3.162e-01]
         nfev: 9
         njev: 3
-    hess_inv: <2x2 LbfgsInvHessProduct with dtype=float64>
+    hess_inv: <2x2 LbfgsInvHessProduct with dtype=float64>

     .. image:: auto_examples/images/sphx_glr_plot_constraints_002.png
        :target: auto_examples/plot_constraints.html
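The same message change shows up in the bounds example because minimize falls back to L-BFGS-B when bounds are given and no method is specified (hence the LbfgsInvHessProduct in the output). A small sketch reusing the distance function from the hunk above, again checking attributes rather than the repr:

    import numpy as np
    import scipy as sp
    import scipy.optimize  # explicit import, independent of SciPy's lazy submodule loading

    def f(x):
        return np.sqrt((x[0] - 3)**2 + (x[1] - 2)**2)

    res = sp.optimize.minimize(f, np.array([0, 0]),
                               bounds=((-1.5, 1.5), (-1.5, 1.5)))
    # The optimum sits on the corner of the box closest to (3, 2).
    print(res.x)    # approximately [1.5 1.5]
    print(res.fun)  # approximately 1.5811, i.e. sqrt(2.5)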

advanced/scipy_sparse/solvers.rst

Lines changed: 1 addition & 1 deletion
@@ -150,7 +150,7 @@ LinearOperator Class
     ...
     >>> A = sp.sparse.linalg.LinearOperator((2, 2), matvec=mv)
     >>> A
-    <2x2 _CustomLinearOperator with dtype=float64>
+    <2x2 _CustomLinearOperator with dtype=int8>
     >>> A.matvec(np.ones(2))
     array([2., 3.])
     >>> A * np.ones(2)
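The displayed dtype changes here because LinearOperator infers a dtype by probing matvec when none is passed, and the details of that probe (and so the inferred dtype) appear to differ across SciPy and NumPy versions. Passing dtype explicitly keeps the repr stable. A sketch, with mv assumed from the surrounding tutorial context (only '...' is visible in the hunk):

    import numpy as np
    import scipy as sp
    import scipy.sparse.linalg  # explicit import, independent of SciPy's lazy submodule loading

    # matvec assumed from the tutorial context: a diagonal operator diag(2, 3).
    def mv(v):
        return np.array([2 * v[0], 3 * v[1]])

    # An explicit dtype avoids depending on how it would be inferred from mv.
    A = sp.sparse.linalg.LinearOperator((2, 2), matvec=mv, dtype=np.float64)
    print(A)                     # <2x2 _CustomLinearOperator with dtype=float64>
    print(A.matvec(np.ones(2)))  # [2. 3.]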

packages/statistics/index.rst

Lines changed: 1 addition & 1 deletion
@@ -365,7 +365,7 @@ will affect the conclusions of the test, we can use a `Wilcoxon signed-rank test
    this assumption at the expense of test power::

    >>> sp.stats.wilcoxon(data['VIQ'])
-   WilcoxonResult(statistic=np.float64(0.0), pvalue=np.float64(1.8189894...e-12))
+   WilcoxonResult(statistic=np.float64(0.0), pvalue=np.float64(3.4881726...e-08))

    Two-sample t-test: testing for difference across populations
    ............................................................
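The old expected value, about 1.82e-12, matches the exact two-sided p-value 2 / 2**40 for 40 same-sign differences, while the new value is consistent with the normal approximation, so the change appears to reflect a different default method selection in scipy.stats.wilcoxon. Pinning the method makes the output independent of that default. A sketch on synthetic data, since data['VIQ'] comes from the tutorial's brain-size dataset and is not part of this hunk:

    import numpy as np
    import scipy as sp
    import scipy.stats  # explicit import, independent of SciPy's lazy submodule loading

    rng = np.random.default_rng(0)
    # Hypothetical stand-in for data['VIQ']: 40 continuous values, all positive.
    viq = rng.uniform(80.0, 150.0, size=40)

    # method="exact" uses the exact signed-rank distribution regardless of the
    # version-dependent default ("auto" may pick a normal approximation instead).
    res = sp.stats.wilcoxon(viq, method="exact")
    # With all values positive the statistic is 0 and the exact p-value is 2 / 2**40.
    print(res.statistic, res.pvalue)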
