@@ -666,7 +666,7 @@ dual coefficients :math:`\alpha_i` are zero for the other samples.
 These parameters can be accessed through the attributes ``dual_coef_``
 which holds the product :math:`y_i \alpha_i`, ``support_vectors_`` which
 holds the support vectors, and ``intercept_`` which holds the independent
-term :math:`b`
+term :math:`b`.


 .. note::
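As a quick illustration of the attributes this hunk documents, here is a minimal sketch (assuming scikit-learn is installed; the dataset and estimator settings are arbitrary choices for the example):

```python
# Fit a kernel SVM and inspect the fitted attributes described above.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=50, random_state=0)
clf = SVC(kernel="rbf").fit(X, y)

# dual_coef_ holds the products y_i * alpha_i, one column per support vector.
print(clf.dual_coef_.shape)
# support_vectors_ holds the support vectors themselves.
print(clf.support_vectors_.shape)
# intercept_ holds the independent term b.
print(clf.intercept_)
```

For a binary problem, ``dual_coef_`` has one row, and its number of columns matches the number of rows in ``support_vectors_``.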
@@ -675,7 +675,7 @@ term :math:`b`
 equivalence between the amount of regularization of two models depends on
 the exact objective function optimized by the model. For example, when the
 estimator used is :class:`~sklearn.linear_model.Ridge` regression,
-the relation between them is given as :math:`C = \frac{1}{alpha}`.
+the relation between them is given as :math:`C = \frac{1}{\alpha}`.


 .. dropdown:: LinearSVC
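A minimal numeric sketch of the stated relation between the two regularization parameters (the value of ``alpha`` is an arbitrary example):

```python
# Convert a Ridge-style regularization strength alpha into the
# equivalent SVM-style parameter C, using C = 1 / alpha as stated above.
alpha = 0.5
C = 1.0 / alpha
print(C)  # larger alpha (more regularization) gives smaller C
```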
@@ -801,15 +801,16 @@ used, please refer to their respective papers.

 .. [#5] Bishop, `Pattern recognition and machine learning
    <https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf>`_,
-   chapter 7 Sparse Kernel Machines
+   chapter 7 Sparse Kernel Machines.

 .. [#6] :doi:`"A Tutorial on Support Vector Regression"
    <10.1023/B:STCO.0000035301.49549.88>`
    Alex J. Smola, Bernhard Schölkopf - Statistics and Computing archive
    Volume 14 Issue 3, August 2004, p. 199-222.

 .. [#7] Schölkopf et. al `New Support Vector Algorithms
-   <https://www.stat.purdue.edu/~yuzhu/stat598m3/Papers/NewSVM.pdf>`_
+   <https://www.stat.purdue.edu/~yuzhu/stat598m3/Papers/NewSVM.pdf>`_,
+   Neural Computation 12, 1207-1245 (2000).

 .. [#8] Crammer and Singer `On the Algorithmic Implementation of Multiclass
    Kernel-based Vector Machines