Linear Discriminant Analysis

This node has been automatically generated by wrapping the ``sklearn.discriminant_analysis.LinearDiscriminantAnalysis`` class from the ``sklearn`` library. The wrapped instance can be accessed through the ``scikits_alg`` attribute.

A classifier with a linear decision boundary, generated by fitting class conditional densities to the data and using Bayes' rule.

The model fits a Gaussian density to each class, assuming that all classes share the same covariance matrix.

The fitted model can also be used to reduce the dimensionality of the input by projecting it to the most discriminative directions.

.. versionadded:: 0.17
   *LinearDiscriminantAnalysis*.

.. versionchanged:: 0.17
   The deprecated :class:`lda.LDA` has been moved to *LinearDiscriminantAnalysis*.

**Parameters**

solver : string, optional
    Solver to use, possible values:

    - 'svd': Singular value decomposition (default). Does not compute the
      covariance matrix, therefore this solver is recommended for data with
      a large number of features.
    - 'lsqr': Least squares solution, can be combined with shrinkage.
    - 'eigen': Eigenvalue decomposition, can be combined with shrinkage.

shrinkage : string or float, optional
    Shrinkage parameter, possible values:

    - None: no shrinkage (default).
    - 'auto': automatic shrinkage using the Ledoit-Wolf lemma.
    - float between 0 and 1: fixed shrinkage parameter.

    Note that shrinkage works only with the 'lsqr' and 'eigen' solvers.

priors : array, optional, shape (n_classes,)
    Class priors.

n_components : int, optional
    Number of components (<= n_classes - 1) for dimensionality reduction.

store_covariance : bool, optional
    Additionally compute the class covariance matrix (default False).

    .. versionadded:: 0.17

tol : float, optional
    Threshold used for rank estimation in the SVD solver.

    .. versionadded:: 0.17

**Attributes**

``coef_`` : array, shape (n_features,) or (n_classes, n_features)
    Weight vector(s).

``intercept_`` : array, shape (n_classes,)
    Intercept term.

``covariance_`` : array-like, shape (n_features, n_features)
    Covariance matrix (shared by all classes).

``explained_variance_ratio_`` : array, shape (n_components,)
    Percentage of variance explained by each of the selected components.
    If ``n_components`` is not set, all components are stored and the
    explained variances sum to 1.0. Only available when the 'eigen' solver
    is used.

``means_`` : array-like, shape (n_classes, n_features)
    Class means.

``priors_`` : array-like, shape (n_classes,)
    Class priors (sum to 1).

``scalings_`` : array-like, shape (rank, n_classes - 1)
    Scaling of the features in the space spanned by the class centroids.

``xbar_`` : array-like, shape (n_features,)
    Overall mean.

``classes_`` : array-like, shape (n_classes,)
    Unique class labels.

**See also**

``sklearn.discriminant_analysis.QuadraticDiscriminantAnalysis`` : Quadratic Discriminant Analysis

**Notes**

The default solver is 'svd'. It can perform both classification and transform, and it does not rely on the calculation of the covariance matrix. This can be an advantage in situations where the number of features is large. However, the 'svd' solver cannot be used with shrinkage.

The 'lsqr' solver is an efficient algorithm that only works for classification. It supports shrinkage.

The 'eigen' solver is based on optimizing the ratio of between-class scatter to within-class scatter. It can be used for both classification and transform, and it supports shrinkage. However, the 'eigen' solver needs to compute the covariance matrix, so it might not be suitable for situations with a high number of features.

**Examples**

>>> import numpy as np
>>> from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = LinearDiscriminantAnalysis()
>>> clf.fit(X, y)
LinearDiscriminantAnalysis(n_components=None, priors=None, shrinkage=None,
              solver='svd', store_covariance=False, tol=0.0001)
>>> print(clf.predict([[-0.8, -1]]))
[1]
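The notes above distinguish what each solver can do. The sketch below illustrates the two main use cases on made-up toy data: classification with shrinkage (which requires the 'lsqr' or 'eigen' solver) and dimensionality reduction via transform (available with 'svd' and 'eigen'). All calls are standard scikit-learn API; the data and parameter choices are only for illustration::

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Toy three-class data: three Gaussian blobs in 5 dimensions.
    rng = np.random.RandomState(0)
    X = np.vstack([rng.randn(20, 5) + offset for offset in (0.0, 3.0, 6.0)])
    y = np.repeat([0, 1, 2], 20)

    # Shrinkage regularizes the covariance estimate; it needs 'lsqr' or 'eigen'.
    clf = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
    clf.fit(X, y)
    print(clf.predict(X[:3]))          # expected: [0 0 0]

    # Dimensionality reduction: at most n_classes - 1 = 2 discriminative axes.
    lda = LinearDiscriminantAnalysis(solver='svd', n_components=2)
    X_2d = lda.fit(X, y).transform(X)
    print(X_2d.shape)                  # (60, 2)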
**Instance Methods**

Inherited from ClassifierCumulator, ClassifierNode, and Node (summary table omitted; see the method details below).
**Properties**

Inherited from Node:

- ``_train_seq`` : List of tuples
- ``dtype`` : dtype
- ``input_dim`` : Input dimensions
- ``output_dim`` : Output dimensions
- ``supported_dtypes`` : Supported dtypes
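These properties are set through the usual MDP Node interface once the node is trained. A minimal sketch of driving this node the MDP way follows; the node class name is assumed from MDP's naming convention for wrapped scikit-learn estimators and, like the exact call sequence, should be treated as an assumption rather than verbatim API::

    import numpy as np
    import mdp

    X = np.array([[-1., -1.], [-2., -1.], [-3., -2.], [1., 1.], [2., 1.], [3., 2.]])
    y = np.array([1, 1, 1, 2, 2, 2])

    # Class name assumed from MDP's convention for wrapped sklearn estimators.
    node = mdp.nodes.LinearDiscriminantAnalysisScikitsLearnNode()
    node.train(X, y)        # classifier nodes take data plus labels
    node.stop_training()    # triggers the wrapped fit()
    print(node.label(np.array([[-0.8, -1.]])))  # expected: [1]
    print(node.input_dim, node.dtype)           # Node properties listed above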
**Method Details**

Transform the data and labels lists to array objects and reshape them.
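This summary describes the cumulator step: data and label chunks collected over repeated train() calls are stacked into single arrays before the wrapped estimator is fit. A minimal NumPy sketch of that kind of reshaping (the variable names are illustrative, not MDP internals)::

    import numpy as np

    # Chunks accumulated over several train() calls (illustrative data).
    data_chunks = [np.array([[-1., -1.], [-2., -1.]]), np.array([[1., 1.]])]
    label_chunks = [[1, 1], [2]]

    # Stack data row-wise; flatten the label lists into one 1-D array.
    x = np.concatenate(data_chunks, axis=0)                       # shape (3, 2)
    labels = np.concatenate([np.ravel(c) for c in label_chunks])  # shape (3,)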
Predict class labels for samples in X.

This node has been automatically generated by wrapping the ``sklearn.discriminant_analysis.LinearDiscriminantAnalysis`` class from the ``sklearn`` library. The wrapped instance can be accessed through the ``scikits_alg`` attribute.

**Parameters**

X : array-like, shape (n_samples, n_features)
    Samples.

**Returns**

C : array, shape (n_samples,)
    Predicted class label per sample.
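A short usage sketch of the wrapped predict call, reusing the toy data from the class example above (standard scikit-learn API)::

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
    y = np.array([1, 1, 1, 2, 2, 2])
    clf = LinearDiscriminantAnalysis().fit(X, y)

    # One predicted class label per input row.
    print(clf.predict(np.array([[-0.8, -1.], [2.5, 1.5]])))  # expected: [1 2]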
Fit the LinearDiscriminantAnalysis model according to the given training data and parameters.

This node has been automatically generated by wrapping the ``sklearn.discriminant_analysis.LinearDiscriminantAnalysis`` class from the ``sklearn`` library. The wrapped instance can be accessed through the ``scikits_alg`` attribute.

.. versionchanged:: 0.17
   The deprecated *store_covariance* parameter has been moved to the main constructor.

.. versionchanged:: 0.17
   The deprecated *tol* parameter has been moved to the main constructor.

**Parameters**

X : array-like, shape (n_samples, n_features)
    Training data.

y : array, shape (n_samples,)
    Target values.
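A brief sketch of fitting with explicit class priors and inspecting the fitted attributes documented above (toy data as in the class example)::

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
    y = np.array([1, 1, 1, 2, 2, 2])

    # Explicit priors must sum to 1; both classes weighted equally here.
    clf = LinearDiscriminantAnalysis(priors=np.array([0.5, 0.5]))
    clf.fit(X, y)

    print(clf.classes_)  # [1 2]
    print(clf.priors_)   # [ 0.5  0.5]
    print(clf.means_)    # per-class feature means, shape (2, 2)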