An extra-trees classifier.

This node has been automatically generated by wrapping the
``sklearn.ensemble.forest.ExtraTreesClassifier`` class from the ``sklearn``
library. The wrapped instance can be accessed through the ``scikits_alg``
attribute.

This class implements a meta estimator that fits a number of randomized
decision trees (a.k.a. extra-trees) on various sub-samples of the dataset
and uses averaging to improve the predictive accuracy and control
over-fitting.

Read more in the :ref:`User Guide <forest>`.

**Parameters**

n_estimators : integer, optional (default=10)
    The number of trees in the forest.

criterion : string, optional (default="gini")
    The function to measure the quality of a split. Supported criteria are
    "gini" for the Gini impurity and "entropy" for the information gain.
    Note: this parameter is tree-specific.

max_features : int, float, string or None, optional (default="auto")
    The number of features to consider when looking for the best split:

    - If int, then consider `max_features` features at each split.
    - If float, then `max_features` is a percentage and
      `int(max_features * n_features)` features are considered at each split.
    - If "auto", then `max_features=sqrt(n_features)`.
    - If "sqrt", then `max_features=sqrt(n_features)`.
    - If "log2", then `max_features=log2(n_features)`.
    - If None, then `max_features=n_features`.

    Note: the search for a split does not stop until at least one valid
    partition of the node samples is found, even if it requires effectively
    inspecting more than ``max_features`` features.
    Note: this parameter is tree-specific.

max_depth : integer or None, optional (default=None)
    The maximum depth of the tree. If None, then nodes are expanded until
    all leaves are pure or until all leaves contain less than
    min_samples_split samples. Ignored if ``max_leaf_nodes`` is not None.
    Note: this parameter is tree-specific.

min_samples_split : integer, optional (default=2)
    The minimum number of samples required to split an internal node.
    Note: this parameter is tree-specific.

min_samples_leaf : integer, optional (default=1)
    The minimum number of samples in newly created leaves. A split is
    discarded if, after the split, one of the leaves would contain fewer
    than ``min_samples_leaf`` samples.
    Note: this parameter is tree-specific.

min_weight_fraction_leaf : float, optional (default=0.)
    The minimum weighted fraction of the input samples required to be at a
    leaf node.
    Note: this parameter is tree-specific.

max_leaf_nodes : int or None, optional (default=None)
    Grow trees with ``max_leaf_nodes`` in best-first fashion. Best nodes
    are defined as relative reduction in impurity. If None, the number of
    leaf nodes is unlimited. If not None, ``max_depth`` will be ignored.
    Note: this parameter is tree-specific.

bootstrap : boolean, optional (default=False)
    Whether bootstrap samples are used when building trees.

oob_score : bool
    Whether to use out-of-bag samples to estimate the generalization error.

n_jobs : integer, optional (default=1)
    The number of jobs to run in parallel for both `fit` and `predict`.
    If -1, then the number of jobs is set to the number of cores.

random_state : int, RandomState instance or None, optional (default=None)
    If int, random_state is the seed used by the random number generator;
    if RandomState instance, random_state is the random number generator;
    if None, the random number generator is the RandomState instance used
    by `np.random`.

verbose : int, optional (default=0)
    Controls the verbosity of the tree building process.

warm_start : bool, optional (default=False)
    When set to ``True``, reuse the solution of the previous call to fit
    and add more estimators to the ensemble; otherwise, just fit a whole
    new forest.

class_weight : dict, list of dicts, "balanced", "balanced_subsample" or None, optional
    Weights associated with classes in the form ``{class_label: weight}``.
    If not given, all classes are assumed to have weight one. For
    multi-output problems, a list of dicts can be provided in the same
    order as the columns of y.

    The "balanced" mode uses the values of y to automatically adjust
    weights inversely proportional to class frequencies in the input data
    as ``n_samples / (n_classes * np.bincount(y))``.

    The "balanced_subsample" mode is the same as "balanced" except that
    weights are computed based on the bootstrap sample for every tree
    grown.

    For multi-output, the weights of each column of y will be multiplied.

    Note that these weights will be multiplied with sample_weight (passed
    through the fit method) if sample_weight is specified.

**Attributes**

``estimators_`` : list of DecisionTreeClassifier
    The collection of fitted sub-estimators.

``classes_`` : array of shape = [n_classes] or a list of such arrays
    The class labels (single output problem), or a list of arrays of class
    labels (multi-output problem).

``n_classes_`` : int or list
    The number of classes (single output problem), or a list containing
    the number of classes for each output (multi-output problem).

``feature_importances_`` : array of shape = [n_features]
    The feature importances (the higher, the more important the feature).

``n_features_`` : int
    The number of features when ``fit`` is performed.

``n_outputs_`` : int
    The number of outputs when ``fit`` is performed.

``oob_score_`` : float
    Score of the training dataset obtained using an out-of-bag estimate.

``oob_decision_function_`` : array of shape = [n_samples, n_classes]
    Decision function computed with out-of-bag estimate on the training
    set. If n_estimators is small, it is possible that a data point was
    never left out during the bootstrap. In this case,
    `oob_decision_function_` might contain NaN.

**References**

.. [1] P. Geurts, D. Ernst, and L. Wehenkel, "Extremely randomized trees",
   Machine Learning, 63(1), 3-42, 2006.

**See also**

sklearn.tree.ExtraTreeClassifier : Base classifier for this ensemble.

RandomForestClassifier : Ensemble classifier based on trees with optimal splits.
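A minimal usage sketch. The node name is assumed to follow MDP's convention
for these auto-generated wrappers
(``mdp.nodes.ExtraTreesClassifierScikitsLearnNode``); the toy data and
parameter choices are illustrative only::

    import numpy as np
    import mdp

    # toy two-class problem: 100 samples, 4 features
    rng = np.random.RandomState(0)
    x = np.vstack([rng.randn(50, 4) - 1.0, rng.randn(50, 4) + 1.0])
    labels = np.array([0] * 50 + [1] * 50)

    node = mdp.nodes.ExtraTreesClassifierScikitsLearnNode(
        n_estimators=50, max_features="sqrt", random_state=0)
    node.train(x, labels)
    node.stop_training()  # fits the wrapped ExtraTreesClassifier

    print(node.label(x[:5]))                      # predicted class labels
    print(node.scikits_alg.feature_importances_)  # wrapped sklearn instance

For this balanced toy set, ``class_weight="balanced"`` would assign each
class the weight ``n_samples / (n_classes * np.bincount(y))``, i.e.
``100 / (2 * 50) = 1.0`` per class.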
Methods: inherited from ClassifierCumulator, ClassifierNode, and Node (see the respective base-class documentation).
Instance Variables (inherited from Node):

- _train_seq : List of tuples
- dtype : dtype
- input_dim : Input dimensions
- output_dim : Output dimensions
- supported_dtypes : Supported dtypes
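These attributes behave as on any MDP node. A brief sketch, again assuming
the auto-generated node name
``mdp.nodes.ExtraTreesClassifierScikitsLearnNode``::

    import mdp

    node = mdp.nodes.ExtraTreesClassifierScikitsLearnNode(n_estimators=10)

    # dimensions and dtype are unset (None) until given explicitly or
    # inferred from the first training data
    print(node.input_dim, node.output_dim, node.dtype)

    # dtypes this node accepts for its input arrays
    print(node.supported_dtypes)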
Transform the data and labels lists to array objects and reshape them.
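In outline this is a concatenate-and-reshape step over the batches collected
by successive ``train`` calls. A minimal sketch of the idea with hypothetical
batch lists (the real attribute names inside ``ClassifierCumulator`` may
differ)::

    import numpy as np

    # hypothetical per-call batches accumulated during training
    data_batches = [np.ones((5, 3)), np.zeros((7, 3))]
    label_batches = [[0] * 5, [1] * 7]

    x = np.concatenate(data_batches)  # -> shape (12, 3)
    labels = np.concatenate([np.asarray(l).reshape(-1)
                             for l in label_batches])  # -> shape (12,)
    print(x.shape, labels.shape)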
Predict class for X.

This node has been automatically generated by wrapping the
``sklearn.ensemble.forest.ExtraTreesClassifier`` class from the ``sklearn``
library. The wrapped instance can be accessed through the ``scikits_alg``
attribute.

The predicted class of an input sample is a vote by the trees in the forest,
weighted by their probability estimates. That is, the predicted class is the
one with the highest mean probability estimate across the trees.
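The "highest mean probability estimate" rule can be reproduced directly from
the fitted sub-estimators. A sketch against the wrapped sklearn class, using
illustrative toy data from ``make_classification``::

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import ExtraTreesClassifier

    X, y = make_classification(n_samples=200, n_features=8, random_state=0)
    clf = ExtraTreesClassifier(n_estimators=25, random_state=0).fit(X, y)

    # average each tree's class-probability estimates, then take the argmax
    proba = np.mean([tree.predict_proba(X) for tree in clf.estimators_],
                    axis=0)
    manual = clf.classes_[np.argmax(proba, axis=1)]

    assert (manual == clf.predict(X)).all()  # matches the built-in vote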
Build a forest of trees from the training set (X, y).

This node has been automatically generated by wrapping the
``sklearn.ensemble.forest.ExtraTreesClassifier`` class from the ``sklearn``
library. The wrapped instance can be accessed through the ``scikits_alg``
attribute.
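After fitting, the attributes documented above become available on the
wrapped estimator. A sketch of the out-of-bag machinery, which requires
``bootstrap=True`` together with ``oob_score=True`` (toy data is
illustrative)::

    from sklearn.datasets import make_classification
    from sklearn.ensemble import ExtraTreesClassifier

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    clf = ExtraTreesClassifier(n_estimators=100, bootstrap=True,
                               oob_score=True, random_state=0).fit(X, y)

    print(clf.oob_score_)                    # accuracy on out-of-bag samples
    print(clf.oob_decision_function_.shape)  # (n_samples, n_classes)
    print(clf.feature_importances_.sum())    # importances sum to 1.0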