CART (Classification and Regression Trees) is a binary tree that can be used for both regression and classification.
The classification tree is covered below:
class sklearn.tree.DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)
Parameters:
1)criterion : string, optional (default="gini")
The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain.
Measures the quality of a split; you can choose the Gini impurity 'gini' or the information entropy 'entropy'.
The Gini impurity and entropy of a node t are computed as follows, where p(k|t) is the proportion of class-k samples at node t:
Gini(t) = 1 - sum_k p(k|t)^2
Entropy(t) = -sum_k p(k|t) * log2(p(k|t))
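As a quick check of these formulas, here is a minimal Python sketch (NumPy only, with a made-up class distribution) that computes both measures for a single node:

import numpy as np

def gini(p):
    # Gini(t) = 1 - sum_k p(k|t)^2
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    # Entropy(t) = -sum_k p(k|t) * log2(p(k|t)), skipping zero proportions
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A node holding 80% class-0 and 20% class-1 samples (illustrative values):
print(gini([0.8, 0.2]))     # 0.32
print(entropy([0.8, 0.2]))  # ~0.72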
2)splitter : string, optional (default="best")
The strategy used to choose the split at each node. Supported strategies are “best” to choose the best split and “random” to choose the best random split.
Strategy for choosing the split point; two options are available, 'best' and 'random'.
('best' evaluates every candidate threshold of the features under consideration; 'random' draws a random threshold per feature and keeps the best of those, which adds extra randomization. See the sketch below.)
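A small sketch of the two strategies (the iris dataset and random_state value are illustrative choices, not from the original text):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 'best' scans all candidate thresholds; 'random' samples thresholds at random
# and keeps the best of those, so the trees generally differ in shape.
for strategy in ('best', 'random'):
    clf = DecisionTreeClassifier(splitter=strategy, random_state=0).fit(X, y)
    print(strategy, clf.tree_.node_count)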
3)max_depth : int or None, optional (default=None)
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
Sets the maximum depth of the decision tree. If None, growth stops when:
1) every leaf node has impurity 0, or
2) every leaf node contains fewer samples than 'min_samples_split'.
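For instance, the following sketch (iris again, arbitrary settings) compares an unrestricted tree against one capped at depth 2:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# With max_depth=None the tree grows until the stopping rules above kick in;
# max_depth=2 cuts growth off early, trading training fit for simplicity.
full = DecisionTreeClassifier(random_state=0).fit(X, y)
shallow = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(full.tree_.max_depth, shallow.tree_.max_depth)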
4)min_samples_split : int, float, optional (default=2)
The minimum number of samples required to split an internal node:
If int, then consider min_samples_split as the minimum number.
If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.
Changed in version 0.18: Added float values for fractions.
The minimum number of samples an internal node must hold to be split; an int or a float may be given. If a node holds fewer samples than this value, it is not split further and becomes a leaf.
If an int d is given, min_samples_split = d;
if a float f is given, min_samples_split = ceil(f * N), where N is the total number of samples.
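A short sketch of both forms (iris has N = 150 samples, so the fraction 0.1 resolves to ceil(0.1 * 150) = 15; all values here are illustrative):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # N = 150

# Integer form: a node needs at least 20 samples to be split.
clf_int = DecisionTreeClassifier(min_samples_split=20, random_state=0).fit(X, y)
# Float form: the fraction is turned into a count via ceil(f * N).
clf_frac = DecisionTreeClassifier(min_samples_split=0.1, random_state=0).fit(X, y)
print(int(np.ceil(0.1 * len(X))))  # 15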
5)min_samples_leaf : int, float, optional (default=1)
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.
If int, then consider min_samples_leaf as the minimum number.
If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.
Changed in version 0.18: Added float values for fractions.
The minimum number of samples a leaf node must contain.
If splitting a node would leave either child with fewer than min_samples_leaf samples, the split is not allowed.
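This constraint can be checked on the fitted tree's internal arrays; a minimal sketch (parameter values chosen arbitrarily):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(min_samples_leaf=10, random_state=0).fit(X, y)

# Leaves are nodes without children; each must hold >= 10 training samples.
is_leaf = clf.tree_.children_left == -1
print(clf.tree_.n_node_samples[is_leaf].min())  # at least 10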
6)min_weight_fraction_leaf : float, optional (default=0.)
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
The minimum fraction of the total sample weight that a leaf node must hold; when sample_weight is not provided, all samples carry equal weight.
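A minimal sketch, assuming uniform sample weights purely for illustration:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
w = np.ones(len(y))  # in practice these weights would come from your data

# Every leaf must hold at least 5% of the total sample weight.
clf = DecisionTreeClassifier(min_weight_fraction_leaf=0.05, random_state=0)
clf.fit(X, y, sample_weight=w)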
7)max_features : int, float, string or None, optional (default=None)
The number of features to consider when looking for the best split:
If int, then consider max_features features at each split.
If float, then max_features is a fraction and int(max_features * n_features) features are considered at each split.
If "auto", then max_features=sqrt(n_features).
If "sqrt", then max_features=sqrt(n_features).
If "log2", then max_features=log2(n_features).
If None, then max_features=n_features.
Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires to effectively inspect more than max_features features.
The maximum number of features considered when splitting a node.
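After fitting, the resolved count is exposed as the max_features_ attribute; a quick sketch on iris, which has 4 features (values in the comment follow from the rules above):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # 4 features

for mf in ('sqrt', 'log2', None, 0.5):
    clf = DecisionTreeClassifier(max_features=mf, random_state=0).fit(X, y)
    print(mf, clf.max_features_)  # sqrt -> 2, log2 -> 2, None -> 4, 0.5 -> 2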
8)random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
The random seed / random number generator.
9)max_leaf_nodes : int or None, optional (default=None)
Grow a tree with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.
The maximum number of leaf nodes; the tree is grown best-first, splitting the node with the largest relative impurity reduction at each step.
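A quick sketch (the leaf budget of 4 is arbitrary) that counts the resulting leaves through the tree's children arrays:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Best-first growth: the node with the largest impurity reduction is split
# first, until the leaf budget is used up.
clf = DecisionTreeClassifier(max_leaf_nodes=4, random_state=0).fit(X, y)
print((clf.tree_.children_left == -1).sum())  # 4 leaves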
10)min_impurity_decrease : float, optional (default=0.)
A node will be split if this split induces a decrease of the impurity greater than or equal to this value.
The weighted impurity decrease equation is the following:
N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)
where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child.
N, N_t, N_t_R and N_t_L all refer to the weighted sum, if sample_weight is passed.
New in version 0.19.
The minimum impurity decrease: a node is split only if the split reduces the weighted impurity by at least this value.
If sample weights are set, N, N_t, N_t_R and N_t_L refer to weighted sums; otherwise they are plain sample counts.
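A direct transcription of the formula, with made-up node statistics to show the arithmetic:

def weighted_impurity_decrease(N, N_t, impurity, N_t_L, left_imp, N_t_R, right_imp):
    # N_t / N * (impurity - N_t_R / N_t * right_impurity
    #                     - N_t_L / N_t * left_impurity)
    return N_t / N * (impurity - N_t_R / N_t * right_imp
                               - N_t_L / N_t * left_imp)

# A node holding 100 of 200 samples with Gini 0.5, split into two halves
# of Gini 0.3 and 0.4 (all numbers invented for illustration):
print(weighted_impurity_decrease(200, 100, 0.5, 50, 0.3, 50, 0.4))  # 0.075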
11)min_impurity_split : float, (default=1e-7)
Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold, otherwise it is a leaf.
Deprecated since version 0.19: min_impurity_split has been deprecated in favor of min_impurity_decrease in 0.19. The default value of min_impurity_split will change from 1e-7 to 0 in 0.23 and it will be removed in 0.25. Use min_impurity_decrease instead.
The impurity threshold below which a node stops splitting and becomes a leaf. Serves the same purpose as the previous parameter; removed in version 0.25.
12)class_weight : dict, list of dicts, "balanced" or None, default=None
Weights associated with classes in the form {class_label: weight}. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y.
Note that for multioutput (including multilabel) weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1:1}, {2:5}, {3:1}, {4:1}].
The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y))
For multi-output, the weights of each column of y will be multiplied.
Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
Sets class weights; the argument may be a dict, a list of dicts, 'balanced', or None.
With a dict (or a list of dicts for multi-output problems), class weights are set manually.
With 'balanced', the model uses the formula n_samples / (n_classes * np.bincount(y)) to adjust weights automatically from the per-class sample counts, so that every class carries the same total weight: a class with few samples gets a large per-sample weight, and a class with many samples gets a small one. This is useful when the class distribution is imbalanced.
With None, all samples have the same weight by default.
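A sketch of the 'balanced' computation on a made-up imbalanced label vector; the manual dict simply mirrors the computed values:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])  # 8-vs-2 imbalance

# 'balanced' weights: n_samples / (n_classes * np.bincount(y))
weights = len(y) / (2 * np.bincount(y))
print(weights)  # [0.625 2.5] -> each rare-class sample counts 4x as much

clf_auto = DecisionTreeClassifier(class_weight='balanced')
clf_manual = DecisionTreeClassifier(class_weight={0: 0.625, 1: 2.5})  # equivalent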
13)presort : bool, optional (default=False)
Whether to presort the data to speed up the finding of best splits in fitting. For the default settings of a decision tree on large datasets, setting this to true may slow down the training process. When using either a smaller dataset or a restricted depth, this may speed up the training.
Whether to presort the samples.
That is, the samples are presorted before the nodes search for their best split points, which speeds up locating those splits; since this adds an extra step, it can speed up training on small datasets (or with a restricted depth) but tends to slow it down on large ones.
Attributes:
1)classes_ : array of shape = [n_classes] or a list of such arrays
The classes labels (single output problem), or a list of arrays of class labels (multi-output problem).
The list of class labels.
2)feature_importances_ : array of shape = [n_features]
Return the feature importances.
The feature importance scores.
3)max_features_ : int
The inferred value of max_features.
The number of features actually considered per split (the resolved value of max_features).
4)n_classes_ : int or list
The number of classes (for single output problems), or a list containing the number of classes for each output (for multi-output problems).
5)n_features_ : int
The number of features when fit is performed.
The number of features used during training.
6)n_outputs_ : int
The number of outputs when fit is performed.
7)tree_ : Tree object
The underlying Tree object. Please refer to help(sklearn.tree._tree.Tree) for attributes of Tree object and Understanding the decision tree structure for basic usage of these attributes.
返回決策樹(shù)對(duì)象(sklearn.tree._tree.Tree)