Definition
ID3 Algorithm
Input: training dataset $D=\{(x_1,y_1),(x_2,y_2),\cdots,(x_N,y_N)\}$, feature set $A$, threshold $\varepsilon$
Output: decision tree $T$
(1) If all instances in $D$ belong to the same class $C_k$, then $T$ is a single-node tree; take $C_k$ as the class label of that node and return $T$;
(2) If $A=\varnothing$, then $T$ is a single-node tree; take the class $C_k$ with the most instances in $D$ as the class label of that node and return $T$;
(3) Otherwise, compute the information gain of each feature in $A$ with respect to $D$ and select the feature $A_g$ with the largest information gain;
(4) If the information gain of $A_g$ is smaller than the threshold $\varepsilon$, set $T$ to a single-node tree; take the class $C_k$ with the most instances in $D$ as the class label of that node and return $T$;
(5) Otherwise, for every possible value $a_i$ of $A_g$, split $D$ into non-empty subsets $D_i$ according to $A_g=a_i$; take the class with the most instances in $D_i$ as the label, build the child nodes, and let $T$ be the tree formed by the node and its children; return $T$;
(6) For the $i$-th child node, take $D_i$ as the training set and $A-\{A_g\}$ as the feature set, and recursively call steps (1)-(5) to obtain the subtree $T_i$; return $T_i$.
Input Space
$T=\{(x_1,y_1),(x_2,y_2),\dots,(x_N,y_N)\}$
```python
import numpy as np

def loadData(fileName, lines=60000):
    '''
    Load the data file
    :param fileName: path of the file to load
                     (download: https://download.csdn.net/download/nanxiaotao/89720991)
    :return: data list and label list
    '''
    # containers for the data and the labels
    dataArr = []; labelArr = []
    # open the file
    fr = open(fileName)
    # iterate over every line of the file
    for line in fr.readlines():
        curLine = line.strip().split(',')
        # binarize each pixel: values greater than 128 become 1, the rest 0
        dataArr.append([int(int(num) > 128) for num in curLine[1:]])
        # the first column is the label
        labelArr.append(int(curLine[0]))
    # return the data and the labels
    return dataArr, labelArr
```
```python
# load the training set
trainDataList, trainLabelList = loadData('../Mnist/mnist_train.csv')
np.shape(trainDataList)
# information-gain threshold epsilon
Epsilon = 0.1
```
Feature Space
```python
trainDataList[0][0:784]
```
Statistical Learning Method
Model
Decision tree $T$
Strategy
$\max(g(D,A))$
Algorithm
$H(D)=-\sum_{k=1}^{K}\dfrac{\left|C_k\right|}{\left|D\right|}\log_2\dfrac{\left|C_k\right|}{\left|D\right|}$
```python
def calc_H_D(trainLabelArr):
    '''
    Compute the empirical entropy H(D) of the dataset D
    :param trainLabelArr: label array of the current dataset
    :return: empirical entropy
    '''
    # initialize to 0
    H_D = 0
    # collect the distinct labels that actually appear
    trainLabelSet = set([label for label in trainLabelArr])
    # iterate over every label that appears
    for i in trainLabelSet:
        # p = |C_k| / |D|
        p = trainLabelArr[trainLabelArr == i].size / trainLabelArr.size
        # accumulate each term of the empirical entropy
        H_D += -1 * p * np.log2(p)
    # return the empirical entropy
    return H_D
```
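As a quick sanity check (with toy labels made up here for illustration, not part of the MNIST data), an evenly split two-class set should have an entropy of exactly 1 bit, and a pure set an entropy of 0:

```python
print(calc_H_D(np.array([0, 0, 1, 1])))  # 1.0: two classes, each with probability 0.5
print(calc_H_D(np.array([1, 1, 1, 1])))  # 0.0: a pure set has zero entropy
```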
$H(D|A)=\sum_{i=1}^{n}\dfrac{\left|D_i\right|}{\left|D\right|}H(D_i)$
```python
def calcH_D_A(trainDataArr_DevFeature, trainLabelArr):
    '''
    Compute the empirical conditional entropy H(D|A)
    :param trainDataArr_DevFeature: the single feature column sliced out of the data array
    :param trainLabelArr: label array
    :return: empirical conditional entropy
    '''
    # initialize to 0
    H_D_A = 0
    # put the feature column into a set so the number of values this feature can take is known
    trainDataSet = set([label for label in trainDataArr_DevFeature])
    # iterate over every feature value and compute each term of the conditional entropy
    for i in trainDataSet:
        # |Di| / |D| : trainDataArr_DevFeature[trainDataArr_DevFeature == i].size / trainDataArr_DevFeature.size
        # H(Di)      : calc_H_D(trainLabelArr[trainDataArr_DevFeature == i])
        H_D_A += trainDataArr_DevFeature[trainDataArr_DevFeature == i].size / trainDataArr_DevFeature.size \
                 * calc_H_D(trainLabelArr[trainDataArr_DevFeature == i])
    # return the empirical conditional entropy
    return H_D_A
```
$g(D,A)=H(D)-H(D|A)$
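The two helpers above can be combined to check this formula on made-up toy arrays (illustrative only): a feature whose value determines the class perfectly has $H(D|A)=0$, so its gain equals $H(D)$:

```python
featureCol = np.array([0, 0, 1, 1])  # one feature column over four toy samples
labels     = np.array([0, 0, 1, 1])  # class labels of the same samples
H_D   = calc_H_D(labels)               # 1.0
H_D_A = calcH_D_A(featureCol, labels)  # 0.0: every subset D_i is pure
print(H_D - H_D_A)                     # g(D,A) = 1.0
```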
$\max(g(D,A))$
```python
def majorClass(labelArr):
    '''
    Find the label that occurs most often in the current label set
    :param labelArr: label list
    :return: the majority label
    '''
    # dictionary for counting each label
    classDict = {}
    # iterate over all labels
    for i in range(len(labelArr)):
        if labelArr[i] in classDict.keys():
            # the label is already in the dictionary, increment its count
            classDict[labelArr[i]] += 1
        else:
            # first occurrence of this label, initialize its count to 1
            classDict[labelArr[i]] = 1
    # sort the dictionary by count in descending order
    classSort = sorted(classDict.items(), key=lambda x: x[1], reverse=True)
    # return the label of the first entry, i.e. the majority label
    return classSort[0][0]
```
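A minimal check with made-up labels (label 3 occurs most often, so it should be returned):

```python
print(majorClass([3, 1, 3, 2, 3]))  # 3
```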
```python
def getSubDataArr(trainDataArr, trainLabelArr, A, a):
    '''
    Build the subset of the data and labels after splitting
    :param trainDataArr: data list to split
    :param trainLabelArr: label list to split
    :param A: index of the feature to remove
    :param a: a sample is kept when data[A] == a
    :return: new data list and label list
    '''
    # data to return
    retDataArr = []
    # labels to return
    retLabelArr = []
    # iterate over every sample in the current data
    for i in range(len(trainDataArr)):
        # keep the sample if its feature A takes the specified value a
        if trainDataArr[i][A] == a:
            # drop feature A from the sample and append it to the returned data
            retDataArr.append(trainDataArr[i][0:A] + trainDataArr[i][A+1:])
            # append the sample's label to the returned labels
            retLabelArr.append(trainLabelArr[i])
    # return the new data and labels
    return retDataArr, retLabelArr
```
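A small check on a made-up two-feature dataset: keep the samples whose feature 0 equals 0 and drop that feature column from the kept samples:

```python
subData, subLabels = getSubDataArr([[0, 1], [1, 1], [0, 0]], [5, 8, 5], A=0, a=0)
print(subData)    # [[1], [0]]
print(subLabels)  # [5, 5]
```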
```python
def calcBestFeature(trainDataList, trainLabelList):
    '''
    Find the feature with the largest information gain
    :param trainDataList: data list
    :param trainLabelList: label list
    :return: the feature with the largest information gain and that gain
    '''
    # convert the data and labels to numpy arrays
    trainDataArr = np.array(trainDataList)
    trainLabelArr = np.array(trainLabelList)
    # number of features, i.e. the width of the data array
    featureNum = trainDataArr.shape[1]
    # initialize the maximum information gain
    maxG_D_A = -1
    # initialize the feature with the maximum information gain
    maxFeature = -1
    # 1. compute the empirical entropy H(D) of the dataset
    H_D = calc_H_D(trainLabelArr)
    # iterate over every feature
    for feature in range(featureNum):
        # slice out the current feature column
        trainDataArr_DevideByFeature = np.array(trainDataArr[:, feature].flat)
        # information gain g(D,A) = H(D) - H(D|A)
        G_D_A = H_D - calcH_D_A(trainDataArr_DevideByFeature, trainLabelArr)
        # keep track of the largest information gain and the corresponding feature
        if G_D_A > maxG_D_A:
            maxG_D_A = G_D_A
            maxFeature = feature
    return maxFeature, maxG_D_A
```
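A small check on a made-up binary dataset (illustrative only): feature 0 predicts the label perfectly while feature 1 carries no information, so feature 0 should be chosen with a gain of 1.0:

```python
toyData   = [[0, 1], [1, 1], [0, 0], [1, 0]]
toyLabels = [0, 1, 0, 1]
bestFeature, bestGain = calcBestFeature(toyData, toyLabels)
print(bestFeature, bestGain)  # 0 1.0
```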
Building the Decision Tree $T$
```python
def createTree(*dataSet):
    '''
    Recursively build the decision tree
    :param dataSet: tuple (data list, label list)
    :return: a new subtree, or the class value of a leaf node
    '''
    trainDataList = dataSet[0][0]
    trainLabelList = dataSet[0][1]
    # (1) if all instances belong to the same class, return that class as a leaf
    classDict = {i for i in trainLabelList}
    if len(classDict) == 1:
        return trainLabelList[0]
    # (2) if the feature set is empty, return the majority class of the current labels
    if len(trainDataList[0]) == 0:
        return majorClass(trainLabelList)
    # (3) select the feature Ag with the largest information gain
    Ag, EpsilonGet = calcBestFeature(trainDataList, trainLabelList)
    # (4) if the information gain of Ag is below the threshold Epsilon, make T a
    #     single-node tree labeled with the majority class Ck of D and return T
    if EpsilonGet < Epsilon:
        return majorClass(trainLabelList)
    # (5) otherwise, for each possible value ai of Ag, split D into non-empty subsets Di
    #     according to Ag = ai, build the child nodes, and return the tree T formed by
    #     the node and its children
    treeDict = {Ag: {}}
    # the features are binary here, so build the 0 branch and the 1 branch;
    # getSubDataArr removes the chosen feature and returns the new data and labels
    treeDict[Ag][0] = createTree(getSubDataArr(trainDataList, trainLabelList, Ag, 0))
    treeDict[Ag][1] = createTree(getSubDataArr(trainDataList, trainLabelList, Ag, 1))
    return treeDict
```
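The returned tree is a nested dictionary of the form {feature index: {feature value: subtree or class label}}. A tiny run on made-up binary data (this assumes Epsilon has been set as above and is only meant to show the structure):

```python
toyTree = createTree(([[0, 1], [1, 1], [0, 0], [1, 0]], [0, 1, 0, 1]))
print(toyTree)  # {0: {0: 0, 1: 1}}: split on feature 0, both branches are pure leaves
```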
```python
# load the test set
testDataList, testLabelList = loadData('../Mnist/mnist_test.csv')

# build the decision tree
tree = createTree((trainDataList, trainLabelList))
print('tree is:', tree)
```
Hypothesis Space
$\left\{f \mid f(x)=\max(g(D,A))\right\}$
Output Space
$y=\{c_1,c_2,\cdots,c_K\}$
Model Evaluation
Training Error
```python
testDataList, testLabelList = loadData('../Mnist/mnist_test.csv')
np.shape(testDataList)
```
```python
def predict(testDataList, tree):
    '''
    Predict the label of one sample
    :param testDataList: the sample, as a list of feature values
    :param tree: the decision tree
    :return: the predicted class
    '''
    while True:
        # each internal node is a dict with a single entry {feature index: branches}
        (key, value), = tree.items()
        # if the value at this node is a dict, keep walking down the tree
        if type(tree[key]).__name__ == 'dict':
            dataVal = testDataList[key]
            # remove the used feature so later indices match the reduced feature set
            del testDataList[key]
            # step into the branch selected by the sample's feature value
            tree = value[dataVal]
            if type(tree).__name__ == 'int':
                # reached a leaf, return its class value
                return tree
        else:
            # the value is not a dict, so it is the class value
            return value
```
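A quick check with a hand-built toy tree (not the MNIST tree): it splits on feature 0, returning class 5 on the 0 branch and class 7 on the 1 branch. Note that predict deletes the used feature from the sample, so pass a copy if the sample is needed again afterwards:

```python
toyTree = {0: {0: 5, 1: 7}}
print(predict([1, 0], toyTree))  # 7
print(predict([0, 1], toyTree))  # 5
```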
```python
def model_test(testDataList, testLabelList, tree):
    '''
    Measure the accuracy on the test set
    :param testDataList: test data list
    :param testLabelList: test label list
    :param tree: the tree built from the training set
    :return: accuracy
    '''
    # number of misclassified samples
    errorCnt = 0
    # predict every test sample and count the errors
    for i in range(len(testDataList)):
        if testLabelList[i] != predict(testDataList[i], tree):
            errorCnt += 1
    # return the accuracy
    return 1 - errorCnt / len(testDataList)
```
```python
accur = model_test(testDataList, testLabelList, tree)
print('the accur is:', accur)
```