The watermelon dataset D (西瓜数据集) is as follows; the last column, 好瓜, records whether the melon is good (是) or not (否):

| 编号 (ID) | 色泽 (color) | 根蒂 (root) | 敲声 (sound) | 纹理 (texture) | 脐部 (navel) | 触感 (touch) | 好瓜 (good melon) |
|---|---|---|---|---|---|---|---|
| 1 | 青绿 | 蜷缩 | 浊响 | 清晰 | 凹陷 | 硬滑 | 是 |
| 2 | 乌黑 | 蜷缩 | 沉闷 | 清晰 | 凹陷 | 硬滑 | 是 |
| 3 | 乌黑 | 蜷缩 | 浊响 | 清晰 | 凹陷 | 硬滑 | 是 |
| 4 | 青绿 | 蜷缩 | 沉闷 | 清晰 | 凹陷 | 硬滑 | 是 |
| 5 | 浅白 | 蜷缩 | 浊响 | 清晰 | 凹陷 | 硬滑 | 是 |
| 6 | 青绿 | 稍蜷 | 浊响 | 清晰 | 稍凹 | 软粘 | 是 |
| 7 | 乌黑 | 稍蜷 | 浊响 | 稍糊 | 稍凹 | 软粘 | 是 |
| 8 | 乌黑 | 稍蜷 | 浊响 | 清晰 | 稍凹 | 硬滑 | 是 |
| 9 | 乌黑 | 稍蜷 | 沉闷 | 稍糊 | 稍凹 | 硬滑 | 否 |
| 10 | 青绿 | 硬挺 | 清脆 | 清晰 | 平坦 | 软粘 | 否 |
| 11 | 浅白 | 硬挺 | 清脆 | 模糊 | 平坦 | 硬滑 | 否 |
| 12 | 浅白 | 蜷缩 | 浊响 | 模糊 | 平坦 | 软粘 | 否 |
| 13 | 青绿 | 稍蜷 | 浊响 | 稍糊 | 凹陷 | 硬滑 | 否 |
| 14 | 浅白 | 稍蜷 | 沉闷 | 稍糊 | 凹陷 | 硬滑 | 否 |
| 15 | 乌黑 | 稍蜷 | 浊响 | 清晰 | 稍凹 | 软粘 | 否 |
| 16 | 浅白 | 蜷缩 | 浊响 | 模糊 | 平坦 | 硬滑 | 否 |
| 17 | 青绿 | 蜷缩 | 沉闷 | 稍糊 | 稍凹 | 硬滑 | 否 |

In other words, D defines a classification task: deciding whether a melon is good or bad is a binary classification problem, so $|y| = 2$ and only the two class proportions $p_1$ and $p_2$ exist.

Information entropy measures how mixed (impure) a sample set is.
Let $p_1$ be the proportion of good melons and $p_2$ the proportion of bad melons.

1. If all melons are good, then $p_1 = 1$ and $p_2 = 0$:
$$Ent(D) = -\sum_{k=1}^{|y|} p_k \log_2 p_k = -(p_1 \log_2 p_1 + p_2 \log_2 p_2) = -(1 \cdot \log_2 1 + 0 \cdot \log_2 0) = 0$$
2. If all melons are bad, then $p_1 = 0$ and $p_2 = 1$:
$$Ent(D) = -(0 \cdot \log_2 0 + 1 \cdot \log_2 1) = 0$$
So the completely pure cases, all good melons or all bad melons, give $Ent(D) = 0$ (using the convention $0 \cdot \log_2 0 = 0$).
3. If half the melons are good and half are bad, then $p_1 = p_2 = \frac{1}{2}$:
$$Ent(D) = -\left(\frac{1}{2} \log_2 \frac{1}{2} + \frac{1}{2} \log_2 \frac{1}{2}\right) = 1$$
So the most mixed case gives the maximum entropy, $Ent(D) = 1$.
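A quick numeric check of these three cases, as a minimal sketch (binary_entropy is a hypothetical helper, not part of the post's code; it applies the 0 · log₂ 0 = 0 convention explicitly, since math.log2(0) is undefined):

import math

def binary_entropy(p1):
    # Entropy of a two-class distribution (p1, 1 - p1).
    ent = 0.0
    for p in (p1, 1 - p1):
        if p > 0:  # convention: 0 * log2(0) = 0
            ent -= p * math.log2(p)
    return ent

print(binary_entropy(1.0))  # all good melons -> 0.0
print(binary_entropy(0.0))  # all bad melons  -> 0.0
print(binary_entropy(0.5))  # half and half   -> 1.0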

Let $p_k$ ($k = 1, 2, \dots, |y|$) be the proportion of class-$k$ samples in the current sample set D; the information entropy of D is then:

$$Ent(D) = -\sum_{k=1}^{|y|} p_k \log_2 p_k$$
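As a check against the dataset above: 8 of the 17 melons are good and 9 are bad, so

$$Ent(D) = -\left(\frac{8}{17} \log_2 \frac{8}{17} + \frac{9}{17} \log_2 \frac{9}{17}\right) \approx 0.998$$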

The information gain of splitting D on an attribute a with V possible values (where $D^v$ is the subset of D taking the v-th value of a) is:

$$Gain(D, a) = Ent(D) - \sum_{v=1}^{V} \frac{|D^v|}{|D|}\, Ent(D^v)$$
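As a worked example computed from the table above (not spelled out in the original post), splitting D on 色泽 gives three subsets: 青绿 (3 good, 3 bad, entropy 1.000), 乌黑 (4 good, 2 bad, entropy ≈ 0.918) and 浅白 (1 good, 4 bad, entropy ≈ 0.722), so

$$Gain(D, \text{色泽}) = 0.998 - \left(\frac{6}{17} \cdot 1.000 + \frac{6}{17} \cdot 0.918 + \frac{5}{17} \cdot 0.722\right) \approx 0.109$$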

import math
D = [
['青绿','蜷缩','浊响','清晰','凹陷','硬滑','是'],
['乌黑','蜷缩','沉闷','清晰','凹陷','硬滑','是'],
['乌黑','蜷缩','浊响','清晰','凹陷','硬滑','是'],
['青绿','蜷缩','沉闷','清晰','凹陷','硬滑','是'],
['浅白','蜷缩','浊响','清晰','凹陷','硬滑','是'],
['青绿','稍蜷','浊响','清晰','稍凹','软粘','是'],
['乌黑','稍蜷','浊响','稍糊','稍凹','软粘','是'],
['乌黑','稍蜷','浊响','清晰','稍凹','硬滑','是'],
['乌黑','稍蜷','沉闷','稍糊','稍凹','硬滑','否'],
['青绿','硬挺','清脆','清晰','平坦','软粘','否'],
['浅白','硬挺','清脆','模糊','平坦','硬滑','否'],
['浅白','蜷缩','浊响','模糊','平坦','软粘','否'],
['青绿','稍蜷','浊响','稍糊','凹陷','硬滑','否'],
['浅白','稍蜷','沉闷','稍糊','凹陷','硬滑','否'],
['乌黑','稍蜷','浊响','清晰','稍凹','软粘','否'],
['浅白','蜷缩','浊响','模糊','平坦','硬滑','否'],
['青绿','蜷缩','沉闷','稍糊','稍凹','硬滑','否']
]
A = ['色泽','根蒂','敲声','纹理','脐部','触感','好瓜']

# p_k is the proportion of class-k samples in the current sample set D (k = 1, 2, ..., |y|)
# Compute the information entropy, taking the last column of each row as the class label
def getEnt(D):
    # Map each class label k to its number of occurrences
    kMap = dict()
    for dLine in D:
        # The class label k is the last column of the row
        k = dLine[len(dLine) - 1]
        # Current count for this class
        kNum = kMap.get(k)
        if kNum is None:
            kMap[k] = 1
        else:
            kMap[k] = kNum + 1
    # Accumulate p_k * log2(p_k) over all classes
    dLen = len(D)
    rs = 0
    for kk in kMap:
        pk = kMap[kk]/dLen
        rs = rs + pk * math.log2(pk)
    return -rs


# Information gain; aIndex is the column index of the attribute a
def getGain(D,aIndex):
    # Partition D into subsets D^v by the value of attribute aIndex
    dMap = dict()
    for dLine in D:
        # Attribute value of this row
        k = dLine[aIndex]
        # Subset this row belongs to
        dChildren = dMap.get(k)
        if dChildren is None:
            dChildren = []
            dMap[k] = dChildren
        dChildren.append(dLine)
    # Weighted sum of the subset entropies
    rs = 0
    for key in dMap:
        dChildren = dMap[key]
        entx = getEnt(dChildren)
        # print(entx)  # uncomment to see the entropy of each subset
        r = len(dChildren)/len(D) * entx
        rs = rs + r
    return getEnt(D) - rs
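
A minimal usage sketch (not part of the original code): compute the entropy of the whole set and the gain of each attribute column, using the D and A lists defined above. With this dataset, 纹理 should come out with the largest gain, roughly 0.381.

# Usage sketch: overall entropy, then the information gain of each attribute.
print('Ent(D) =', getEnt(D))  # expected to be about 0.998

# The last entry of A ('好瓜') is the class label, so only the first six
# columns are attributes.
for aIndex in range(len(A) - 1):
    print(A[aIndex], '->', getGain(D, aIndex))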