
1. Incremental Learning

Incremental learning is a machine learning approach that has attracted wide attention. In incremental learning, incoming data are continuously used to extend the existing model's knowledge, i.e., to train the model further, making it a dynamic learning technique. A learning method qualifies as incremental if it satisfies the following conditions: it can extract useful information from new data; it does not need access to the original data already used to train the classifier; it retains the knowledge it has already learned; and it can handle new classes that appear in new data. Many machine learning algorithms support incremental learning, for example decision trees, rule learning, neural networks (RBF networks, Learn++, Fuzzy ARTMAP, TopoART, and IGNG), and incremental SVM. Learn++ is a supervised, ensemble-based incremental learning algorithm that can also learn new classes.

Incremental algorithms are often applied to data streams or big data, for example stock-trend prediction and user-preference analysis. In such streams, new data can be fed into the model continuously to refine it. Incremental learning has also been applied to clustering, dimensionality reduction, feature selection, data representation, reinforcement learning, and data mining. With the rapid development and wide adoption of databases and the Internet, organizations have accumulated massive amounts of data, and the volume grows quickly every day. Incremental learning makes it possible to use this newly arriving data effectively to train and further refine a model. It also helps, at the system level, to understand and imitate how the human brain learns and how biological neural networks are organized, providing a technical basis for new computational models and efficient learning algorithms.

Suppose we have 200 samples and train on 150 first and then on the remaining 50. Compared with training on all 200 at once, the difference is that during the second round the first 150 samples are no longer available, so the model fits the later data more closely. If we retrain incrementally on a schedule, data closer to the present have a larger influence on the model, which is usually what we want. But if the last batch is of very poor quality, it can overwrite what was correctly learned from earlier examples and bias the model. Likewise, if we split the data by time and train the model several passes from oldest to newest, each pass building on the previous model, we are implicitly up-weighting the later examples, as the sketch below illustrates.
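
A minimal sketch of this effect, using sklearn's SGDClassifier on synthetic data (the dataset, split sizes, and model choice here are illustrative assumptions, not part of the original example):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# 200 illustrative samples: 150 "old" + 50 "recent".
X, y = make_classification(n_samples=200, random_state=0)
X_old, y_old = X[:150], y[:150]
X_new, y_new = X[150:], y[150:]

# One-shot training on all 200 samples.
clf_all = SGDClassifier(random_state=0)
clf_all.fit(X, y)

# Incremental training: fit on the old 150, then partial_fit on the recent 50.
# The second call sees only the new batch, so the model drifts toward it.
clf_incr = SGDClassifier(random_state=0)
clf_incr.fit(X_old, y_old)
clf_incr.partial_fit(X_new, y_new)

# The two models generally end up with different weights.
print(np.linalg.norm(clf_all.coef_ - clf_incr.coef_))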

2. XGBoost

XGBoost supports two kinds of incremental training. The first keeps the existing trees unchanged and adds new trees on top of them, as in the code below. The second keeps the current tree structure fixed and recomputes the leaf weights; new trees can also be added at the same time (a sketch of this mode follows the walkthrough below).

import xgboost as xgb
from sklearn.datasets import load_digits  # training data

xgb_params_01 = {}

digits_2class = load_digits(n_class=2)
X_2class = digits_2class['data']
y_2class = digits_2class['target']

dtrain_2class = xgb.DMatrix(X_2class, label=y_2class)  # load the data
gbdt_03 = xgb.train(xgb_params_01, dtrain_2class, num_boost_round=3)  # train a 3-tree model
print(gbdt_03.get_dump())  # show the model

gbdt_03a = xgb.train(xgb_params_01, dtrain_2class, num_boost_round=7, xgb_model=gbdt_03)  # continue training from the existing model
print(gbdt_03a.get_dump())

This code uses the XGBoost library to train a gradient boosting decision tree classifier, with sklearn's load_digits dataset as training data.

load_digits loads the handwritten-digits dataset.

xgb_params_01 = {} initializes an empty dictionary for the XGBoost parameters; leaving it empty means XGBoost's default parameters are used.

digits_2class = load_digits(n_class=2) loads the digits dataset, keeping only two classes (digits 0 and 1) as training data.

X_2class = digits_2class['data'] extracts the feature data and stores it in X_2class.

y_2class = digits_2class['target'] extracts the target labels (the actual digit of each sample) and stores them in y_2class.

dtrain_2class = xgb.DMatrix(X_2class, label=y_2class) wraps the features and labels in XGBoost's DMatrix structure, the format XGBoost consumes.

gbdt_03 = xgb.train(xgb_params_01, dtrain_2class, num_boost_round=3) trains a model with the empty parameter dictionary and the training data, for 3 boosting rounds (i.e., 3 decision trees).

print(gbdt_03.get_dump()) prints the structure and weights of the trained model; get_dump() returns a list of strings, one per tree, describing its structure and leaf weights.

gbdt_03a = xgb.train(xgb_params_01, dtrain_2class, num_boost_round=7, xgb_model=gbdt_03) continues training from the 3-round model gbdt_03 for another 7 rounds, giving 10 trees in total (3 + 7).

print(gbdt_03a.get_dump()) prints the structure and weights of the continued model.
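
The second mode, refreshing the leaf weights of existing trees without changing their structure, can be sketched with XGBoost's process_type='update' and the 'refresh' updater. Treat the following as an illustrative sketch of that API rather than part of the original example:

# A sketch of the second mode: keep the tree structure of gbdt_03, but
# recompute its leaf statistics/weights on the (possibly new) data.
refresh_params = {
    'process_type': 'update',  # update existing trees instead of adding new ones
    'updater': 'refresh',      # recompute statistics for the current trees
    'refresh_leaf': 1,         # also refresh the leaf values, not just the stats
}
# num_boost_round must not exceed the number of trees in the loaded model (3 here).
gbdt_refreshed = xgb.train(refresh_params, dtrain_2class, num_boost_round=3, xgb_model=gbdt_03)
print(gbdt_refreshed.get_dump())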

3. sklearn

sklearn provides many algorithms that can learn incrementally. Not every estimator supports it, but any estimator that exposes a partial_fit method can be trained incrementally. Incremental learning also suits very large datasets: learning from small batches of data (sometimes called online learning) is the core of the approach, and it keeps only a small amount of data in memory at any time, as the loop below sketches.
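
A minimal sketch of that idea, assuming a hypothetical batch generator standing in for a data source too large for memory (the generator, batch size, and classifier choice are illustrative):

import numpy as np
from sklearn.linear_model import SGDClassifier

def batches(n_batches=20, batch_size=32, seed=0):
    """Hypothetical stand-in for reading chunks from disk or a stream."""
    rng = np.random.RandomState(seed)
    for _ in range(n_batches):
        X = rng.randn(batch_size, 10)
        y = (X[:, 0] + X[:, 1] > 0).astype(int)  # a simple separable rule
        yield X, y

clf = SGDClassifier()
all_classes = np.array([0, 1])  # partial_fit needs the full label set up front
for X_batch, y_batch in batches():
    # Only one batch lives in memory at a time.
    clf.partial_fit(X_batch, y_batch, classes=all_classes)

The examples below apply the same partial_fit pattern to several sklearn estimators.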

import numpy as np
# Finds a dictionary (a set of atoms) that can best be used to represent data using a sparse code.
from sklearn.decomposition import MiniBatchDictionaryLearning
# Linear dimensionality reduction using Singular Value Decomposition of centered data,
# keeping only the most significant singular vectors to project the data to a lower dimensional space.
from sklearn.decomposition import IncrementalPCA
# Latent Dirichlet Allocation with online variational Bayes algorithm.
from sklearn.decomposition import LatentDirichletAllocation

from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data
Y = iris.target

permutation = np.random.permutation(X.shape[0])
shuffled_X = X[permutation, :]
shuffled_Y = Y[permutation]

X_train = shuffled_X[:int(X.shape[0]*0.5), :]
Y_train = shuffled_Y[:int(X.shape[0]*0.5)]

X_incr = shuffled_X[int(X.shape[0]*0.5):int(X.shape[0]*0.7), :]
Y_incr = shuffled_Y[int(X.shape[0]*0.5):int(X.shape[0]*0.7)]

X_test = shuffled_X[int(X.shape[0]*0.7):, :]
Y_test = shuffled_Y[int(X.shape[0]*0.7):]

print("shape of X_train:{}, {}".format(X_train.shape[0], X_train.shape[1]))
print("shape of X_incr:{}, {}".format(X_incr.shape[0], X_incr.shape[1]))
print("shape of X_test:{}, {}".format(X_test.shape[0], X_test.shape[1]))

# The three estimators below are unsupervised: fit/partial_fit take only X,
# and classification accuracy does not apply to them, which is why the
# score lines are left commented out.

# MiniBatchDictionaryLearning()
model = MiniBatchDictionaryLearning()
model.fit(X_train)
# acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr)
# acc2 = model.score(X_test, Y_test)
# print("MiniBatchDictionaryLearning\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

# IncrementalPCA()
model = IncrementalPCA()
model.fit(X_train)
# acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr)
# acc2 = model.score(X_test, Y_test)
# print("IncrementalPCA\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

# LatentDirichletAllocation()
model = LatentDirichletAllocation()
model.fit(X_train)
# acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr)
# acc2 = model.score(X_test, Y_test)
# print("LatentDirichletAllocation\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

This loads the iris dataset and splits it into a training set, an incremental set, and a test set. It first imports the required libraries and modules and loads the data, then shuffles the dataset and divides it into three subsets: training (50%), incremental (20%), and test (30%), printing the shape of each subset.

Three different models are then trained incrementally: first MiniBatchDictionaryLearning, then IncrementalPCA, and finally LatentDirichletAllocation. For each model, fit() is called on the training data and partial_fit() on the incremental data; labels are not passed, since these estimators are unsupervised. The commented-out lines were meant to compute and print each model's accuracy before and after the incremental step, but classification accuracy does not apply to unsupervised models, which is why they stay commented out; the sketch below shows metrics that do apply.
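
A minimal sketch of how two of these unsupervised models can still be compared before and after the incremental step, using reconstruction error for IncrementalPCA and perplexity for LatentDirichletAllocation (the component counts and split sizes here are illustrative assumptions):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import IncrementalPCA, LatentDirichletAllocation

X = load_iris().data
rng = np.random.RandomState(0)
X = X[rng.permutation(X.shape[0])]
X_train, X_incr, X_test = X[:75], X[75:105], X[105:]

# IncrementalPCA: compare reconstruction error on held-out data.
ipca = IncrementalPCA(n_components=2)
ipca.fit(X_train)
err1 = np.mean((X_test - ipca.inverse_transform(ipca.transform(X_test))) ** 2)
ipca.partial_fit(X_incr)
err2 = np.mean((X_test - ipca.inverse_transform(ipca.transform(X_test))) ** 2)
print("IncrementalPCA reconstruction MSE: {:.4f} -> {:.4f}".format(err1, err2))

# LDA: compare perplexity on held-out data (lower is better).
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(X_train)
ppl1 = lda.perplexity(X_test)
lda.partial_fit(X_incr)
ppl2 = lda.perplexity(X_test)
print("LDA perplexity: {:.2f} -> {:.2f}".format(ppl1, ppl2))

The next example turns to clustering with MiniBatchKMeans.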

from sklearn.cluster import MiniBatchKMeans
import numpy as np
X = np.array([[1, 2], [1, 4], [1, 0],
                [4, 2], [4, 0], [4, 4],
                [4, 5], [0, 1], [2, 2],
                [3, 2], [5, 5], [1, -1]])
print(X.shape)
# manually fit on batches
kmeans = MiniBatchKMeans(n_clusters=2, random_state=0, batch_size=6)
kmeans = kmeans.partial_fit(X[0:6,:])
kmeans = kmeans.partial_fit(X[6:12,:])
print(kmeans.cluster_centers_)
print(kmeans.predict([[0, 0], [4, 4]]))
# fit on the whole data
kmeans = MiniBatchKMeans(n_clusters=2, random_state=0, batch_size=6, max_iter=10).fit(X)
print(kmeans.cluster_centers_)
print(kmeans.predict([[0, 0], [4, 4]]))

 

This first imports the MiniBatchKMeans class and numpy, then builds a two-dimensional array X of 12 samples. A MiniBatchKMeans object is constructed with 2 clusters, random seed 0, and a batch size of 6. The data are then fitted manually in batches via partial_fit: first the first 6 samples, then the remaining 6. The cluster centers and the predicted clusters of two new points are printed. Finally, the whole dataset is fitted in a single call to fit, and the cluster centers and predictions are printed again for comparison.

import numpy as np

from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import BernoulliNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import PassiveAggressiveClassifier

# introduction of dataset : https://scikit-learn.org/stable/modules/classes.html#module-sklearn.datasets
from sklearn.datasets import load_iris
from sklearn.datasets import load_digits

iris = load_iris()
X = iris.data
Y = iris.target

permutation = np.random.permutation(X.shape[0])
shuffled_X = X[permutation, :]
shuffled_Y = Y[permutation]

X_train = shuffled_X[:int(X.shape[0]*0.2), :]
Y_train = shuffled_Y[:int(X.shape[0]*0.2)]

X_incr = shuffled_X[int(X.shape[0]*0.2):int(X.shape[0]*0.7), :]
Y_incr = shuffled_Y[int(X.shape[0]*0.2):int(X.shape[0]*0.7)]

X_test = shuffled_X[int(X.shape[0]*0.7):, :]
Y_test = shuffled_Y[int(X.shape[0]*0.7):]

print("shape of X_train:{}, {}".format(X_train.shape[0], X_train.shape[1]))
print("shape of X_incr:{}, {}".format(X_incr.shape[0], X_incr.shape[1]))
print("shape of X_test:{}, {}".format(X_test.shape[0], X_test.shape[1]))

# MultinomialNB()
model = MultinomialNB()
model.fit(X_train, Y_train)
acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr, Y_incr)
acc2 = model.score(X_test, Y_test)
print("MultinomialNB for iris\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

# BernoulliNB()
model = BernoulliNB()
model.fit(X_train, Y_train)
acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr, Y_incr)
acc2 = model.score(X_test, Y_test)
print("BernoulliNB for iris\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

# Perceptron()
model = Perceptron()
model.fit(X_train, Y_train)
acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr, Y_incr)
acc2 = model.score(X_test, Y_test)
print("Perceptron for iris\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

# SGDClassifier()
model = SGDClassifier()
model.fit(X_train, Y_train)
acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr, Y_incr)
acc2 = model.score(X_test, Y_test)
print("SGDClassifier for iris\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

# PassiveAggressiveClassifier()
model = PassiveAggressiveClassifier()
model.fit(X_train, Y_train)
acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr, Y_incr)
acc2 = model.score(X_test, Y_test)
print("PassiveAggressiveClassifier for iris\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

digits = load_digits()
X = digits.data
Y = digits.target

permutation = np.random.permutation(X.shape[0])
shuffled_X = X[permutation, :]
shuffled_Y = Y[permutation]

X_train = shuffled_X[:int(X.shape[0]*0.2), :]
Y_train = shuffled_Y[:int(X.shape[0]*0.2)]

X_incr = shuffled_X[int(X.shape[0]*0.2):int(X.shape[0]*0.7), :]
Y_incr = shuffled_Y[int(X.shape[0]*0.2):int(X.shape[0]*0.7)]

X_test = shuffled_X[int(X.shape[0]*0.7):, :]
Y_test = shuffled_Y[int(X.shape[0]*0.7):]

print("shape of X_train:{}, {}".format(X_train.shape[0], X_train.shape[1]))
print("shape of X_incr:{}, {}".format(X_incr.shape[0], X_incr.shape[1]))
print("shape of X_test:{}, {}".format(X_test.shape[0], X_test.shape[1]))

# MultinomialNB()
model = MultinomialNB()
model.fit(X_train, Y_train)
acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr, Y_incr)
acc2 = model.score(X_test, Y_test)
print("MultinomialNB for digits\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

# BernoulliNB()
model = BernoulliNB()
model.fit(X_train, Y_train)
acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr, Y_incr)
acc2 = model.score(X_test, Y_test)
print("BernoulliNB for digits\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

# Perceptron()
model = Perceptron()
model.fit(X_train, Y_train)
acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr, Y_incr)
acc2 = model.score(X_test, Y_test)
print("Perceptron for digits\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

# SGDClassifier()
model = SGDClassifier()
model.fit(X_train, Y_train)
acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr, Y_incr)
acc2 = model.score(X_test, Y_test)
print("SGDClassifier for digits\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

# PassiveAggressiveClassifier()
model = PassiveAggressiveClassifier()
model.fit(X_train, Y_train)
acc1 = model.score(X_test, Y_test)
model.partial_fit(X_incr, Y_incr)
acc2 = model.score(X_test, Y_test)
print("PassiveAggressiveClassifier for digits\ninitial accuracy:{}, after incremental:{}".format(acc1, acc2))

This compares the behavior of different classifiers on two datasets (iris and handwritten digits). First, each dataset is loaded and split into a training set, an incremental set, and a test set. Then several classifiers (MultinomialNB, BernoulliNB, Perceptron, SGDClassifier, and PassiveAggressiveClassifier) are trained and evaluated on both datasets. Finally, each classifier's accuracy after the initial fit and after the incremental partial_fit step is computed and printed.
