Gradient Descent Explained in Detail

工程与房产肖律师 · 2022-06-06 · 104 reads
#coding:utf-8
import numpy as np

w = 2
b = 2
lr = 0.01          # learning rate
notDec = 0         # consecutive steps in which the loss did not decrease
MAX_NOT_DEC = 10   # stop after this many non-improving steps

x = np.array([1, 3, 10, 100, -1, -10], dtype=np.float32)
y = np.array([2, 7, 19, 190, -2, -20], dtype=np.float32)

n = len(x)
_2n = 2 * n

def model(x):
    return w * x + b

def loss(x, y):
    # root of the sum of squared errors, scaled by 2n
    return np.sqrt(np.sum(np.square(model(x) - y))) / _2n

# derivative of the squared error with respect to b (the mean residual)
def loss_grad(x, y):
    return np.sum(model(x) - y) / n

step = 0
lastL = np.inf
while True:
    grad = loss_grad(x, y)

    # the w update divides the gradient by each feature value,
    # so no feature may be 0
    w -= lr * np.mean(grad / x)
    b -= lr * grad
    l = loss(x, y)

    if l < lastL:       # loss improved: reset the counter
        notDec = 0
    else:
        notDec += 1

    if step % 100 == 0:
        print(l)
    step += 1

    if notDec >= MAX_NOT_DEC:
        break
    lastL = l

print(w, b, loss(x, y), model(10000), model(-20000))
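As a quick sanity check (my addition, not part of the original post), the learned line can be compared against the closed-form least-squares fit, which `np.polyfit` computes directly for this data:

```python
import numpy as np

x = np.array([1, 3, 10, 100, -1, -10], dtype=np.float32)
y = np.array([2, 7, 19, 190, -2, -20], dtype=np.float32)

# degree-1 least-squares fit; returns (slope, intercept)
w_ls, b_ls = np.polyfit(x, y, 1)
print(w_ls, b_ls)  # roughly w ≈ 1.90, b ≈ 0.03
```

Gradient descent should land close to these values when it has converged.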

The training result is fairly satisfactory, but there is one problem: no feature value may be 0, because the update for w divides the gradient by x, which would raise a division-by-zero error.
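The division by x can be avoided entirely by using the standard gradient of the squared-error loss: for L = mean((wx + b − y)²) / 2, the chain rule gives ∂L/∂w = mean(err · x) and ∂L/∂b = mean(err), so the w gradient multiplies by x instead of dividing by it. A minimal sketch of that variant (the hyperparameters here are my own choices, not from the original):

```python
import numpy as np

x = np.array([1, 3, 10, 100, -1, -10])
y = np.array([2, 7, 19, 190, -2, -20])

w, b = 2.0, 2.0
lr = 0.001  # smaller than above: mean(x**2) is large for this data

for step in range(10000):
    err = w * x + b - y
    # gradients of L = mean(err**2) / 2
    grad_w = np.mean(err * x)  # multiply by x; a feature value of 0 is harmless
    grad_b = np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # converges to roughly w ≈ 1.90, b ≈ 0.03
```

Since no division by x occurs, a sample with x = 0 simply contributes nothing to grad_w and the mean residual to grad_b.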
