Predicting Daily Yogurt Sales with a Custom Loss Function

Custom loss function:

Take predicting product sales as an example: if we predict too much, we lose the cost of the unsold units; if we predict too little, we lose the profit on the missed sales.

If the profit per unit is not equal to the cost per unit, a model trained with the MSE loss cannot maximize profit, because MSE penalizes over- and under-prediction symmetrically.

Define the custom loss function loss(y_, y) = \sum_{n} f(y_, y), where y_ is the ground-truth label from the dataset and y is the model's prediction.

f(y_, y) = PROFIT * (y_ - y)   if y < y_    (under-prediction: each missing unit loses PROFIT)

f(y_, y) = COST * (y - y_)     if y >= y_   (over-prediction: each extra unit loses COST)

loss_zdy = tf.reduce_sum(tf.where(tf.greater(y, y_), (y - y_) * COST, (y_ - y) * PROFIT))

Here tf.where evaluates tf.greater(y, y_) elementwise: where y > y_ it takes the COST branch (y - y_) * COST, otherwise it takes the PROFIT branch (y_ - y) * PROFIT.
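As a quick illustration of how tf.where routes each sample to the right branch, here is a minimal sketch; the values of y and y_ below are toy numbers chosen for illustration only:

import tensorflow as tf

COST = 1
PROFIT = 99

# toy data: samples 1 and 3 over-predict, sample 2 under-predicts
y  = tf.constant([[1.2], [0.8], [2.1]])   # predictions
y_ = tf.constant([[1.0], [1.0], [2.0]])   # ground-truth labels

per_sample = tf.where(tf.greater(y, y_), (y - y_) * COST, (y_ - y) * PROFIT)
loss_zdy = tf.reduce_sum(per_sample)

print(per_sample.numpy())   # roughly [[0.2], [19.8], [0.1]]: COST, PROFIT, COST branches
print(loss_zdy.numpy())     # roughly 20.1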

For example, when predicting yogurt sales, suppose the cost per unit is COST = 1 yuan and the profit per unit is PROFIT = 99 yuan.

Under-predicting by one unit loses 99 yuan of profit, far more than the 1 yuan of cost lost by over-predicting by one unit.

Because under-prediction is penalized much more heavily, we expect the trained model to predict on the high side.
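A standard expected-loss argument makes "predict on the high side" precise: treat the true demand D for a given input as a random variable and ask which prediction y minimizes the expected custom loss.

\mathbb{E}[f(D, y)] = \mathrm{PROFIT}\cdot\mathbb{E}[(D - y)^{+}] + \mathrm{COST}\cdot\mathbb{E}[(y - D)^{+}]

\frac{d}{dy}\,\mathbb{E}[f(D, y)] = -\mathrm{PROFIT}\cdot P(D > y) + \mathrm{COST}\cdot P(D \le y) = 0 \;\Longrightarrow\; P(D \le y^{*}) = \frac{\mathrm{PROFIT}}{\mathrm{PROFIT} + \mathrm{COST}}

With PROFIT = 99 and COST = 1 the optimum is the 0.99 quantile of demand, so the best prediction sits near the top of the demand range; with the two constants swapped it is the 0.01 quantile. This is exactly the "predict high / predict low" behavior observed in the two experiments below.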

import tensorflow as tf
import numpy as np

SEED = 23455
COST = 1      # cost lost per over-predicted unit
PROFIT = 99   # profit lost per under-predicted unit

rdm = np.random.RandomState(SEED)
x = rdm.rand(32, 2)   # 32 samples, 2 features
# label = x1 + x2 + noise; noise = rand()/10 - 0.05 is uniform in [-0.05, 0.05)
y_ = [[x1 + x2 + (rdm.rand() / 10.0 - 0.05)] for (x1, x2) in x]
x = tf.cast(x, dtype=tf.float32)

w1 = tf.Variable(tf.random.normal([2, 1], stddev=1, seed=1))

epochs = 10000
lr = 0.002

for epoch in range(epochs):
    with tf.GradientTape() as tape:
        y = tf.matmul(x, w1)
        # asymmetric custom loss: COST branch when over-predicting, PROFIT branch when under-predicting
        loss = tf.reduce_sum(tf.where(tf.greater(y, y_), (y - y_) * COST, (y_ - y) * PROFIT))

    grads = tape.gradient(loss, w1)
    w1.assign_sub(lr * grads)   # gradient-descent update of the weights

    if epoch % 500 == 0:
        print("After %d training steps,w1 is " % (epoch))
        print(w1.numpy(), "\n")
print("Final w1 is: ", w1.numpy())

# Custom loss function
# Yogurt cost: 1 yuan; yogurt profit: 99 yuan
# The cost is low and the profit is high, so we want the model to over-predict: the learned coefficients come out greater than 1, i.e. the predictions lean high.

The result is Final w1 is:  [[1.1626338],[1.1191947]]

Sales y = 1.16 x1 + 1.12 x2, so the model does indeed lean toward over-prediction.
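As a quick sanity check (not in the original post), the final weights quoted above can be plugged back into the same data to confirm that the model mostly over-predicts; the sketch below regenerates the dataset with the same SEED and uses the final w1 printed above:

import tensorflow as tf
import numpy as np

SEED = 23455
rdm = np.random.RandomState(SEED)
x = rdm.rand(32, 2)
y_ = np.array([[x1 + x2 + (rdm.rand() / 10.0 - 0.05)] for (x1, x2) in x], dtype=np.float32)
x = tf.cast(x, dtype=tf.float32)

# final weights reported by the run above
w1 = tf.constant([[1.1626338], [1.1191947]], dtype=tf.float32)

y = tf.matmul(x, w1)
over = tf.reduce_mean(tf.cast(tf.greater(y, y_), tf.float32))
print("fraction of samples predicted above the label:", over.numpy())
# expected to be close to 1.0, i.e. the model leans toward over-prediction

For reference, the full training log of the original run is reproduced below.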

After 0 training steps,w1 is 
[[2.0855925]
 [3.8476257]] 

After 500 training steps,w1 is 
[[1.1830755]
 [1.1627482]] 

After 1000 training steps,w1 is 
[[1.1526375]
 [1.0175619]] 

After 1500 training steps,w1 is 
[[1.1430174]
 [1.0488456]] 

After 2000 training steps,w1 is 
[[1.1333973]
 [1.0801294]] 

After 2500 training steps,w1 is 
[[1.1237769]
 [1.1114128]] 

After 3000 training steps,w1 is 
[[1.1727902]
 [1.1539897]] 

After 3500 training steps,w1 is 
[[1.1423521]
 [1.0088034]] 

After 4000 training steps,w1 is 
[[1.1327323]
 [1.0400873]] 

After 4500 training steps,w1 is 
[[1.1231124]
 [1.0713713]] 

After 5000 training steps,w1 is 
[[1.1134924]
 [1.102655 ]] 

After 5500 training steps,w1 is 
[[1.1625059]
 [1.1452322]] 

After 6000 training steps,w1 is 
[[1.1528856]
 [1.1765157]] 

After 6500 training steps,w1 is 
[[1.1224473]
 [1.0313292]] 

After 7000 training steps,w1 is 
[[1.1128272]
 [1.0626129]] 

After 7500 training steps,w1 is 
[[1.1618406]
 [1.1051899]] 

After 8000 training steps,w1 is 
[[1.1522204]
 [1.1364735]] 

After 8500 training steps,w1 is 
[[1.1426007]
 [1.1677576]] 

After 9000 training steps,w1 is 
[[1.1707958]
 [1.0338644]] 

After 9500 training steps,w1 is 
[[1.1611758]
 [1.0651482]] 

Final w1 is:  [[1.1626338]
 [1.1191947]]


# If the yogurt cost is 99 yuan and the profit is 1 yuan
# The cost is high and the profit is low, so we want the model to under-predict: the learned coefficients come out less than 1, i.e. the predictions lean low.

The final result is Final w1 is:  [[0.9205433 ],[0.91864675]]

Sales y = 0.92 x1 + 0.92 x2, so the model does indeed lean toward under-prediction.
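To reproduce this second experiment, only the two constants at the top of the training script need to change; a minimal sketch of the modification:

# swap the roles of the two constants in the training script above
COST = 99    # over-predicting now loses 99 yuan of cost per unit
PROFIT = 1   # under-predicting now loses only 1 yuan of profit per unit
# with the same training loop, over-prediction is penalized heavily,
# so the learned coefficients settle below 1 and the predictions lean low

The full training log for this configuration follows.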

After 0 training steps,w1 is 
[[-0.90047467]
 [ 0.65962833]] 

After 500 training steps,w1 is 
[[0.88824874]
 [0.8874245 ]] 

After 1000 training steps,w1 is 
[[0.89066035]
 [0.915727  ]] 

After 1500 training steps,w1 is 
[[0.84931767]
 [0.83809465]] 

After 2000 training steps,w1 is 
[[0.9740528 ]
 [0.91474074]] 

After 2500 training steps,w1 is 
[[0.7827651]
 [0.9156983]] 

After 3000 training steps,w1 is 
[[0.95219666]
 [0.90977126]] 

After 3500 training steps,w1 is 
[[0.8128278]
 [0.9497371]] 

After 4000 training steps,w1 is 
[[0.9340645]
 [0.9168334]] 

After 4500 training steps,w1 is 
[[0.8806711]
 [0.8687921]] 

After 5000 training steps,w1 is 
[[0.8109257]
 [0.9515211]] 

After 5500 training steps,w1 is 
[[0.93112636]
 [0.8913892 ]] 

After 6000 training steps,w1 is 
[[0.8895594]
 [0.9167356]] 

After 6500 training steps,w1 is 
[[0.88746077]
 [0.723654  ]] 

After 7000 training steps,w1 is 
[[0.8583137]
 [0.8373418]] 

After 7500 training steps,w1 is 
[[0.953457  ]
 [0.97058845]] 

After 8000 training steps,w1 is 
[[0.7950557 ]
 [0.94374996]] 

After 8500 training steps,w1 is 
[[0.9486869 ]
 [0.87748396]] 

After 9000 training steps,w1 is 
[[0.8042901]
 [0.7632226]] 

After 9500 training steps,w1 is 
[[0.8998995]
 [0.8250995]] 

Final w1 is:  [[0.9205433 ]
 [0.91864675]]

