【Machine Learning Experiment 3】SoftMax regression


The magical SoftMax regression: I spent a whole evening on it without success, got up at 3 a.m. to keep going, and just now finally got it working. Now I understand it thoroughly, haha. The code experiments with the SoftMax regression algorithm covered in Andrew Ng's fourth lecture, with reference to http://ufldl.stanford.edu/wiki/index.php/Softmax_Regression
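For reference, here is the hypothesis being fit, written in LaTeX. This is my reading of the code below, which stores only two parameter vectors and so treats class 3 as the reference class with $\theta_3 = 0$ (the K-1 parameterization from Ng's lecture, rather than the redundant K-vector form on the UFLDL page):

$$P(y=k \mid x;\theta) = \frac{e^{\theta_k^T x}}{1 + \sum_{j=1}^{2} e^{\theta_j^T x}} \ (k=1,2), \qquad P(y=3 \mid x;\theta) = \frac{1}{1 + \sum_{j=1}^{2} e^{\theta_j^T x}}$$

Training maximizes the log-likelihood $\ell(\theta)=\sum_i \log P(y^{(i)} \mid x^{(i)};\theta)$; the "likelihood" column in the output below is the per-sample term $\log P(y^{(i)} \mid x^{(i)};\theta)$, e.g. $\log 0.983690 = -0.016445$ for sample 0.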

It finally converged to this result, which felt great:

sample 0: 0.983690,0.004888,0.011422,likelihood:-0.016445
sample 1: 0.940236,0.047957,0.011807,likelihood:-0.061625
sample 2: 0.818187,0.001651,0.180162,likelihood:-0.200665
sample 3: 0.000187,0.999813,0.000000,likelihood:-0.000187
sample 4: 0.007913,0.992087,0.000000,likelihood:-0.007945
sample 5: 0.001585,0.998415,0.000000,likelihood:-0.001587
sample 6: 0.020159,0.000001,0.979840,likelihood:-0.020366
sample 7: 0.018230,0.000000,0.981770,likelihood:-0.018398
sample 8: 0.025072,0.000000,0.974928,likelihood:-0.025392


#include "stdio.h"
#include "math.h"

double matrix[9][4]={
  {1,47,76,24}, //include x0=1
              {1,46,77,23},
              {1,48,74,22},
              {1,34,76,21},
              {1,35,75,24},
              {1,34,77,25},
              {1,55,76,21},
              {1,56,74,22},
              {1,55,72,22},
                };

double result[]={1,
                 1,
                 1,
                 2,
                 2,
                 2,
                 3,
                 3,
                 3,};

double theta[2][4]={
                 {0.3,0.3,0.01,0.01},
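The post's code is cut off at this point by the page. What follows is a minimal sketch of the missing remainder, continuing the same file: batch gradient ascent on the log-likelihood, as described on the UFLDL page. The function name prob, the learning rate alpha, and the iteration count are my assumptions, not recovered from the original.

/* P(y = k+1 | x) for k = 0,1,2; class 3 (k = 2) is the reference class. */
double prob(int k, double x[4]) {
    double e[2], denom = 1.0;           /* reference class contributes exp(0) = 1 */
    for (int j = 0; j < 2; j++) {
        double dot = 0.0;
        for (int i = 0; i < 4; i++)
            dot += theta[j][i] * x[i];
        e[j] = exp(dot);
        denom += e[j];
    }
    return (k < 2 ? e[k] : 1.0) / denom;
}

int main(void) {
    double alpha = 0.0001;              /* assumed learning rate */
    for (int iter = 0; iter < 500000; iter++) {
        double grad[2][4] = {{0}};
        for (int s = 0; s < 9; s++)
            for (int k = 0; k < 2; k++) {
                /* gradient of the log-likelihood: (1{y = k+1} - P(k+1|x)) * x */
                double err = ((int)result[s] == k + 1) - prob(k, matrix[s]);
                for (int i = 0; i < 4; i++)
                    grad[k][i] += err * matrix[s][i];
            }
        for (int k = 0; k < 2; k++)
            for (int i = 0; i < 4; i++)
                theta[k][i] += alpha * grad[k][i];   /* ascent step */
    }

    /* per-sample class probabilities and log-likelihood of the true class,
       in the same format as the output shown above */
    for (int s = 0; s < 9; s++) {
        double p[3];
        for (int k = 0; k < 3; k++)
            p[k] = prob(k, matrix[s]);
        printf("sample %d: %f,%f,%f,likelihood:%f\n",
               s, p[0], p[1], p[2], log(p[(int)result[s] - 1]));
    }
    return 0;
}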
   