Big Data Algorithms, Experiment 1: Loss Functions

1. Purpose and Requirements

(1) Build a neural network on the MNIST dataset (network structure as provided in the course PPT; do not use the pretrained pickle file, and assign random initial values to the weight parameters).

(2) Loss functions: implement two loss function definitions and output, for comparison, the error loss they report on a randomly chosen handwritten-digit sample from the test set.
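For reference, the two loss functions compared below are the mean squared error and the cross-entropy error, where y_k is the network output for class k and t_k the corresponding entry of the one-hot label (the code adds a small constant 1e-7 inside the logarithm for numerical stability):

E_{\mathrm{MSE}} = \frac{1}{2}\sum_k (y_k - t_k)^2, \qquad E_{\mathrm{CEE}} = -\sum_k t_k \log y_k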

2. Experiment Environment

Python 3.9

PyCharm 2022

Windows 10

# coding: utf-8
import sys, os
sys.path.append(os.pardir)  # allow importing dataset/ and common/ from the parent directory
import numpy as np
from dataset.mnist import load_mnist
from common.functions import sigmoid, softmax, identity_function

# load MNIST: pixel values scaled to [0, 1], images flattened to 784-dim vectors, labels one-hot encoded
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, flatten=True, one_hot_label=True)


# mean squared error
def mean_squared_error(y,t):
    return 0.5*np.sum((y-t)**2)

# cross-entropy error
def meancross_entropy_error0(y, t):
    # if the input is one-dimensional (a single sample), reshape y and t into
    # row vectors so that batch_size below evaluates to 1
    if y.ndim == 1:
        t = t.reshape(1, t.size)
        y = y.reshape(1, y.size)
    batch_size = y.shape[0]  # number of rows in y, i.e. the number of samples
    return -np.sum(t * np.log(y + 1e-7)) / batch_size
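# Quick sanity check of the two functions above, using illustrative numbers rather
# than real network output: for a one-hot label t = [0,0,1,0,0,0,0,0,0,0] and an
# output y = [0.1,0.05,0.6,0.0,0.05,0.1,0.0,0.1,0.0,0.0],
#   mean_squared_error(y, t)       -> 0.5 * 0.195 = 0.0975
#   meancross_entropy_error0(y, t) -> -log(0.6 + 1e-7) ≈ 0.51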

# randomly initialize a weight matrix w of shape (t1, t2) with values in [0, 1)
def init_w(w,t1,t2):
    for i in range(0, t1):
        t = []
        for j in range(0, t2):
            t.append(np.random.random())
        w.append(t)
    w1 = np.array(w)
    return w1

# randomly initialize a bias vector b of length t with values in [0, 1)
def init_b(b,t):
    for i in range(0, t):
        b.append(np.random.random())
    b1 = np.array(b)
    return b1
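# Note: the two helpers above fill the arrays element by element; with NumPy the same
# kind of uniform [0, 1) initialization can be produced in a single call, e.g.
# np.random.random((t1, t2)) for a weight matrix and np.random.random(t) for a bias vector.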

# build the network used for inference: randomly initialize all weights and biases
def init_network():
    np.random.seed(2)  # fix the seed so the random initial parameters are reproducible
    t1, t2, t3, t4 = 784, 50, 100, 10  # layer sizes: 784 inputs, hidden layers of 50 and 100, 10 outputs
    w1 = init_w([], t1, t2)
    w2 = init_w([], t2, t3)
    w3 = init_w([], t3, t4)
    b1 = init_b([], t2)
    b2 = init_b([], t3)
    b3 = init_b([], t4)
    network = {}
    network['W1'] = w1
    network['b1'] = b1
    network['W2'] = w2
    network['b2'] = b2
    network['W3'] = w3
    network['b3'] = b3

    return network

def forward(network, x):
    W1, W2, W3 = network['W1'], network['W2'], network['W3']
    b1, b2, b3 = network['b1'], network['b2'], network['b3']

    a1 = np.dot(x, W1) + b1
    z1 = sigmoid(a1)
    a2 = np.dot(z1, W2) + b2
    z2 = sigmoid(a2)
    a3 = np.dot(z2, W3) + b3
    y = identity_function(a3)

    return y

def predict(network, x):
    W1, W2, W3 = network['W1'], network['W2'], network['W3']
    b1, b2, b3 = network['b1'], network['b2'], network['b3']
    a1 = np.dot(x, W1) + b1
    z1 = sigmoid(a1)
    a2 = np.dot(z1, W2) + b2
    z2 = sigmoid(a2)
    a3 = np.dot(z2, W3) + b3
    y = softmax(a3)

    return y
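# Note: forward() ends with the identity function while predict() ends with softmax.
# The comparison below uses predict(), because its softmax output can be read as class
# probabilities, which is what the cross-entropy loss expects.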

np.random.seed(7)                   # seed used for picking the random test sample
ran = np.random.randint(0, 10000)   # index of a random sample in the test set
network = init_network()
t = t_test[ran]                     # one-hot label of the chosen sample

y = predict(network, x_test[ran])   # network output (softmax probabilities)
print(t)                            # true label (one-hot)
print(np.argmax(y))                 # predicted class
r1 = mean_squared_error(y, t)
print(r1)                           # mean squared error of the prediction
r2 = meancross_entropy_error0(y, t)
print(r2)                           # cross-entropy error of the prediction
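As a possible extension (a sketch that goes beyond what the assignment asks for), the same comparison can also be averaged over a small random mini-batch of test samples; the snippet below reuses the network and the two loss functions defined above.

# Sketch: average the two losses over a random mini-batch of test samples.
# Assumes the script above has already run, so network, x_test and t_test exist.
batch_size = 10
batch_idx = np.random.choice(x_test.shape[0], batch_size)
x_batch = x_test[batch_idx]
t_batch = t_test[batch_idx]
y_batch = predict(network, x_batch)                            # shape (batch_size, 10)
mse_each = 0.5 * np.sum((y_batch - t_batch) ** 2, axis=1)      # per-sample MSE
cee_each = -np.sum(t_batch * np.log(y_batch + 1e-7), axis=1)   # per-sample cross-entropy
print(mse_each.mean())                                         # average MSE over the batch
print(cee_each.mean())                                         # average cross-entropy over the batch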








