Implementing a Single-Layer Neural Network from Andrew Ng's Deep Learning Course


Dataset download:

train_catvnoncat.h5 (training set), test_catvnoncat.h5 (test set)

Link: link. Extraction code: qxm6

Code (adapted from someone else's; see reference [1] at the end)

This is my first time writing one of these, and I don't know how to embed an .ipynb file, so the exported script is pasted below.

# To add a new cell, type '# %%'
# To add a new markdown cell, type '# %% [markdown]'
# %%

import numpy as np
import h5py
    
# Load the datasets (paths point to the author's local download folder)
train_dataset = h5py.File('C:/Users/Fanta/Downloads/train_catvnoncat.h5', "r")
train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features
train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels

test_dataset = h5py.File('C:/Users/Fanta/Downloads/test_catvnoncat.h5', "r")
test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features
test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels

classes = np.array(test_dataset["list_classes"][:]) # the list of classes

train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    
print(train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes)


# %%
train_dataset.keys()  # the file behaves like a dict: each key maps to a dataset (value)


# %%
train_set_x_orig.shape  # 209 RGB images of size 64x64: (209, 64, 64, 3)


# %%
train_set_y_orig.shape


# %%
train_set_y_orig


# %%
classes  # two classes: one is cat, the other is non-cat
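

# %%
# h5py returns the class names as byte strings (e.g. b'cat'); this optional,
# added line decodes them for display, assuming UTF-8 text
print([c.decode("utf-8") for c in classes])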


# %%
# Activation functions
def ReLU(z):
    return np.maximum(0, z)

# Clipping z avoids the "RuntimeWarning: overflow encountered in exp"
# that np.exp raises when -z is very large
def sigmoid(z):
    z = np.clip(z, -500, 500)
    return 1.0/(1 + np.exp(-z))

# Flatten a 4-D image array of shape (m, h, w, c) into a 2-D array
# of shape (h*w*c, m), one flattened image per column
def transformArray(inX):
    return inX.reshape(inX.shape[0], -1).T
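

# %%
# Optional sanity check for transformArray: a batch of 5 fake 2x2x3 images
# should flatten to shape (12, 5); demo is a throwaway array made up here
demo = np.random.rand(5, 2, 2, 3)
print(transformArray(demo).shape)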
 


# %%
train_data_x = transformArray(train_set_x_orig).T
test_data_x = transformArray(test_set_x_orig).T
train_data_y = train_set_y_orig.T
test_data_y = test_set_y_orig.T


# %%
print(train_data_x.shape,test_data_x.shape)
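
# %%
# The overflow warning in sigmoid comes from feeding raw pixel values (0-255)
# into the model, which makes |z| very large. A common fix, used in the
# course's version of this exercise, is to scale pixels to [0, 1]; this
# optional step was added here:
train_data_x = train_data_x / 255.0
test_data_x = test_data_x / 255.0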


# %%
# Weight initialization (actually done in the training cell further down)
# W = np.random.randn(train_data_x.shape[1],1)
# b = np.random.randn(1)


# %%
# W and b are only defined in the training cell below (and b there is a
# plain scalar with no .shape), so this check is commented out:
# print(W.shape, b.shape)


# %%
'''A quick experiment: comparing with 0.5 and multiplying by 1 turns an array
of scores into 0/1 labels, the same trick predictFunc uses below'''
k = np.array([0,2,0.3,0.8,0.1])
k = (k>0.5)*1
k


# %%
# Loss function: binary cross-entropy averaged over the m examples; its
# gradient with respect to Z is A - Y, which trainFunc uses below
def lossFunc(A, y_train):
    m = y_train.shape[0]
    eps = 1e-8  # guard against log(0)
    return -np.sum(y_train*np.log(A + eps) + (1 - y_train)*np.log(1 - A + eps))/m
    
#单层神经网络
class myOneLNN:
    def __init__(self,W,b,alpha,af,iterNum): # af is an activation function; iterNum is the number of iterations
        self.W = W
        self.b = b
        self.af = af
        self.iterNum = iterNum
        self.alpha = alpha
        
    def trainFunc(self,X_train,y_train):
        
        m = X_train.shape[0]
        X = X_train
        Y = y_train
        
        for i in range(self.iterNum):  # self.iterNum, not the global iterNum
            Z = X.dot(self.W) + self.b  # forward pass
            A = self.af(Z)
            dZ = A - Y                  # gradient of the cross-entropy loss w.r.t. Z
            dW = X.T.dot(dZ)/m          # average over all m examples, like db below
            db = dZ.sum()/m
            self.W -= self.alpha*dW
            self.b -= self.alpha*db
        return self.W,self.b
    
    def predictFunc(self,X_test):
        y_pred = self.af(X_test.dot(self.W) + self.b)
        y_pred = 1*(y_pred>0.5)  # threshold the sigmoid output at 0.5
        return y_pred
    
    def testErrors(self,X_test,y_test):
        acc = (self.predictFunc(X_test) == y_test).sum()*1.0/len(y_test)
        print("acc: {}".format(acc))


# %%
iterNum = 500
W = np.random.randn(train_data_x.shape[1],1)
b = 0
af = sigmoid
alpha = 0.001
oneLNN = myOneLNN(W,b,alpha,af,iterNum)


# %%
W1,b1 = oneLNN.trainFunc(train_data_x,train_data_y)
b1
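

# %%
# Optional: evaluate the cross-entropy loss on the training set with the
# trained parameters, using the lossFunc defined earlier
A_train = sigmoid(train_data_x.dot(W1) + b1)
print(lossFunc(A_train, train_data_y))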


# %%
oneLNN.predictFunc(test_data_x)


# %%
oneLNN.testErrors(test_data_x,test_data_y)
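

# %%
# Optional: run the same accuracy check on the training set, to compare
# against the test accuracy and gauge overfitting
oneLNN.testErrors(train_data_x, train_data_y)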


# %%



References

[1] https://blog.csdn.net/zhuzuwei/article/details/77950330
