Coursera Andrew Ng Deep Learning, Week 2 Assignment: Logistic Regression in MATLAB

Since I'm not very familiar with Python, I copied and ran the Python code once, then decided to rewrite it in MATLAB.
The original Python version follows this GitHub notebook: https://github.com/Kulbear/deep-learning-coursera/blob/master/Neural Networks and Deep Learning/Logistic Regression with a Neural Network mindset.ipynb

The notation is essentially the same as in the course, so I won't repeat the theory here.

Source material: I searched Baidu for 64×64 cat images (I don't know how to scrape web images, so I just downloaded the whole page), which gave over 1700 images. After manually deleting about 500 that didn't look like cats, a bit over 1200 remained. The plan was to use 800 for training and mix the rest with non-cat images for testing. (In fact, some of the remaining 1200 still aren't cats.)
[Figure: the test images (a dog stands out at a glance)]

1 Reading the images from a folder and preprocessing

I looked at many ways to batch-read images from a folder; this one happens to match Windows batch renaming, but I can no longer find the original blog post it came from.

Some images are read by imread() as a plain 64×64 matrix (grayscale), which caused errors and aborted the loop halfway through. The if statement checks whether imread() returned a 3-channel 64×64×3 RGB array and skips anything else. Of the 800 images, only 784 were read.

files = dir(fullfile('F:\cat\cat_train\','*.jpg'));
lengthFiles = length(files);
train_set_x_o=[];
train_set_x_flatten=[];
for k = 1:200
    Img = imread(strcat('F:\cat\cat_train\',files(k).name)); % path to the image file
    if size(Img,3)==3 % keep only 3-channel RGB images
       train_set_x_o=[train_set_x_o;Img]; % stack the originals vertically for later display
       Img = reshape(Img,size(Img,1)*size(Img,2)*size(Img,3),1); % flatten to one column per image
    else 
        continue % skip grayscale images
    end
       train_set_x_flatten=[train_set_x_flatten,Img];
end
train_set_x=double(train_set_x_flatten)/255; % scale pixels to [0,1]
train_set_y=ones(1,size(train_set_x,2)); % label 1 = cat
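
One gotcha worth noting: im2double already maps uint8 pixels into [0,1], so writing im2double(train_set_x_flatten)/255 would normalize twice and squash every value into [0, 1/255]; that is why the code above uses double(...)/255 instead. A quick console check:

im2double(uint8(255))  % returns 1; im2double has already scaled to [0,1]
double(uint8(255))/255 % also returns 1, the intended normalization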

train_set_x_o keeps the original RGB data; it is still a 3-D array, with the images stacked in 64-row blocks. I also wrote a small function to view a stored image.

function [] = cat_display(train_set_x_o,k)
%cat_display 
%   Display the k-th stored photo (images are stacked in 64-row blocks)
    imshow(train_set_x_o(64*(k-1)+1:64*k,:,:))
end

cat_display(train_set_x_o,15) % view the 15th image
Out of laziness I just copied the block above and changed the paths so the test images get read in too. It's not pretty, but the complete reading code is below.

%Read from file and reshape

%Read training data (cats, label 1)
files = dir(fullfile('F:\Users\949546740\Desktop\cat\cat_train\','*.jpg'));
lengthFiles = length(files);
train_set_x_o_1=[];
train_set_x_o_2=[];
train_set_x_flatten_1=[];
train_set_x_flatten_2=[];
for k = 1:80
    Img = imread(strcat('F:\Users\949546740\Desktop\cat\cat_train\',files(k).name)); % path to the image file
    if size(Img,3)==3 % keep only 3-channel RGB images
       train_set_x_o_1=[train_set_x_o_1;Img];
       Img = reshape(Img,size(Img,1)*size(Img,2)*size(Img,3),1); % flatten to a column
    else 
        continue % skip grayscale images
    end
       train_set_x_flatten_1=[train_set_x_flatten_1,Img];
end
train_set_x_1=double(train_set_x_flatten_1)/255; % scale pixels to [0,1]
train_set_y_1=ones(1,size(train_set_x_1,2)); % label 1 = cat

%Read training data (non-cats, label 0)
files = dir(fullfile('F:\Users\949546740\Desktop\cat\non_cat_train\','*.jpg'));
lengthFiles = length(files);
for k = 1:lengthFiles
    Img = imread(strcat('F:\Users\949546740\Desktop\cat\non_cat_train\',files(k).name)); % path to the image file
    if size(Img,3)==3
       train_set_x_o_2=[train_set_x_o_2;Img];
       Img = reshape(Img,size(Img,1)*size(Img,2)*size(Img,3),1);
    else 
        continue
    end
       train_set_x_flatten_2=[train_set_x_flatten_2,Img];
end
train_set_x_2=double(train_set_x_flatten_2)/255;
train_set_y_2=zeros(1,size(train_set_x_2,2)); % label 0 = non-cat

train_set_x_o=[train_set_x_o_1;train_set_x_o_2];
train_set_x=[train_set_x_1,train_set_x_2];
train_set_y=[train_set_y_1,train_set_y_2];

%Read test data (cats, label 1)
files = dir(fullfile('F:\Users\949546740\Desktop\cat\cat_test\','*.jpg'));
lengthFiles = length(files);
test_set_x_o_1=[];
test_set_x_o_2=[];
test_set_x_flatten_1=[];
test_set_x_flatten_2=[];
for k = 1:20
    Img = imread(strcat('F:\Users\949546740\Desktop\cat\cat_test\',files(k).name)); % path to the image file
    if size(Img,3)==3
       test_set_x_o_1=[test_set_x_o_1;Img];
       Img = reshape(Img,size(Img,1)*size(Img,2)*size(Img,3) ,1);
    else 
        continue
    end
       test_set_x_flatten_1=[test_set_x_flatten_1,Img];
end
test_set_x_1=double(test_set_x_flatten_1)/255;
test_set_y_1=ones(1,size(test_set_x_1,2));

%Read test data (non-cats, label 0)
files = dir(fullfile('F:\Users\949546740\Desktop\cat\non_cat_test\','*.jpg'));
lengthFiles = length(files);
for k = 1:40
    Img = imread(strcat('F:\Users\949546740\Desktop\cat\non_cat_test\',files(k).name)); % path to the image file
    if size(Img,3)==3
       test_set_x_o_2=[test_set_x_o_2;Img];
       Img = reshape(Img,size(Img,1)*size(Img,2)*size(Img,3) ,1);
    else 
        continue
    end
       test_set_x_flatten_2=[test_set_x_flatten_2,Img];
end
test_set_x_2=double(test_set_x_flatten_2)/255;
test_set_y_2=zeros(1,size(test_set_x_2,2));

test_set_x_o=[test_set_x_o_1;test_set_x_o_2];
test_set_x=[test_set_x_1,test_set_x_2];
test_set_y=[test_set_y_1,test_set_y_2];
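
Before training, it's worth a quick sanity check on the shapes: each flattened 64×64×3 image should give a 64*64*3 = 12288-entry column, so each X matrix should be 12288-by-m with one column per example.

disp(size(train_set_x)) % expected [12288, m_train]
disp(size(train_set_y)) % expected [1, m_train]
disp(size(test_set_x))  % expected [12288, m_test]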

2 Parts of the algorithm

2-1 Helper Functions

The sigmoid function
function S = sigmoid(Z)
%sigmoid
%Compute the sigmoid of Z
S=1./(1+exp(-Z));
end
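
As a quick check, sigmoid(0) should return 0.5, and the elementwise operators mean it works on whole vectors or matrices at once:

sigmoid([0 2]) % returns approximately [0.5 0.8808]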
Parameter initialization

This initializes w and b to zeros.

function [w,b] = initialize_with_zeros(dim)
%initialize_with_zeros
%   Initialize w as a dim-by-1 zero vector and b as 0
w=zeros(dim,1);
b=0;
end
Forward and backward propagation
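For reference, the quantities computed below are the standard logistic regression cost and gradients, in the course's notation:

$$A = \sigma(w^{T}X + b)$$
$$J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log a^{(i)} + (1-y^{(i)})\log(1-a^{(i)})\right]$$
$$dw = \frac{1}{m}X(A-Y)^{T},\qquad db = \frac{1}{m}\sum_{i=1}^{m}\left(a^{(i)}-y^{(i)}\right)$$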
function [dw,db,cost] = propagate(w,b,X,Y)
%propagate
%propagation forward
m=size(X,2);
A=sigmoid(w'*X+b);
cost=(-1/m)*sum(Y.*log(A)+(1-Y).*log(1-A));
%propagation backward
dw=(1/m)*X*(A-Y)';
db=(1/m)*sum(A-Y);
end
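
A quick way to sanity-check propagate is to run it on tiny hand-made inputs (the numbers below are arbitrary, chosen only for illustration) and confirm that dw has the same shape as w and that the cost is a positive scalar:

% hypothetical toy-data check
w = [1; 2]; b = 2;
X = [1 2; 3 4]; % 2 features, 2 examples
Y = [1 0];
[dw, db, cost] = propagate(w, b, X, Y);
disp(dw); disp(db); disp(cost)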
Gradient descent
function [w,b,dw,db,costs] = optimize(w,b,X,Y,num_iterations,learningrate)
%optimize
%   Gradient descent: update w and b, recording the cost every 100 iterations
costs=[];
for k=1:num_iterations
    [dw,db,cost]=propagate(w,b,X,Y);
    w=w-learningrate*dw;
    b=b-learningrate*db;
    if mod(k,100)==0
        costs=[costs,cost];
        disp(strcat('Cost after iterations',32,num2str(k),32,'is',32,num2str(cost)));
    end
end
end
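
The model below also calls a predict function that wasn't shown above. A minimal sketch, consistent with the course assignment's convention of thresholding the activation at 0.5:

function Y_prediction = predict(w, b, X)
%predict
%   Predict 1 (cat) where the sigmoid activation exceeds 0.5, else 0
    A = sigmoid(w'*X + b);
    Y_prediction = double(A > 0.5);
end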

That completes all the functions we need.

3 Combining the functions into a model

function [costs,w,b,Y_prediction_test] = model(X_train,Y_train, X_test, Y_test,num_iterations,learningrate)
%model
[w,b]=initialize_with_zeros(size(X_train,1));
[w,b,dw,db,costs]=optimize(w,b,X_train,Y_train,num_iterations,learningrate);
 Y_prediction_test = predict(w, b, X_test);
 Y_prediction_train = predict(w, b, X_train);
 %Print train/test accuracy
 disp(strcat('test accuracy:',num2str(1-mean(abs(Y_prediction_test-Y_test)))));
 disp(strcat('train accuracy:',num2str(1-mean(abs(Y_prediction_train-Y_train)))));
 
end

Write a main script: run read to load the data, and then model can be run directly.

read
[costs,w,b,Y_prediction_test] = model(train_set_x,train_set_y,test_set_x,test_set_y,2000,0.005);
plot(costs)

In the end I used only a small subset: the training set has 79 cats and 219 non-cats, and the test set has 20 cats and 40 non-cats.
I tried different learning rates and iteration counts; a comparison loop like the sketch below makes that easy.
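
% hypothetical learning-rate comparison; the rates listed are just examples
learning_rates = [5, 0.5, 0.005];
hold on
for i = 1:length(learning_rates)
    [costs,~,~,~] = model(train_set_x,train_set_y,test_set_x,test_set_y,2000,learning_rates(i));
    plot(costs)
end
legend('5','0.5','0.005')
hold off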
2000 iterations, learning rate 5:
Cost after iterations 100 is 0.62565
Cost after iterations 200 is 0.60555
Cost after iterations 300 is 0.59197
Cost after iterations 400 is 0.58167
Cost after iterations 500 is 0.57322
Cost after iterations 600 is 0.56596
Cost after iterations 700 is 0.55952
Cost after iterations 800 is 0.55371
Cost after iterations 900 is 0.54839
Cost after iterations 1000 is 0.54347
Cost after iterations 1100 is 0.53889
Cost after iterations 1200 is 0.53459
Cost after iterations 1300 is 0.53053
Cost after iterations 1400 is 0.52667
Cost after iterations 1500 is 0.523
Cost after iterations 1600 is 0.51949
Cost after iterations 1700 is 0.51612
Cost after iterations 1800 is 0.51288
Cost after iterations 1900 is 0.50975
Cost after iterations 2000 is 0.50672
test accuracy:0.56667
train accuracy:0.75
[Figure: plot of the cost function]

The result above seems to be about as good as it got; the other settings were worse, and whatever I tried, the cost stayed fairly large.
With 10,000 iterations and learning rate 5, the cost comes down further, but test accuracy is only 50%:
Cost after iterations 10000 is 0.37341
test accuracy:0.5
train accuracy:0.85648

2000 iterations, learning rate 0.5:
Cost after iterations 2000 is 0.6054
test accuracy:0.63333
train accuracy:0.63889

All in all, these results are pretty poor. The model itself is simplistic, and the training set has problems of its own.
