The Road to Image Text Extraction - 02: Data Prediction


    Data prediction and classification are unavoidable steps on the road to extracting text from images, since text extraction requires classifying the text images. So I'm taking this opportunity to study data prediction as well. I have also been studying data classification recently, but progress has been slow, so I'll come back to it later; for now, let's start with data prediction.

    Self-study is pretty tough, and I have no one to guide me, so I'm writing things up as I go; if any expert happens to pass by, I'd appreciate your advice. The code in this article is written in Python.

    Below is a set of data relating time (the x-axis) to the number of visits to a certain page (the y-axis). Our goal is to predict the visit counts for the following two months:

    ['20150301', '20150302', '20150303', '20150304', '20150305', '20150306', '20150307', '20150308', '20150309', '20150310', '20150311', '20150312', '20150313', '20150314', '20150315', '20150316', '20150317', '20150318', '20150319', '20150320', '20150321', '20150322', '20150323', '20150324', '20150325', '20150326', '20150327', '20150328', '20150329', '20150330', '20150331', '20150401', '20150402', '20150403', '20150404', '20150405', '20150406', '20150407', '20150408', '20150409', '20150410', '20150411', '20150412', '20150413', '20150414', '20150415', '20150416', '20150417', '20150418', '20150419', '20150420', '20150421', '20150422', '20150423', '20150424', '20150425', '20150426', '20150427', '20150428', '20150429', '20150430', '20150501', '20150502', '20150503', '20150504', '20150505', '20150506', '20150507', '20150508', '20150509', '20150510', '20150511', '20150512', '20150513', '20150514', '20150515', '20150516', '20150517', '20150518', '20150519', '20150520', '20150521', '20150522', '20150523', '20150524', '20150525', '20150526', '20150527', '20150528', '20150529', '20150530', '20150531', '20150601', '20150602', '20150603', '20150604', '20150605', '20150606', '20150607', '20150608', '20150609', '20150610', '20150611', '20150612', '20150613', '20150614', '20150615', '20150616', '20150617', '20150618', '20150619', '20150620', '20150621', '20150622', '20150623', '20150624', '20150625', '20150626', '20150627', '20150628', '20150629', '20150630', '20150701', '20150702', '20150703', '20150704', '20150705', '20150706', '20150707', '20150708', '20150709', '20150710', '20150711', '20150712', '20150713', '20150714', '20150715', '20150716', '20150717', '20150718', '20150719', '20150720', '20150721', '20150722', '20150723', '20150724', '20150725', '20150726', '20150727', '20150728', '20150729', '20150730', '20150731', '20150801', '20150802', '20150803', '20150804', '20150805', '20150806', '20150807', '20150808', '20150809', '20150810', '20150811', '20150812', '20150813', '20150814', '20150815', '20150816', '20150817', '20150818', '20150819', '20150820', '20150821', '20150822', '20150823', '20150824', '20150825', '20150826', '20150827', '20150828', '20150829', '20150830']
[97, 88, 91, 95, 107, 122, 117, 117, 117, 144, 94, 89, 152, 118, 115, 179, 186, 203, 151, 176, 201, 209, 157, 171, 198, 182, 170, 164, 155, 177, 144, 199, 213, 196, 151, 324, 217, 252, 168, 167, 221, 225, 127, 183, 194, 227, 159, 183, 202, 210, 211, 164, 239, 193, 216, 302, 239, 210, 267, 231, 239, 208, 210, 267, 204, 239, 240, 209, 282, 225, 215, 236, 323, 388, 459, 489, 325, 330, 277, 279, 268, 278, 290, 382, 338, 302, 209, 215, 311, 258, 240, 362, 296, 275, 285, 322, 297, 336, 308, 428, 283, 250, 300, 267, 223, 304, 262, 274, 257, 328, 258, 255, 220, 242, 254, 269, 326, 339, 243, 283, 312, 275, 240, 224, 266, 299, 309, 272, 285, 264, 296, 247, 224, 266, 256, 275, 250, 185, 271, 229, 292, 239, 241, 231, 268, 212, 187, 216, 302, 279, 321, 311, 375, 295, 254, 289, 222, 282, 256, 274, 295, 257, 275, 233, 249, 237, 241, 260, 221, 305, 227, 287, 340, 278, 245, 222, 382, 296, 307, 370, 392, 288, 272]

    First, let's look at a line chart of this data:

    [Figure: line chart of the daily page views]

    At first I searched online for data prediction algorithms and saw that many people simply answered with the least squares method. Since it is simple and easy to understand, I tried it first to see how well it works.

    The least squares method is very easy to understand. The textbook definition goes: least squares (also called the method of least squares) is a mathematical optimization technique that finds the best-fitting function for a set of data by minimizing the sum of squared errors. With it we can conveniently estimate unknown values such that the squared error between the estimates and the actual data is as small as possible. Least squares can also be used for curve fitting, and some other optimization problems can be cast as least-squares problems by minimizing an energy or maximizing an entropy. In short, it boils down to minimizing the sum of squared errors.

    Let's start with the simplest case: fitting a straight line. Take a line in the plane, y = ax + b, where b is the intercept. We want to find the line that best fits the data, i.e. find a and b such that the squared error between the y values predicted from each x and the actual y values is as small as possible. (The full derivation will be added later.)
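
    For reference (the post defers the derivation), the standard textbook result for the straight-line case is as follows: the quantity being minimized over the n data points is the sum of squared residuals S, and setting its partial derivatives with respect to a and b to zero gives the closed-form coefficients (in LaTeX notation):

S(a, b) = \sum_{i=1}^{n} \bigl( y_i - (a x_i + b) \bigr)^2

a = \frac{n \sum_i x_i y_i - \sum_i x_i \sum_i y_i}{n \sum_i x_i^2 - \bigl( \sum_i x_i \bigr)^2},
\qquad
b = \bar{y} - a \bar{x}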

    

#encoding: utf-8 
import numpy as np
from scipy.optimize import leastsq
import pylab as pl

m = 2  # number of polynomial coefficients; the fitted curve has degree m - 1

# build a polynomial from the coefficient vector p and evaluate it at x;
# optionally print the fitted polynomial
def fake_func(p, x, show=0):
    f = np.poly1d(p)
    if show == 1:
        print(f)
    return f(x)

# residuals: difference between the observed y and the polynomial's prediction
def residuals(p, y, x):
    return y - fake_func(p, x)

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183]
y = [97, 88, 91, 95, 107, 122, 117, 117, 117, 144, 94, 89, 152, 118, 115, 179, 186, 203, 151, 176, 201, 209, 157, 171, 198, 182, 170, 164, 155, 177, 144, 199, 213, 196, 151, 324, 217, 252, 168, 167, 221, 225, 127, 183, 194, 227, 159, 183, 202, 210, 211, 164, 239, 193, 216, 302, 239, 210, 267, 231, 239, 208, 210, 267, 204, 239, 240, 209, 282, 225, 215, 236, 323, 388, 459, 489, 325, 330, 277, 279, 268, 278, 290, 382, 338, 302, 209, 215, 311, 258, 240, 362, 296, 275, 285, 322, 297, 336, 308, 428, 283, 250, 300, 267, 223, 304, 262, 274, 257, 328, 258, 255, 220, 242, 254, 269, 326, 339, 243, 283, 312, 275, 240, 224, 266, 299, 309, 272, 285, 264, 296, 247, 224, 266, 256, 275, 250, 185, 271, 229, 292, 239, 241, 231, 268, 212, 187, 216, 302, 279, 321, 311, 375, 295, 254, 289, 222, 282, 256, 274, 295, 257, 275, 233, 249, 237, 241, 260, 221, 305, 227, 287, 340, 278, 245, 222, 382, 296, 307, 370, 392, 288, 272]


p0 = np.random.randn(m)                     # random initial guess for the m coefficients
plsq = leastsq(residuals, p0, args=(y, x))  # least-squares fit; plsq[0] holds the coefficients


pl.plot(x, y, label='real data')
# extend x by 60 days so the fitted curve also covers the forecast period
for i in range(1, 61):
    x.append(183 + i)
newy = fake_func(plsq[0], x, 1)  # print the fitted polynomial and evaluate it on all days
pl.plot(x, newy, label='forecast data')

pl.legend()
pl.show()
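
    As a quick sanity check (my own addition, not part of the original script), NumPy's built-in polyfit solves the same least-squares problem in closed form and should land on essentially the same coefficients. It reuses the x and y lists from the script above:

import numpy as np

# cross-check with numpy's closed-form polynomial fit;
# x was extended to 243 entries above, so use only the 183 observed days
slope, intercept = np.polyfit(x[:183], y, 1)
print(slope, intercept)   # should roughly match the coefficients found by leastsq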

    The fitted line comes out to y = 0.79x + 171.7; the extra segment at the end is the forecast. The result is shown in the figure below:

    [Figure: linear fit over the observed data plus the 60-day forecast]

    The figure above does capture the upward trend, but the error is still fairly large, so let's try a curve fit with a higher-degree polynomial. The higher the degree, the more closely the curve fits the observed points, but the easier it is to overfit (I suspect this has to do with the noise in the data; I still need to dig into exactly why). A rough check is sketched below.
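
    One rough way to see the effect (again my own sketch, not something from the original post) is to hold out the last few weeks, fit polynomials of increasing degree on the earlier days only, and compare the error on seen versus unseen days; the held-out error is the one that eventually worsens when the model overfits. The snippet reuses the x and y lists from the script above:

import numpy as np

# overfitting check: fit on the first 150 observed days, evaluate on the last 33
x_obs = np.array(x[:183], dtype=float)
y_obs = np.array(y, dtype=float)
x_train, y_train = x_obs[:150], y_obs[:150]
x_test, y_test = x_obs[150:], y_obs[150:]

for degree in (1, 2, 5, 10):
    poly = np.poly1d(np.polyfit(x_train, y_train, degree))
    train_rmse = np.sqrt(np.mean((poly(x_train) - y_train) ** 2))
    test_rmse = np.sqrt(np.mean((poly(x_test) - y_test) ** 2))
    print(degree, round(train_rmse, 1), round(test_rmse, 1))

# training error keeps shrinking as the degree grows, while the held-out error
# usually stops improving and then gets worse; that is overfitting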

    Let's first try a three-coefficient fit, i.e. a quadratic: change m in the code above to 3. The fitted function is y = -0.0114x² + 2.887x + 107.1, and the resulting plot is shown below:

    [Figure: quadratic fit over the observed data plus the 60-day forecast]

    The result is still not great. We'll keep studying; next we'll try exponential smoothing from time-series forecasting and see how it does.
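
    As a small preview (my own sketch, not the code of the upcoming post), simple exponential smoothing replaces each point with a weighted average of the newest observation and the previous smoothed value; the smoothing factor alpha below is an arbitrary choice:

def exp_smooth(series, alpha=0.3):
    # simple exponential smoothing: s[t] = alpha * y[t] + (1 - alpha) * s[t-1]
    smoothed = [series[0]]
    for value in series[1:]:
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed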

  

