The Apriori Algorithm and Temporal Patterns

I. Experiment Objectives and Requirements

Master common association-rule and temporal-pattern mining methods.

II. Experiment Tasks

1. Understand common association-rule algorithms.

2. Master the Apriori algorithm.

3. Understand temporal patterns and master the common time-series algorithms.
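Before looking at the full Apriori implementation below, it helps to recall the two measures it is built on. The following is a minimal sketch with a made-up five-transaction dataset (hypothetical data, not the menu_orders file used later) that computes the support of an itemset and the confidence of a rule:

```python
# Hypothetical toy transactions: each is a set of purchased items
transactions = [
    {'a', 'b', 'c'},
    {'a', 'b'},
    {'a', 'c'},
    {'b', 'c'},
    {'a', 'b', 'c'},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent, transactions):
    """support(antecedent | consequent) / support(antecedent)."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

print(support({'a', 'b'}, transactions))       # 0.6  (3 of 5 transactions)
print(confidence({'a'}, {'b'}, transactions))  # 0.75 (3/5 divided by 4/5)
```

The `find_rule` function later in this report keeps exactly the itemsets whose support and rules whose confidence exceed the chosen thresholds.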

III. Preparation

1. Basic Python syntax.

2. Familiarity with the PyCharm development environment.

IV. Experiment Content

1. Implement the Apriori calling code.

2. Implement the ARIMA model code.

3. Implement the outlier-detection code.

V. Experiment Procedure

1. Apriori calling code:

# -*- coding: utf-8 -*-
import pandas as pd
from Apriori import find_rule

inputfile = 'E://PycharmProjects/data/menu_orders.xls'
outputfile = 'E://PycharmProjects/tmp/apriori_rules.xls'
data = pd.read_excel(inputfile, header=None)

print(u'\nConverting raw data to a 0-1 matrix...')
# Each row of items becomes a Series of 1s indexed by the item names
ct = lambda x: pd.Series(1, index=x[pd.notnull(x)])
b = map(ct, data.values)  # DataFrame.as_matrix() was removed; use .values
data = pd.DataFrame(list(b)).fillna(0)
print(u'Conversion finished.')
del b

support = 0.2     # minimum support
confidence = 0.5  # minimum confidence
ms = '---'        # separator used when joining item names
find_rule(data, support, confidence, ms).to_excel(outputfile)

Apriori.py

# -*- coding: utf-8 -*-
import pandas as pd

def connect_string(x, ms):
    """Join frequent k-itemsets sharing their first k-1 items into candidate (k+1)-itemsets."""
    x = list(map(lambda i: sorted(i.split(ms)), x))
    l = len(x[0])
    r = []
    for i in range(len(x)):
        for j in range(i, len(x)):
            if x[i][:l - 1] == x[j][:l - 1] and x[i][l - 1] != x[j][l - 1]:
                r.append(x[i][:l - 1] + sorted([x[j][l - 1], x[i][l - 1]]))
    return r

def find_rule(d, support, confidence, ms=u'--'):
    result = pd.DataFrame(index=['support', 'confidence'])  # output container
    support_series = 1.0 * d.sum() / len(d)  # support of the single items
    column = list(support_series[support_series > support].index)  # frequent 1-itemsets
    k = 0
    while len(column) > 1:
        k = k + 1
        print(u'\nSearch pass %s...' % k)
        column = connect_string(column, ms)
        print(u'Number of candidates: %s...' % len(column))
        sf = lambda i: d[i].prod(axis=1, numeric_only=True)  # 1 only where all items co-occur
        d_2 = pd.DataFrame(list(map(sf, column)), index=[ms.join(i) for i in column]).T
        support_series_2 = 1.0 * d_2[[ms.join(i) for i in column]].sum() / len(d)
        column = list(support_series_2[support_series_2 > support].index)  # frequent k-itemsets
        # Series.append was removed in pandas 2.0; use pd.concat instead
        support_series = pd.concat([support_series, support_series_2])
        column2 = []
        for i in column:  # enumerate candidate rules: rotate each item into consequent position
            i = i.split(ms)
            for j in range(len(i)):
                column2.append(i[:j] + i[j + 1:] + i[j:j + 1])
        confidence_series = pd.Series(index=[ms.join(i) for i in column2], dtype='float64')
        for i in column2:  # compute the confidence of each candidate rule
            confidence_series[ms.join(i)] = support_series[ms.join(sorted(i))] / support_series[ms.join(i[:len(i) - 1])]
        for i in confidence_series[confidence_series > confidence].index:  # keep rules above the threshold
            result[i] = 0.0
            result.loc['confidence', i] = confidence_series[i]
            result.loc['support', i] = support_series[ms.join(sorted(i.split(ms)))]
    result = result.T.sort_values(['confidence', 'support'], ascending=False)
    print(u'\nResults:')
    print(result)
    return result  # the caller writes this DataFrame to Excel
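The 0-1 conversion in the driver script is worth verifying in isolation. A sketch with a tiny hand-made order table (hypothetical data; the real script reads menu_orders.xls) shows that the resulting column means are exactly the single-item supports:

```python
import pandas as pd

# Hypothetical raw orders, one row per order, None-padded like the Excel sheet
data = pd.DataFrame([['a', 'b', None], ['b', 'c', None], ['a', 'b', 'c']])

# Same transform as in the driver: put a 1 under every item an order contains
ct = lambda x: pd.Series(1, index=x[pd.notnull(x)])
matrix = pd.DataFrame(list(map(ct, data.values))).fillna(0)

print(matrix)            # columns 'a', 'b', 'c'; rows are orders
print(matrix['b'].mean())  # 1.0 — 'b' appears in all three orders
```

Because each column holds 1 where the item occurs and 0 elsewhere, `matrix.sum() / len(matrix)` in `find_rule` is the support of each single item.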


2. ARIMA model code:

# -*- coding: utf-8 -*-
import pandas as pd
import matplotlib.pyplot as plt

discfile = 'E://PycharmProjects/data/arima_data.xls'
forecastnum = 5
data = pd.read_excel(discfile, index_col=u'日期')

plt.rcParams['font.sans-serif'] = ['SimHei']  # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False
data.plot()  # time-series plot of the raw data
plt.show()

from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
plot_acf(data).show()  # autocorrelation plot of the raw series

from statsmodels.tsa.stattools import adfuller as ADF
print(u'ADF test on the raw series:', ADF(data[u'销量']))

D_data = data.diff().dropna()  # first-order differencing
D_data.columns = [u'销量']     # (the original `D_data=data.columns=[...]` was a typo)
D_data.plot()
plt.show()
plot_acf(D_data).show()
plot_pacf(D_data).show()
print(u'ADF test on the differenced series:', ADF(D_data[u'销量']))

from statsmodels.stats.diagnostic import acorr_ljungbox
print(u'White-noise (Ljung-Box) test on the differenced series:', acorr_ljungbox(D_data, lags=1))

# statsmodels.tsa.arima_model.ARIMA was removed; use the current module instead
from statsmodels.tsa.arima.model import ARIMA

pmax = int(len(D_data) / 10)  # search range for the AR order p
qmax = int(len(D_data) / 10)  # search range for the MA order q
bic_matrix = []               # BIC of every (p, 1, q) candidate

for p in range(pmax + 1):
    tmp = []
    for q in range(qmax + 1):
        try:
            tmp.append(ARIMA(data, order=(p, 1, q)).fit().bic)
        except Exception:
            tmp.append(None)  # mark combinations that fail to converge
    bic_matrix.append(tmp)

# Pick the (p, q) pair with the smallest BIC and fit the final model
bic_matrix = pd.DataFrame(bic_matrix)
p, q = bic_matrix.astype('float64').stack().idxmin()
print(u'Minimum-BIC p and q: %s, %s' % (p, q))
model = ARIMA(data, order=(p, 1, q)).fit()
print(model.summary())
print(model.forecast(forecastnum))  # forecast the next 5 periods
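The differencing step above (`data.diff().dropna()`) is what turns a trending series into one an ARMA model can handle. A sketch on a short synthetic series (made-up numbers, not the arima_data.xls sales data) shows the effect:

```python
import pandas as pd

# Hypothetical series with a clear upward trend plus small wiggles
s = pd.Series([10.0, 12.0, 14.5, 16.0, 18.5, 20.0])

d = s.diff().dropna()  # same call as in the script: first differences
print(list(d))  # [2.0, 2.5, 1.5, 2.5, 1.5] — the trend is gone, values hover around 2
```

After one difference the series fluctuates around a constant, which is why the script re-runs the ADF test on `D_data` and fixes the ARIMA differencing order at d = 1.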

3. Outlier-detection code:

# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd

if __name__ == '__main__':
    inputfile = 'E://PycharmProjects/data/consumption_data.xls'
    k = 3            # number of clusters
    threshold = 2    # relative-distance threshold for flagging outliers
    iteration = 500  # maximum number of K-Means iterations
    data = pd.read_excel(inputfile, index_col='Id')
    data_zs = 1.0 * (data - data.mean()) / data.std()  # z-score standardisation

    from sklearn.cluster import KMeans
    # the n_jobs parameter was removed from KMeans in scikit-learn 0.25
    model = KMeans(n_clusters=k, max_iter=iteration)
    model.fit(data_zs)

    # attach the cluster label to each standardised record
    r = pd.concat([data_zs, pd.Series(model.labels_, index=data.index)], axis=1)
    r.columns = list(data.columns) + [u'聚类类别']

    norm = []
    for i in range(k):
        # distance of each point in cluster i to its centroid
        norm_tmp = r[['R', 'F', 'M']][r[u'聚类类别'] == i] - model.cluster_centers_[i]
        norm_tmp = norm_tmp.apply(np.linalg.norm, axis=1)
        norm.append(norm_tmp / norm_tmp.median())  # normalise by the cluster's median distance
    norm = pd.concat(norm)

    import matplotlib.pyplot as plt
    plt.rcParams['font.sans-serif'] = ['SimHei']
    plt.rcParams['axes.unicode_minus'] = False

    norm[norm <= threshold].plot(style='go')  # normal points in green
    discrete_points = norm[norm > threshold]  # outliers: relative distance above the threshold
    discrete_points.plot(style='ro')          # outliers in red
    for i in range(len(discrete_points)):
        pid = discrete_points.index[i]  # renamed from `id`, which shadows the builtin
        n = discrete_points.iloc[i]
        plt.annotate('(%s, %0.2f)' % (pid, n), xy=(pid, n), xytext=(pid, n))
    plt.xlabel(u'编号')      # record id
    plt.ylabel(u'相对距离')  # relative distance
    plt.show()
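The outlier rule in the script — distance to the cluster centroid divided by the cluster's median distance, flagged when it exceeds `threshold` — can be sketched on a single hand-made cluster, with no K-Means needed (hypothetical points and a fixed centroid):

```python
import numpy as np

# Hypothetical cluster: points near the origin plus one far-away point
points = np.array([[0.1, 0.0], [0.0, 0.2], [-0.1, -0.1], [0.2, 0.1], [3.0, 3.0]])
center = np.array([0.0, 0.0])  # stand-in for a K-Means centroid
threshold = 2

dist = np.linalg.norm(points - center, axis=1)  # distance of each point to the centroid
rel = dist / np.median(dist)                    # relative distance, as in the script
outliers = np.where(rel > threshold)[0]         # indices flagged as outliers
print(outliers)  # [4] — only the far-away point exceeds the threshold
```

Dividing by the median makes the rule scale-free: a point is an outlier relative to how spread out its own cluster is, so the same `threshold` works for tight and loose clusters alike.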

Experiment Results

1. Apriori calling code:

2. ARIMA model code:

3. Outlier-detection code:

VI. Summary and Reflections

Through this experiment I learned the basics of mining and modelling in Python: reading data from Excel files with pandas, specifying an output path with outputfile and writing results back with data.to_excel(outputfile), and importing matplotlib.pyplot to draw figures that present the analysis results more intuitively. The experiment also deepened my appreciation of Python itself: the language is simple and clear, yet very powerful. In future study I will listen carefully in class, read and analyse the code closely, then write it out myself, find and fix errors promptly, and actively discuss with the teacher and classmates any problems I cannot solve on my own. Only in this way can I learn this course well.


 
