Machine Learning Experiment (11): Localization from WiFi Fingerprints with Autoencoders and a Neural Network, Part 2 (Keras version)


Notice: All rights reserved. To reproduce this article, please contact the author and credit the source: http://blog.csdn.net/u013719780?viewmode=contents


The previous experiment, Machine Learning Experiment (10): Localization from WiFi Fingerprints with Autoencoders and a Neural Network, Part 1 (TensorFlow version), was implemented in TensorFlow; this post implements the same model in Keras. The full code is below:



import pandas as pd
import numpy as np
from sklearn.preprocessing import scale

# Training set: the first 520 columns are the WiFi RSSI readings (the
# fingerprint); standardize them to zero mean and unit variance.
dataset = pd.read_csv("trainingData.csv", header=0)
features = scale(np.asarray(dataset.ix[:, 0:520]))

# The label is the concatenation of building ID and floor, one-hot encoded.
labels = np.asarray(dataset["BUILDINGID"].map(str) + dataset["FLOOR"].map(str))
labels = np.asarray(pd.get_dummies(labels))

# Random ~70/30 split into training and validation sets.
train_val_split = np.random.rand(len(features)) < 0.70
train_x = features[train_val_split]
train_y = labels[train_val_split]
val_x = features[~train_val_split]
val_y = labels[~train_val_split]

# validationData.csv serves as the held-out test set.
test_dataset = pd.read_csv("validationData.csv", header=0)
test_features = scale(np.asarray(test_dataset.ix[:, 0:520]))
test_labels = np.asarray(test_dataset["BUILDINGID"].map(str) + test_dataset["FLOOR"].map(str))
test_labels = np.asarray(pd.get_dummies(test_labels))
/Applications/anaconda/lib/python2.7/site-packages/sklearn/utils/validation.py:420: DataConversionWarning: Data with input dtype int64 was converted to float64 by the scale function.
  warnings.warn(msg, DataConversionWarning)
/Applications/anaconda/lib/python2.7/site-packages/sklearn/utils/validation.py:420: DataConversionWarning: Data with input dtype int64 was converted to float64 by the scale function.
  warnings.warn(msg, DataConversionWarning)
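The DataConversionWarning above is harmless: scale() merely casts the int64 RSSI readings to float64. Note also that the script uses the pandas and Keras APIs of its time (Python 2.7, Theano backend): DataFrame.ix has since been removed from pandas, and the bias=/nb_epoch= keyword names below belong to Keras 1 (use_bias=/epochs= in Keras 2). A minimal sketch of the same preprocessing against current pandas, assuming the same CSV layout, would be:

import pandas as pd
from sklearn.preprocessing import scale

dataset = pd.read_csv("trainingData.csv", header=0)
# .iloc replaces the removed .ix for positional column slicing; casting to
# float up front also silences the DataConversionWarning.
features = scale(dataset.iloc[:, 0:520].to_numpy(dtype=float))
labels = pd.get_dummies(dataset["BUILDINGID"].map(str) + dataset["FLOOR"].map(str)).to_numpy()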
 
    

from keras.models import Sequential
from keras.layers import Dense
import time

nb_epochs = 20    # epochs for both the autoencoder and the classifier stage
batch_size = 10
input_size = 520  # one input per wireless access point
num_classes = 13  # number of distinct building+floor combinations


def encoder():
    # Encoder half of the autoencoder: 520 -> 256 -> 128 -> 64.
    model = Sequential()
    model.add(Dense(256, input_dim=input_size, activation='tanh', bias=True))
    model.add(Dense(128, activation='tanh', bias=True))
    model.add(Dense(64, activation='tanh', bias=True))
    return model



def decoder(e):
    # Decoder half, stacked on the encoder: 64 -> 128 -> 256 -> 520.
    # The full model is trained to reconstruct its input (MSE loss).
    e.add(Dense(128, input_dim=64, activation='tanh', bias=True))
    e.add(Dense(256, activation='tanh', bias=True))
    e.add(Dense(input_size, activation='tanh', bias=True))
    e.compile(optimizer='adam', loss='mse')
    return e
    



e = encoder()

d = decoder(e)

# Unsupervised pre-training: learn to reconstruct the scaled fingerprints.
d.fit(train_x, train_x, nb_epoch=nb_epochs, batch_size=batch_size, verbose=2)
time.sleep(0.1)
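Optionally, one can check the reconstruction error on the validation split before repurposing the network; this check is illustrative and was not part of the original script:

# Mean squared reconstruction error of the autoencoder (lower is better).
recon_mse = d.evaluate(val_x, val_x, verbose=0)
print recon_mse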

def classifier(d):
    # Remove the three decoder layers, keeping only the pre-trained
    # 520 -> 256 -> 128 -> 64 encoder...
    num_to_remove = 3
    for i in range(num_to_remove):
        d.pop()
    # ...then stack a fresh classification head on top: 64 -> 128 -> 128 -> 13.
    d.add(Dense(128, input_dim=64, activation='tanh', bias=True))
    d.add(Dense(128, activation='tanh', bias=True))
    d.add(Dense(num_classes, activation='softmax', bias=True))
    d.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return d


c = classifier(d)
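Since Sequential.pop() modifies the model in place, c is the same underlying network as d, now ending in the softmax head. A quick, purely illustrative sanity check of the resulting stack:

c.summary()  # expect six Dense layers: 256, 128, 64 (pre-trained) + 128, 128, 13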


# Fine-tune the whole network (pre-trained encoder + new head) on the
# building+floor classification task.
c.fit(train_x, train_y, validation_data=(val_x, val_y), nb_epoch=nb_epochs, batch_size=batch_size, verbose=2)

time.sleep(0.1)

# Final evaluation on the held-out validationData.csv set.
loss, acc = c.evaluate(test_features, test_labels, verbose=0)
time.sleep(0.1)

print loss, acc  # Python 2 print statement
Using Theano backend.
Epoch 1/20
6s - loss: 0.7049
Epoch 2/20
6s - loss: 0.6808
Epoch 3/20
5s - loss: 0.6752
Epoch 4/20
5s - loss: 0.6724
Epoch 5/20
5s - loss: 0.6703
Epoch 6/20
5s - loss: 0.6685
Epoch 7/20
5s - loss: 0.6670
Epoch 8/20
5s - loss: 0.6656
Epoch 9/20
5s - loss: 0.6641
Epoch 10/20
5s - loss: 0.6630
Epoch 11/20
5s - loss: 0.6619
Epoch 12/20
5s - loss: 0.6610
Epoch 13/20
5s - loss: 0.6599
Epoch 14/20
5s - loss: 0.6593
Epoch 15/20
5s - loss: 0.6584
Epoch 16/20
5s - loss: 0.6578
Epoch 17/20
5s - loss: 0.6571
Epoch 18/20
5s - loss: 0.6565
Epoch 19/20
5s - loss: 0.6560
Epoch 20/20
5s - loss: 0.6555
Train on 13945 samples, validate on 5992 samples
Epoch 1/20
3s - loss: 0.3205 - acc: 0.8881 - val_loss: 0.1862 - val_acc: 0.9356
Epoch 2/20
3s - loss: 0.1333 - acc: 0.9558 - val_loss: 0.1674 - val_acc: 0.9513
Epoch 3/20
4s - loss: 0.1072 - acc: 0.9645 - val_loss: 0.1554 - val_acc: 0.9449
Epoch 4/20
3s - loss: 0.0860 - acc: 0.9717 - val_loss: 0.1836 - val_acc: 0.9383
Epoch 5/20
3s - loss: 0.0752 - acc: 0.9750 - val_loss: 0.1699 - val_acc: 0.9534
Epoch 6/20
3s - loss: 0.0691 - acc: 0.9770 - val_loss: 0.1610 - val_acc: 0.9554
Epoch 7/20
3s - loss: 0.0637 - acc: 0.9796 - val_loss: 0.1886 - val_acc: 0.9489
Epoch 8/20
4s - loss: 0.0601 - acc: 0.9814 - val_loss: 0.1604 - val_acc: 0.9569
Epoch 9/20
4s - loss: 0.0589 - acc: 0.9812 - val_loss: 0.1312 - val_acc: 0.9606
Epoch 10/20
4s - loss: 0.0496 - acc: 0.9826 - val_loss: 0.1882 - val_acc: 0.9488
Epoch 11/20
4s - loss: 0.0489 - acc: 0.9823 - val_loss: 0.1662 - val_acc: 0.9541
Epoch 12/20
3s - loss: 0.0517 - acc: 0.9833 - val_loss: 0.1311 - val_acc: 0.9613
Epoch 13/20
3s - loss: 0.0470 - acc: 0.9841 - val_loss: 0.2039 - val_acc: 0.9474
Epoch 14/20
4s - loss: 0.0495 - acc: 0.9835 - val_loss: 0.1874 - val_acc: 0.9543
Epoch 15/20
3s - loss: 0.0401 - acc: 0.9872 - val_loss: 0.1639 - val_acc: 0.9503
Epoch 16/20
4s - loss: 0.0466 - acc: 0.9857 - val_loss: 0.1649 - val_acc: 0.9593
Epoch 17/20
3s - loss: 0.0423 - acc: 0.9859 - val_loss: 0.1639 - val_acc: 0.9574
Epoch 18/20
3s - loss: 0.0369 - acc: 0.9875 - val_loss: 0.1536 - val_acc: 0.9608
Epoch 19/20
3s - loss: 0.0416 - acc: 0.9863 - val_loss: 0.1624 - val_acc: 0.9619
Epoch 20/20
3s - loss: 0.0362 - acc: 0.9882 - val_loss: 0.1545 - val_acc: 0.9609
1.25824190562 0.747974797694

Note: calling the fit method initially raised ValueError: I/O operation on closed file. This appears to be an I/O bug in IPython Notebook, and the workaround is simple: pass verbose=0 or verbose=2 (the default is 1). verbose controls logging output: 0 writes nothing to stdout, 1 prints a progress bar, and 2 prints one line per epoch.
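For reference, the three modes on any fit/evaluate call (model, x, y stand in for any compiled Keras model and its data):

model.fit(x, y, verbose=0)  # silent
model.fit(x, y, verbose=1)  # progress bar (default; triggers the notebook bug)
model.fit(x, y, verbose=2)  # one line per epoch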

