NILM: Non-Intrusive Load Monitoring
Chapter 4: The RNN and DAE Algorithms in NILMTK
Kelly and Knottenbelt were the first to bring deep learning into the NILM field, in 2015 [1]; their paper introduced, among others, the RNN and DAE algorithms.
The structure of the RNN algorithm is as follows:
1. Input (length determined by appliance duration)
2. 1D conv (filter size=4, stride=1, number of filters=16, activation function=linear, border mode=same)
3. bidirectional LSTM (N=128, with peepholes)
4. bidirectional LSTM (N=256, with peepholes)
5. Fully connected (N=128, activation function=TanH)
6. Fully connected (N=1, activation function=linear)
nilmtk_contrib is a companion package to NILMTK that ships with several deep learning algorithms; it is installed in much the same way as NILMTK itself. The RNN implementation bundled with nilmtk_contrib is as follows:
def return_network(self):
    '''Creates the RNN module described in the paper'''
    model = Sequential()
    # 1D convolution
    model.add(Conv1D(16, 4, activation="linear",
                     input_shape=(self.sequence_length, 1),
                     padding="same", strides=1))
    # Bidirectional LSTMs
    model.add(Bidirectional(LSTM(128, return_sequences=True, stateful=False),
                            merge_mode='concat'))
    model.add(Bidirectional(LSTM(256, return_sequences=False, stateful=False),
                            merge_mode='concat'))
    # Fully connected layers
    model.add(Dense(128, activation='tanh'))
    model.add(Dense(1, activation='linear'))
    model.compile(loss='mse', optimizer='adam', metrics=['mse'])
    return model
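Both networks consume fixed-length windows of the aggregate (mains) signal; nilmtk_contrib builds these windows internally during preprocessing. A minimal sketch of that sliding-window construction in NumPy (the padding scheme and window length here are illustrative assumptions, not the library's exact defaults):

```python
import numpy as np

def make_windows(mains, sequence_length):
    """Pad the mains series and slice it into overlapping windows,
    one window roughly centred on each time step."""
    pad = sequence_length // 2
    padded = np.pad(mains, (pad, sequence_length - 1 - pad), mode="constant")
    # windows[i] covers padded[i : i + sequence_length]
    windows = np.stack([padded[i:i + sequence_length]
                        for i in range(len(mains))])
    # Keras Conv1D expects shape (samples, timesteps, channels)
    return windows[..., np.newaxis]

mains = np.arange(10, dtype=float)   # toy aggregate signal
X = make_windows(mains, sequence_length=5)
print(X.shape)                        # (10, 5, 1)
```

Each row of `X` is one training sample for the Conv1D input layer; the RNN predicts a single appliance value per window, while the DAE reconstructs the whole window.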
The structure of the DAE algorithm is as follows:
1. Input (length determined by appliance duration)
2. 1D conv (filter size=4, stride=1, number of filters=8, activation function=linear, border mode=valid)
3. Fully connected (N=(sequence length - 3) × 8, activation function=ReLU)
4. Fully connected (N=128; activation function=ReLU)
5. Fully connected (N=(sequence length - 3) × 8, activation function=ReLU)
6. 1D conv (filter size=4, stride=1, number of filters=1, activation function=linear, border mode=valid)
The DAE implementation bundled with nilmtk_contrib is as follows (note that it uses padding="same" rather than the paper's "valid" border mode, so its Dense layers have sequence_length × 8 units instead of (sequence length − 3) × 8):
def return_network(self):
    model = Sequential()
    model.add(Conv1D(8, 4, activation="linear",
                     input_shape=(self.sequence_length, 1),
                     padding="same", strides=1))
    model.add(Flatten())
    model.add(Dense(self.sequence_length * 8, activation='relu'))
    model.add(Dense(128, activation='relu'))
    model.add(Dense(self.sequence_length * 8, activation='relu'))
    model.add(Reshape((self.sequence_length, 8)))
    model.add(Conv1D(1, 4, activation="linear", padding="same", strides=1))
    model.compile(loss='mse', optimizer='adam')
    return model
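The difference between the paper's (sequence length − 3) × 8 and the implementation's sequence_length × 8 comes entirely from the padding mode: a "valid" convolution with filter size 4 shortens the sequence by 3 samples, while "same" padding preserves its length. A quick check of the arithmetic (the sequence length of 100 is just an example):

```python
def conv1d_out_len(length, filter_size, padding, stride=1):
    """Output length of a 1-D convolution (the same rule Keras applies)."""
    if padding == "same":
        return -(-length // stride)                      # ceil(length / stride)
    if padding == "valid":
        return -(-(length - filter_size + 1) // stride)  # ceil((L - f + 1) / s)
    raise ValueError(padding)

L = 100  # an illustrative sequence_length
print(conv1d_out_len(L, 4, "valid"))  # 97  -> Dense((L - 3) * 8), as in the paper
print(conv1d_out_len(L, 4, "same"))   # 100 -> Dense(L * 8), as in nilmtk_contrib
```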
Run the experiment:
from nilmtk.api import API
from nilmtk.disaggregate import CO, Mean, FHMMExact
from nilmtk_contrib.disaggregate import DAE, RNN

REDD1 = {
    'power': {'mains': ['apparent'], 'appliance': ['active']},
    'sample_rate': 60,
    'appliances': ['fridge', 'light'],
    'artificial_aggregate': True,
    'methods': {
        'CO': CO({}),
        'Mean': Mean({}),
        'FHMMExact': FHMMExact({'num_of_states': 3}),
        'RNN': RNN({'n_epochs': 2, 'batch_size': 32}),
        'DAE': DAE({'n_epochs': 2, 'batch_size': 32})
    },
    'train': {
        'datasets': {
            'REDD': {
                'path': 'D:/data/redd.h5',
                'buildings': {
                    1: {
                        'start_time': '2011-04-19',
                        'end_time': '2011-04-25'
                    }
                }
            }
        }
    },
    'test': {
        'datasets': {
            'REDD': {
                'path': 'D:/data/redd.h5',
                'buildings': {
                    1: {
                        'start_time': '2011-05-01',
                        'end_time': '2011-05-02'
                    }
                }
            }
        },
        'metrics': ['rmse', 'f1score']
    }
}

api_results_experiment_1 = API(REDD1)
errors_keys = api_results_experiment_1.errors_keys
errors = api_results_experiment_1.errors
list_mean_result = [err.mean() for err in errors]
ps_rmse = list_mean_result[0]
ps_f1 = list_mean_result[1]
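The API object returns one DataFrame per entry in errors_keys, with appliances as rows and methods as columns, so indexing list_mean_result by position is fragile if the metric list changes. A sketch of a more explicit lookup, using toy DataFrames and assumed key names in place of the real API output (the actual key format may differ):

```python
import pandas as pd

# Toy stand-ins for api_results_experiment_1.errors_keys / .errors;
# in practice these come from the API run above.
errors_keys = ['REDD_1_rmse', 'REDD_1_f1score']
errors = [
    pd.DataFrame({'CO': [45.0, 60.0], 'RNN': [30.0, 40.0]},
                 index=['fridge', 'light']),
    pd.DataFrame({'CO': [0.40, 0.30], 'RNN': [0.60, 0.55]},
                 index=['fridge', 'light']),
]

# Map each metric name to its per-method mean over appliances.
mean_by_metric = {key: df.mean() for key, df in zip(errors_keys, errors)}
ps_rmse = mean_by_metric['REDD_1_rmse']
ps_f1 = mean_by_metric['REDD_1_f1score']
print(ps_rmse['RNN'])   # 35.0
```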
The results are as follows:
References
[1] Kelly J., Knottenbelt W. Neural NILM: Deep Neural Networks Applied to Energy Disaggregation. In: Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-Efficient Built Environments (BuildSys '15), 2015.