NILM: Non-Intrusive Load Monitoring

Chapter 4: RNN and DAE in NILMTK



In 2015, Kelly and Knottenbelt were the first to apply deep learning to NILM, introducing, among others, the RNN and DAE networks [1].
The structure of the RNN network is as follows:

1. Input (length determined by appliance duration) 
2. 1D conv (filter size=4, stride=1, number of filters=16, activation function=linear, border mode=same) 
3. bidirectional LSTM (N=128, with peepholes) 
4. bidirectional LSTM (N=256, with peepholes) 
5. Fully connected (N=128, activation function=TanH) 
6. Fully connected (N=1, activation function=linear)

nilmtk_contrib is a companion package to NILMTK that ships with several deep learning disaggregation algorithms; it is installed in the same way as NILMTK. The RNN implementation bundled with nilmtk_contrib is as follows:

    def return_network(self):
        '''Creates the RNN module described in the paper
        '''
        model = Sequential()

        # 1D Conv
        model.add(Conv1D(16,4,activation="linear",input_shape=(self.sequence_length,1),padding="same",strides=1))

        # Bi-directional LSTMs
        model.add(Bidirectional(LSTM(128,return_sequences=True,stateful=False),merge_mode='concat'))
        model.add(Bidirectional(LSTM(256,return_sequences=False,stateful=False),merge_mode='concat'))

        # Fully Connected Layers
        model.add(Dense(128, activation='tanh'))
        model.add(Dense(1, activation='linear'))

        model.compile(loss='mse', optimizer='adam', metrics=['mse'])

        return model
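
The method above is defined inside nilmtk_contrib's RNN class, so it relies on self.sequence_length and on Keras imports made elsewhere in the package. The following is a self-contained sketch of the same topology that can be run on its own; it assumes TensorFlow 2.x Keras, and the window length of 99 is only an illustrative value (in nilmtk_contrib it comes from the class's sequence_length parameter):

    # Standalone sketch of the RNN topology (assumes TensorFlow 2.x Keras;
    # sequence_length = 99 is a hypothetical window length, not a fixed value).
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv1D, Bidirectional, LSTM, Dense

    sequence_length = 99

    rnn = Sequential()
    # 1D convolution over the aggregate power window
    rnn.add(Conv1D(16, 4, activation="linear",
                   input_shape=(sequence_length, 1), padding="same", strides=1))
    # Two stacked bidirectional LSTMs; the second returns only its last output
    rnn.add(Bidirectional(LSTM(128, return_sequences=True, stateful=False), merge_mode='concat'))
    rnn.add(Bidirectional(LSTM(256, return_sequences=False, stateful=False), merge_mode='concat'))
    # Fully connected layers mapping to a single appliance-power estimate
    rnn.add(Dense(128, activation='tanh'))
    rnn.add(Dense(1, activation='linear'))
    rnn.compile(loss='mse', optimizer='adam', metrics=['mse'])

    rnn.summary()  # one output value per input window (sequence-to-point)

The summary makes it easy to confirm that this network maps a whole window of aggregate power to a single appliance-power value.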

The structure of the DAE network is as follows:

1. Input (length determined by appliance duration) 
2. 1D conv (filter size=4, stride=1, number of filters=8, activation function=linear, border mode=valid) 
3. Fully connected (N=(sequence length - 3) × 8, activation function=ReLU) 
4. Fully connected (N=128; activation function=ReLU) 
5. Fully connected (N=(sequence length - 3) × 8, activation function=ReLU) 
6. 1D conv (filter size=4, stride=1, number of filters=1, activation function=linear, border mode=valid)

The DAE implementation bundled with nilmtk_contrib is shown below. Note that it uses padding="same" rather than the paper's "valid" border mode, so the fully connected layers have sequence_length × 8 units instead of (sequence length - 3) × 8:

    def return_network(self):
        '''Creates the denoising autoencoder described in the paper
        '''
        model = Sequential()

        # 1D Conv over the input window
        model.add(Conv1D(8, 4, activation="linear", input_shape=(self.sequence_length, 1), padding="same", strides=1))

        # Fully connected encoder -> bottleneck -> decoder
        model.add(Flatten())
        model.add(Dense((self.sequence_length)*8, activation='relu'))
        model.add(Dense(128, activation='relu'))
        model.add(Dense((self.sequence_length)*8, activation='relu'))
        model.add(Reshape(((self.sequence_length), 8)))

        # 1D Conv producing one output value per time step
        model.add(Conv1D(1, 4, activation="linear", padding="same", strides=1))

        model.compile(loss='mse', optimizer='adam')

        return model
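
As a quick sanity check, a dummy batch can be passed through the same topology. This is again only a sketch, assuming TensorFlow 2.x Keras and an illustrative window length of 99; it shows that the DAE is sequence-to-sequence (one output value per time step of the input window), while the RNN above is sequence-to-point (one value per window):

    # Shape check for the DAE topology (sequence_length = 99 is an assumed value).
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv1D, Flatten, Dense, Reshape

    sequence_length = 99

    dae = Sequential([
        Conv1D(8, 4, activation="linear",
               input_shape=(sequence_length, 1), padding="same", strides=1),
        Flatten(),
        Dense(sequence_length * 8, activation='relu'),   # encoder
        Dense(128, activation='relu'),                   # bottleneck
        Dense(sequence_length * 8, activation='relu'),   # decoder
        Reshape((sequence_length, 8)),
        Conv1D(1, 4, activation="linear", padding="same", strides=1),
    ])
    dae.compile(loss='mse', optimizer='adam')

    x = np.random.rand(2, sequence_length, 1)  # two dummy aggregate windows
    print(dae.predict(x).shape)                # (2, 99, 1): one value per time step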

Running the program:

    from nilmtk.api import API
    from nilmtk.disaggregate import CO, Mean, FHMMExact
    from nilmtk_contrib.disaggregate import DAE, RNN

    # Experiment definition for the NILMTK API: train and test on REDD building 1
    REDD1 = {
        'power': {'mains': ['apparent'], 'appliance': ['active']},
        'sample_rate': 60,
        'appliances': ['fridge', 'light'],
        'artificial_aggregate': True,
        'methods': {
            'CO': CO({}),
            'Mean': Mean({}),
            'FHMMExact': FHMMExact({'num_of_states': 3}),
            'RNN': RNN({'n_epochs': 2, 'batch_size': 32}),
            'DAE': DAE({'n_epochs': 2, 'batch_size': 32})
        },
        'train': {
            'datasets': {
                'REDD': {
                    'path': 'D:/data/redd.h5',
                    'buildings': {
                        1: {
                            'start_time': '2011-04-19',
                            'end_time': '2011-04-25'
                        }
                    }
                }
            }
        },
        'test': {
            'datasets': {
                'REDD': {
                    'path': 'D:/data/redd.h5',
                    'buildings': {
                        1: {
                            'start_time': '2011-05-01',
                            'end_time': '2011-05-02'
                        }
                    }
                }
            },
            'metrics': ['rmse', 'f1score']
        }
    }

    # Run the experiment
    api_results_experiment_1 = API(REDD1)

    # Error metrics computed on the test set
    errors_keys = api_results_experiment_1.errors_keys
    errors = api_results_experiment_1.errors

    # Mean of each error table (one table per metric)
    list_mean_result = [err.mean() for err in errors]
    ps_rmse = list_mean_result[0]
    ps_f1 = list_mean_result[1]
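
To look at the per-appliance numbers rather than only the averages, the results can be printed directly. This sketch assumes, in line with the NILMTK experiment API, that errors is a list of pandas DataFrames aligned with errors_keys; the exact row/column layout should be checked on the actual output:

    # Print every error table produced by the run above.
    # Assumption: each entry of `errors` is a pandas DataFrame whose rows are
    # appliances and whose columns are the disaggregation methods.
    for key, err in zip(errors_keys, errors):
        print(key)          # label for the dataset/building/metric combination
        print(err)          # per-appliance error for every method
        print(err.mean())   # the averages used above
        print()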

The results are as follows:

[Figure: rmse and f1score results for each algorithm]

References

[1] Kelly, J., Knottenbelt, W. Neural NILM: Deep Neural Networks Applied to Energy Disaggregation. In Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-Efficient Built Environments (BuildSys '15), ACM, 2015.
