[Paper Reading] Handling Variable-Dimensional Time Series with Graph Neural Networks


The paper Handling Variable-Dimensional Time Series with Graph Neural Networks comes from the AI4IoT workshop at IJCAI'20.

Problem Statement

The goal is to handle multivariate time series (MTS) in which the number or combination of active sensors is not fixed across instances.
Method Flowchart

(The flowchart image from the original post is not reproduced here.)

Method Details

1. Conditioning Module

The module takes an incomplete multi-sensor time series $\mathbf{x}_i\in\mathbb{R}^{d_i\times T_i}$ and, via a graph neural network, produces a conditioning vector $\mathbf{v}_{\mathcal{S}_i}\in\mathbb{R}^{1\times d_s}$; the paper sets $d_s=\frac{d}{2}$. The computation is

$$\mathbf{v}_{\mathcal{S}_i}=\max\left(\{\tilde{\mathbf{v}}_k\}_{v_k\in\mathcal{V}_i}\right)$$

where $\mathcal{V}_i$ is the node set (one node per available sensor), $\tilde{\mathbf{v}}_k$ is the updated feature vector of node $v_k$, and $\max$ is the dimension-wise maximum along $d_s$. Note that the GNN operations in the paper do not change the feature dimensionality: $d_s$ is at once the length of the sensor embedding vectors, the node vectors, and the conditioning vector.
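As an illustration, here is a minimal PyTorch sketch of this step, assuming a single round of mean-aggregation message passing over the active-sensor graph; the class name `ConditioningModule`, the choice of aggregation, and the update layer are placeholders for this note, not the authors' exact GNN architecture:

```python
import torch
import torch.nn as nn

class ConditioningModule(nn.Module):
    """Map the embeddings of the available sensors to a fixed-length
    conditioning vector v_{S_i} via message passing + dimension-wise max."""

    def __init__(self, d_s: int):
        super().__init__()
        # One update step; feature dimensionality is preserved (d_s -> d_s),
        # matching the note that the GNN does not change feature size.
        self.update = nn.Sequential(nn.Linear(2 * d_s, d_s), nn.ReLU())

    def forward(self, node_emb: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_emb: (d_i, d_s) embeddings of the d_i available sensors
        # adj:      (d_i, d_i) adjacency among those sensors
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        msg = adj @ node_emb / deg                         # mean of neighbor vectors
        v_tilde = self.update(torch.cat([node_emb, msg], dim=-1))
        return v_tilde.max(dim=0).values                   # v_{S_i}, shape (d_s,)
```

Whatever GNN variant is used, the dimension-wise max makes the output invariant to both the number and the ordering of the available sensors, which is exactly what a variable sensor set requires.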

2. Core Dynamics Module

1) Missing-Value Imputation

Mean substitution converts $\mathbf{x}_i^t\in\mathbb{R}^{d_i}$ into $\tilde{\mathbf{x}}_i^t\in\mathbb{R}^{d}$ (where $d_i<d$): each missing sensor's value is filled in with the mean of the values from the other, available sensors.
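A minimal sketch of this imputation step (the helper name `mean_impute` and the boolean-mask interface are assumptions for illustration):

```python
import torch

def mean_impute(x_t: torch.Tensor, available: torch.Tensor) -> torch.Tensor:
    """Replace missing sensor readings at one time step with the mean
    of the readings from the available sensors."""
    # x_t:       (d,) raw readings; entries of missing sensors are ignored
    # available: (d,) boolean mask, True where the sensor is present
    filled = x_t.clone()
    filled[~available] = x_t[available].mean()
    return filled
```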

2) CDM

The backbone is a recurrent neural network (RNN) whose input is a fixed-dimensional MTS, $\tilde{\mathbf{x}}_i^t\in\mathbb{R}^{d}$. This input is complete: any missing value has been replaced by the mean of the remaining available values, which is a constant at that time step. The recurrence is

$$\mathbf{z}^t_i=\text{GRU}\left([\tilde{\mathbf{x}}_i^t,\mathbf{v}_{\mathcal{S}_i}],\;\mathbf{z}^{t-1}_i;\;\theta_{GRU}\right),\quad t=1,\dots,T_i$$

The paper does not specify what $[\cdot,\cdot]$ denotes. Since a GRU cell takes the previous hidden state $\mathbf{z}^{t-1}_i$ and the current input $[\tilde{\mathbf{x}}_i^t,\mathbf{v}_{\mathcal{S}_i}]$, $[\cdot,\cdot]$ is most likely concatenation: the two vectors have lengths $d$ and $\frac{d}{2}$ respectively, so element-wise addition is implausible (no source code could be found to confirm this). The output is computed as

$$\hat{y}_i=f_o(\mathbf{z}^{T_i}_i;\theta_o)$$
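Under the concatenation reading of $[\cdot,\cdot]$, the core dynamics module can be sketched as below; this is a minimal version, and the hidden size and output head shape are not specified in the excerpt, so they are assumptions:

```python
import torch
import torch.nn as nn

class CoreDynamicsModule(nn.Module):
    """GRU over the imputed series, with the conditioning vector v_{S_i}
    concatenated to the input at every time step."""

    def __init__(self, d: int, d_s: int, hidden_size: int, out_dim: int):
        super().__init__()
        self.gru = nn.GRU(input_size=d + d_s, hidden_size=hidden_size,
                          batch_first=True)
        self.f_o = nn.Linear(hidden_size, out_dim)  # output head f_o

    def forward(self, x_tilde: torch.Tensor, v_s: torch.Tensor) -> torch.Tensor:
        # x_tilde: (B, T_i, d) imputed series; v_s: (B, d_s) conditioning vector
        cond = v_s.unsqueeze(1).expand(-1, x_tilde.size(1), -1)
        z, _ = self.gru(torch.cat([x_tilde, cond], dim=-1))
        return self.f_o(z[:, -1])                   # y_hat from final state z^{T_i}
```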

Summary

The paper uses a GNN to handle variable-dimensional multivariate time series: the GNN produces a fixed-length conditioning vector that encodes which sensors are available, and a GRU-based RNN then processes the imputed MTS together with this conditioning vector to produce the final prediction. The core idea is this application of the GNN.
