Graph Neural Networks: Summary + Competition

Course Summary

I am a student working on computer vision. During an internship I worked on a 3D vision project that involved 3D object detection and segmentation, which is why I signed up for Baidu's graph neural network course. Graph neural networks work quite well for segmenting 3D point clouds in computer vision. Below is a summary of what the course covered.

[Image: course content summary]

Notes on the Competition

In the competition I reproduced several graph neural network architectures: GCN, GAT, and ResGAT. While reading the source code, ResGAT left a strong impression on me: it is a GAT model that borrows the ResNet idea from the CV field, i.e., it adds a short-cut (skip) connection structure. The full code is as follows:
import pgl
import pgl.layers.conv as conv  # PGL's built-in graph conv layers (gat, gcn, ...)
import paddle.fluid.layers as L


class ResGAT(object):
    """Implementation of ResGAT (GAT with short-cut connections)."""

    def __init__(self, config, num_class):
        self.num_class = num_class
        self.num_layers = config.get("num_layers", 1)
        self.num_heads = config.get("num_heads", 8)
        self.hidden_size = config.get("hidden_size", 8)
        self.feat_dropout = config.get("feat_drop", 0.6)
        self.attn_dropout = config.get("attn_drop", 0.6)
        self.edge_dropout = config.get("edge_dropout", 0.0)

    def forward(self, graph_wrapper, feature, phase):
        # feature [num_nodes, 100]
        if phase == "train":
            edge_dropout = self.edge_dropout
        else:
            # disable edge dropout at inference time
            edge_dropout = 0
        # project input features to width hidden_size * num_heads
        feature = L.fc(feature, size=self.hidden_size * self.num_heads, name="init_feature")
        for i in range(self.num_layers):
            # randomly drop edges as regularization
            ngw = pgl.sample.edge_drop(graph_wrapper, edge_dropout)

            res_feature = feature
            # res_feature [num_nodes, hidden_size * n_heads]
            feature = conv.gat(ngw,
                               feature,
                               self.hidden_size,
                               activation=None,
                               name="gat_layer_%s" % i,
                               num_heads=self.num_heads,
                               feat_drop=self.feat_dropout,
                               attn_drop=self.attn_dropout)
            # feature [num_nodes, num_heads * hidden_size]
            # short-cut: both operands have shape [num_nodes, num_heads * hidden_size]
            feature = res_feature + feature
            feature = L.relu(feature)
            feature = L.layer_norm(feature, name="ln_%s" % i)

        # final single-head GAT layer produces per-node class logits
        ngw = pgl.sample.edge_drop(graph_wrapper, edge_dropout)
        feature = conv.gat(ngw,
                           feature,
                           self.num_class,
                           num_heads=1,
                           activation=None,
                           feat_drop=self.feat_dropout,
                           attn_drop=self.attn_dropout,
                           name="output")
        return feature
As you can see in the code above, the line feature = res_feature + feature implements the short-cut structure; the skip connection gives gradients a direct path through the stacked GAT layers, which makes deeper models easier to train:

feature = res_feature + feature
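
For completeness, here is a minimal sketch of how this class might be wired up. The config values are hypothetical, and graph_wrapper and feature are assumed to be a PGL graph wrapper and a node-feature tensor built elsewhere in the training script:

# Minimal usage sketch. All config values here are hypothetical;
# graph_wrapper and feature are assumed to be built elsewhere.
config = {
    "num_layers": 2,
    "num_heads": 8,
    "hidden_size": 8,
    "feat_drop": 0.6,
    "attn_drop": 0.6,
    "edge_dropout": 0.1,
}
model = ResGAT(config, num_class=35)
logits = model.forward(graph_wrapper, feature, phase="train")  # [num_nodes, num_class]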
Inspired by this, I widened the feature extraction path: instead of the original single-scale, single-branch features, I fused in multi-scale features, borrowing the network-widening idea of Inception-Net from CV. The code is below; concat_feature denotes the fusion of the three parallel branches:
        for i in range(self.num_layers):
            ngw = pgl.sample.edge_drop(graph_wrapper, edge_dropout)

            # project the residual branch to match the GAT output width
            res_feature = feature
            res_feature = L.fc(res_feature, size=self.hidden_size * self.num_heads, name="third_feature")

            # res_feature [num_nodes, hidden_size * num_heads]
            # three parallel GAT branches; each needs its own name,
            # otherwise fluid would share parameters across the branches
            feature1 = conv.gat(ngw,
                                feature,
                                self.hidden_size,
                                activation=None,
                                name="gat_layer_%s_b1" % i,
                                num_heads=self.num_heads,
                                feat_drop=self.feat_dropout,
                                attn_drop=self.attn_dropout)
            feature2 = conv.gat(ngw,
                                feature,
                                self.hidden_size,
                                activation=None,
                                name="gat_layer_%s_b2" % i,
                                num_heads=self.num_heads,
                                feat_drop=self.feat_dropout,
                                attn_drop=self.attn_dropout)
            feature3 = conv.gat(ngw,
                                feature,
                                self.hidden_size,
                                activation=None,
                                name="gat_layer_%s_b3" % i,
                                num_heads=self.num_heads,
                                feat_drop=self.feat_dropout,
                                attn_drop=self.attn_dropout)

            # despite the name, the fusion here is an element-wise sum,
            # which keeps the feature width unchanged
            concat_feature = feature1 + feature2 + feature3
            # residual sum plus a scaled multiplicative interaction term
            feature = (res_feature + concat_feature) + 0.8 * res_feature * concat_feature
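
As a quick sanity check of the fusion rule above, here is a tiny numpy sketch; the tensor shapes are toy values I chose, and 0.8 matches the interaction weight in the code:

import numpy as np

# Toy tensors standing in for res_feature and concat_feature
num_nodes, width = 4, 64  # hypothetical sizes
res = np.random.rand(num_nodes, width).astype("float32")
branches = np.random.rand(num_nodes, width).astype("float32")

# residual sum plus a scaled element-wise interaction term
fused = (res + branches) + 0.8 * res * branches
assert fused.shape == (num_nodes, width)  # element-wise ops preserve shape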
Time is limited, so I'll stop here for now; I will keep updating this post later.