Paper Notes: Identifying Influential Nodes in Complex Networks Based on Network Representation Learning (NRL)


 The more communities a node belongs to, the more communities it can influence. A node's network constraint coefficient can be viewed as a proxy for its spreading speed: the smaller the constraint coefficient, the faster the node spreads information.

1. Background

1.1 Network Representation Learning (NRL) Models

Network representation learning aims to learn a distributed vector representation for every vertex in a network, and it is increasingly regarded as an important tool for network analysis. NRL tasks can be roughly grouped into four categories: (a) node classification, (b) link prediction, (c) clustering, and (d) visualization.
Researchers proposed BIGCLAM, an NRL model that also performs overlapping community detection. The model assumes that the overlapping regions of communities tend to be more densely connected than the non-overlapping parts.
[Figure: BIGCLAM's bipartite community-affiliation model; (a) node-community affiliations, (b) affiliation weights]
In the figure above, the circles on top represent communities, the squares at the bottom represent the nodes of the graph, and the edges denote node-community affiliations. The higher a node's weight toward a community, the more likely the node is to connect to members of that community. Each community $c$ creates an edge between nodes $u$ and $v$ with probability $1-\exp(-F_{uc} F_{vc})$, where $F_{uc}$ is the non-negative affiliation weight of node $u$ toward community $c$. Moreover, the model assumes that each community creates edges independently. For example, in panel (a) nodes $u$ and $v$ both belong to communities A and B, and in panel (b) $F_{uA}$ and $F_{uB}$ denote the weights with which node $u$ belongs to communities A and B, respectively. Within community A, an edge between $u$ and $v$ exists with probability $1-\exp(-F_{uA} F_{vA})$; within community B, with probability $1-\exp(-F_{uB} F_{vB})$. Hence the overall probability of an edge between $u$ and $v$ is $1-\exp\left(-\sum_{c \in \{A,B\}} F_{uc} F_{vc}\right)$.
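To make the "independent communities" assumption concrete, here is a minimal numeric sketch (the affiliation weights are made up for illustration): the combined edge probability from the dot product equals one minus the product of the per-community "no edge" probabilities.

```python
import numpy as np

# F[u, c] is the non-negative affiliation weight of node u toward community c.
# Two nodes, two communities (A = column 0, B = column 1); values are illustrative.
F = np.array([
    [1.2, 0.4],   # node u
    [0.8, 1.5],   # node v
])

def edge_prob(F, u, v):
    """P(edge between u and v) = 1 - exp(-F_u . F_v): each community c
    independently creates the edge with prob. 1 - exp(-F[u, c] * F[v, c])."""
    return 1.0 - np.exp(-F[u] @ F[v])

p_A = 1.0 - np.exp(-F[0, 0] * F[1, 0])   # edge created by community A alone
p_B = 1.0 - np.exp(-F[0, 1] * F[1, 1])   # edge created by community B alone
p_uv = edge_prob(F, 0, 1)
# Independence across communities: P(edge) = 1 - P(no edge in A) * P(no edge in B)
assert np.isclose(1.0 - (1.0 - p_A) * (1.0 - p_B), p_uv)
```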
Given $G=(V,E)$ and a non-negative matrix $F \in \mathbb{R}^{N \times K}$, where $N$ is the number of nodes and $K$ is the number of communities, BIGCLAM generates the graph $G$ by creating an edge $(u,v)$ between each node pair $u, v \in V$ with probability $p(u,v)$:
$$p(u,v) = 1 - \exp\left(-F_u F_v^{\top}\right)$$
where $F_u$ is the affiliation-weight vector of node $u$, each entry of which is the node's weight toward the corresponding community. The model seeks the most likely affiliation-factor matrix $\hat{F}$ for the network $G$ by maximum likelihood:
$$\hat{F} = \underset{F \geq 0}{\arg\max}\; P(G \mid F)$$
where
$$P(G \mid F) = \prod_{(u,v)\in E} p(u,v) \prod_{(u,v)\notin E} \left(1 - p(u,v)\right)$$
Taking the logarithm of the likelihood gives the log-likelihood:
$$l(F) = \sum_{(u,v)\in E} \log\left(1 - \exp\left(-F_u F_v^{\top}\right)\right) - \sum_{(u,v)\notin E} F_u F_v^{\top}$$
The log-likelihood $l(F)$ is the model's optimization objective. After obtaining $\hat{F}$, the model decides each node's community memberships: a threshold is set, and node $u$ is deemed to belong to community $c$ whenever $F_{uc}$ exceeds that threshold. Building on BIGCLAM, we assume that nodes in community overlaps act as "bridges" between communities: since such nodes belong to multiple communities, information passing through them can easily spread to other communities.
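For intuition, here is a dense $O(N^2)$ sketch of the objective and the thresholding step (names and the `delta` value are illustrative; real implementations such as SNAP's bigclam optimize $l(F)$ with block-coordinate gradient updates rather than evaluating it densely):

```python
import numpy as np

def log_likelihood(F, edges, n):
    """l(F): sum over edges of log(1 - exp(-F_u . F_v))
    minus sum over non-edges of F_u . F_v (dense sketch)."""
    S = F @ F.T                          # S[u, v] = F_u . F_v
    adj = np.zeros((n, n), dtype=bool)
    for u, v in edges:
        adj[u, v] = adj[v, u] = True
    iu = np.triu_indices(n, k=1)         # each unordered pair once
    on, s = adj[iu], S[iu]
    eps = 1e-12                          # numerical floor inside the log
    return np.log(1.0 - np.exp(-s[on]) + eps).sum() - s[~on].sum()

def memberships(F, delta=0.1):
    """Assign node u to community c whenever F[u, c] > delta
    (delta is an illustrative threshold, not the paper's value)."""
    return [set(np.flatnonzero(row > delta)) for row in F]
```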

1.2 Network Constraint Coefficient

"Structural hole": an individual who bridges otherwise unconnected parts of the network and thus has access to complete, non-redundant information sources.
Panel (a) of the figure below shows the structural holes around node E: its position makes it a bridge, or "broker", between three different nodes.

[Figure: an example of structural holes spanned by node E]
The network constraint coefficient is derived from the notion of structural holes:
$$C_i = \sum_{j \in \Gamma(i)} \left( \rho_{ij} + \sum_{q \in \Gamma(i),\, q \neq j} \rho_{iq}\, \rho_{qj} \right)^2$$
where $\Gamma(i)$ is the set of neighbors of node $i$, and $\rho_{ij} = \frac{1}{N(i)}$ is the proportion of node $i$'s effort invested in its relation with node $j$, with $N(i)$ the degree of node $i$. It follows that a node with a small constraint coefficient has a large degree and sparsely connected neighborhoods, so it has more opportunities to spread information to a large portion of the network: the smaller a node's constraint coefficient, the faster it spreads information.
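As a concrete reading of this formula, here is a short sketch with uniform effort $\rho_{ij} = 1/N(i)$ (the helper names are mine; networkx also ships `nx.constraint`, a weighted variant of Burt's measure):

```python
import networkx as nx

def constraint(G, i):
    """Burt's network constraint C_i with uniform effort
    rho_ij = 1 / degree(i), matching the formula above."""
    nbrs = set(G[i])
    rho = lambda a, b: (1.0 / G.degree(a)) if b in G[a] else 0.0
    c = 0.0
    for j in nbrs:
        direct = rho(i, j)                                 # rho_ij
        indirect = sum(rho(i, q) * rho(q, j)               # rho_iq * rho_qj
                       for q in nbrs if q not in (i, j))
        c += (direct + indirect) ** 2
    return c

G = nx.karate_club_graph()
# Nodes with the smallest constraint are the best spreaders by this criterion.
print(sorted(G.nodes, key=lambda i: constraint(G, i))[:5])
```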

2. Proposed Method

We evaluate a node's influence by considering both its spreading capacity and its spreading speed, denoted OC:
$$OC(k) = \frac{Nb(k)}{C_k \cdot maxOC}$$
where $C_k$ is the network constraint coefficient of node $k$, $maxOC$ is a normalization factor, and $Nb(k)$ is the number of communities node $k$ belongs to. To identify the influential nodes in a network, we therefore need the total number of communities each node belongs to as well as its constraint coefficient. First, we use the BIGCLAM model to detect overlapping communities and count the communities each node belongs to; second, we compute each node's network constraint coefficient.
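Putting the pieces together, here is a hedged end-to-end sketch. The exact form $OC(k) = Nb(k) / (C_k \cdot maxOC)$ is my reading of the definitions above, not a verified transcription from the paper, and the code reuses the `memberships(F, delta)` and `constraint(G, i)` helpers from the earlier sketches:

```python
def oc_scores(G, F, delta=0.1):
    """OC(k) = Nb(k) / (C_k * maxOC). Assumes nodes are labeled 0..N-1
    (matching the rows of F) and the graph has no isolated nodes."""
    comms = memberships(F, delta)        # node u -> set of community ids
    # max(.., 1) guards against nodes below the threshold in every community.
    raw = {u: max(len(comms[u]), 1) / constraint(G, u) for u in G.nodes}
    max_oc = max(raw.values())           # normalization factor maxOC
    return {u: score / max_oc for u, score in raw.items()}

# Usage (after fitting BIGCLAM, e.g., with SNAP's implementation, to get F):
# scores = oc_scores(G, F)
# influential = sorted(scores, key=scores.get, reverse=True)[:10]
```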

