t-SNE Visualization of Selected Parameters in a Network

       Note: this code mainly shows how to visualize an intermediate feature of a network, or a parameter computed from it. If you only want to visualize a single simple parameter of the network, you can get the matrix directly with model.weight and feed it into the dimensionality-reduction part of the sectioned code below. See also: the TSNE parameter documentation.
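For that simple case, a minimal sketch might look like the following (the nn.Linear layer and its sizes are made up purely for illustration; substitute your own module and attribute):

import torch.nn as nn
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

toy = nn.Linear(in_features=64, out_features=32)    # hypothetical layer standing in for your model
weights = toy.weight.detach().numpy()               # shape (32, 64): 32 row vectors of 64 dimensions
embedded = TSNE(n_components=2, perplexity=5).fit_transform(weights)  # perplexity must be < number of rows
plt.scatter(embedded[:, 0], embedded[:, 1])
plt.show()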

Complete code

import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt




class Network(nn.Module):  # extend nn.Module class of nn
    def __init__(self):
        super().__init__()  # super class constructor
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=(5, 5))
        self.batchN1 = nn.BatchNorm2d(num_features=6)
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=(5, 5))
        self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)
        self.batchN2 = nn.BatchNorm1d(num_features=120)
        self.fc2 = nn.Linear(in_features=120, out_features=60)
        self.out = nn.Linear(in_features=60, out_features=10)

    def forward(self, t):  # implements the forward method (flow of tensors)

        # hidden conv layer
        t = self.conv1(t)
        t = F.max_pool2d(input=t, kernel_size=2, stride=2)
        t = F.relu(t)
        t = self.batchN1(t)

        # hidden conv layer
        t = self.conv2(t)
        t = F.max_pool2d(input=t, kernel_size=2, stride=2)
        t = F.relu(t)

        # flatten
        t = t.reshape(-1, 12 * 4 * 4)
        t = self.fc1(t)
        t = F.relu(t)
        t = self.batchN2(t)
        t = self.fc2(t)
        t = F.relu(t)

        # output
        t = self.out(t)

        return t
cnn_model = Network() # init model


pretrained_dict = cnn_model.state_dict()




class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=(5, 5))

    def forward(self, x):
        return x


model = Identity()
model_dict = model.state_dict()
# model.load_state_dict(pretrained_dict) # RuntimeError: Error(s) in loading state_dict for Identity:	Unexpected key(s) in state_dict: "batchN1.weight", "batchN1.bias", "batchN1.running_mean",




# 1. filter out unnecessary keys
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
# 2. overwrite entries in the existing state dict
model_dict.update(pretrained_dict)
# 3. load the new state dict
model.load_state_dict(model_dict)

vector = model.conv1.weight.detach().numpy()[0, 0, :, :]  # the first 5x5 kernel of conv1

digits_final = TSNE(perplexity=3).fit_transform(vector)  # perplexity must be smaller than the number of rows (here 5)
plt.scatter(digits_final[:, 0], digits_final[:, 1])
plt.show()
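As an aside, if you only need the overlapping weights and are happy to ignore the leftover keys, load_state_dict(strict=False) reaches the same place with less code; a sketch of that alternative:

# strict=False ignores checkpoint keys that the target model does not define,
# so the filter/update steps above become unnecessary.
model2 = Identity()
result = model2.load_state_dict(cnn_model.state_dict(), strict=False)
print(result.unexpected_keys)  # the batchN*/conv2/fc*/out keys that Identity lacks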

Sectioned code (by function)

Model handling part

import torch
import torch.nn as nn
import torch.nn.functional as F


class Network(nn.Module):  # extend nn.Module class of nn
    def __init__(self):
        super().__init__()  # super class constructor
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=(5, 5))
        self.batchN1 = nn.BatchNorm2d(num_features=6)
        self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=(5, 5))
        self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)
        self.batchN2 = nn.BatchNorm1d(num_features=120)
        self.fc2 = nn.Linear(in_features=120, out_features=60)
        self.out = nn.Linear(in_features=60, out_features=10)

    def forward(self, t):  # implements the forward method (flow of tensors)

        # hidden conv layer
        t = self.conv1(t)
        t = F.max_pool2d(input=t, kernel_size=2, stride=2)
        t = F.relu(t)
        t = self.batchN1(t)

        # hidden conv layer
        t = self.conv2(t)
        t = F.max_pool2d(input=t, kernel_size=2, stride=2)
        t = F.relu(t)

        # flatten
        t = t.reshape(-1, 12 * 4 * 4)
        t = self.fc1(t)
        t = F.relu(t)
        t = self.batchN2(t)
        t = self.fc2(t)
        t = F.relu(t)

        # output
        t = self.out(t)

        return t
cnn_model = Network() # init model


pretrained_dict = cnn_model.state_dict()




class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=(5, 5))

    def forward(self, x):
        return x


model = Identity()
model_dict = model.state_dict()
# model.load_state_dict(pretrained_dict) # RuntimeError: Error(s) in loading state_dict for Identity:	Unexpected key(s) in state_dict: "batchN1.weight", "batchN1.bias", "batchN1.running_mean",




# 1. filter out unnecessary keys
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
# 2. overwrite entries in the existing state dict
model_dict.update(pretrained_dict)
# 3. load the new state dict
model.load_state_dict(model_dict)
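The Identity trick above extracts a layer's weights; if what you are after is an intermediate feature map (the other case mentioned in the note at the top), a forward hook on the original network is a common way to capture it. A minimal sketch, where the 28x28 input batch is just a stand-in for real data:

# Grab the output of conv1 during a forward pass, then flatten it per sample so it
# can be fed into the dimensionality-reduction part below.
features = {}

def save_conv1_output(module, inputs, output):
    features['conv1'] = output.detach()

hook = cnn_model.conv1.register_forward_hook(save_conv1_output)
dummy_batch = torch.randn(64, 1, 28, 28)   # hypothetical batch of 28x28 grayscale images
cnn_model.eval()
with torch.no_grad():
    cnn_model(dummy_batch)
hook.remove()

conv1_features = features['conv1'].reshape(64, -1).numpy()   # shape (64, 6*24*24)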

Dimensionality reduction and visualization part



import numpy as np


import sklearn #Import scikitlearn for machine learning functionalities
from sklearn.manifold import TSNE
from sklearn.datasets import load_digits # For the UCI ML handwritten digits dataset

import matplotlib # Matplotlib is a plotting library for Python, built as a numerical-mathematics extension of NumPy
import matplotlib.pyplot as plt
import matplotlib.patheffects as pe


import seaborn as sb


digits = load_digits()
print(digits.data.shape) # There are 10 classes (0 to 9) with almost 180 images in each class
                         # The images are 8x8 and hence 64 pixels (dimensions)
plt.gray()
#Displaying what the standard images look like
for i in range(0,10):
    plt.matshow(digits.images[i])
    plt.show()


X = np.vstack([digits.data[digits.target==i] for i in range(10)]) # Place the arrays of data of each digit on top of each other and store in X
# X = np.random.random([1797, 64])

#Implementing the TSNE Function - ah Scikit learn makes it so easy!
digits_final = TSNE(perplexity=30).fit_transform(X)  # reduce the 64-dimensional data to 2 dimensions
#Play around with varying the parameters like perplexity, random_state to get different plots


# With the above line, our job is done. But why did we even reduce the dimensions in the first place?
# To visualise it on a graph.

# So, here is a utility function that helps to do a scatter plot of the transformed data

def plot(x, colors):
    palette = np.array(sb.color_palette("hls", 10))  # Choosing color palette

    # Create a scatter plot.
    f = plt.figure(figsize=(8, 8))
    ax = plt.subplot(aspect='equal')
    sc = ax.scatter(x[:, 0], x[:, 1], lw=0, s=40, c=palette[colors.astype(int)])  # np.int was removed in NumPy 1.24; use plain int

    
    # Add the class label (0-9) next to each cluster (commented out by default)
    txts = []
    # for i in range(10):
    #     # Position of each label.
    #     xtext, ytext = np.median(x[colors == i, :], axis=0)  # median position of the points of class i
    #     txt = ax.text(xtext, ytext, str(i), fontsize=24)
    #     txt.set_path_effects([pe.Stroke(linewidth=5, foreground="w"), pe.Normal()])  # white outline keeps the label readable
    #     txts.append(txt)
    return f, ax, txts




Y = np.hstack([digits.target[digits.target==i] for i in range(10)]) # Place the arrays of target digits side by side continuously, in the same order as X, and store in Y
plot(digits_final,Y)
plt.show()
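Picking up the comment above about varying the parameters: a small sketch that runs the same data through a few perplexity values (chosen arbitrarily) and pins random_state so the layouts are reproducible:

fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for ax, perp in zip(axes, [5, 30, 50]):
    emb = TSNE(n_components=2, perplexity=perp, random_state=0).fit_transform(X)
    ax.scatter(emb[:, 0], emb[:, 1], s=10, c=Y, cmap='tab10')
    ax.set_title(f'perplexity = {perp}')
plt.show()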

Mathematical explanation of t-SNE

(Figures: the mathematical formulation of t-SNE.)
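In brief, and as a standard summary of what the figures illustrate (following van der Maaten and Hinton, 2008): t-SNE turns pairwise distances in the original space into probabilities, models pairwise similarities in the low-dimensional map with a Student-t kernel, and moves the map points to minimize the KL divergence between the two distributions:

p_{j|i} = \frac{\exp\left(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2\right)}{\sum_{k \neq i} \exp\left(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2\right)},
\qquad p_{ij} = \frac{p_{j|i} + p_{i|j}}{2n}

q_{ij} = \frac{\left(1 + \lVert y_i - y_j \rVert^2\right)^{-1}}{\sum_{k \neq l} \left(1 + \lVert y_k - y_l \rVert^2\right)^{-1}},
\qquad C = \mathrm{KL}(P \parallel Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}

The perplexity argument passed to TSNE above fixes the effective number of neighbors used when searching for each sigma_i, which is why it has to stay below the number of samples.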

References and more

Official PyTorch tutorial on saving and loading models (including exporting/loading models in TorchScript format)

Modifying a model's state dict: https://discuss.pytorch.org/t/how-to-load-part-of-pre-trained-model/1113/2

scikit-learn.org

https://github.com/shivanichander/tSNE/blob/master/Code/tSNE%20Code.ipynb

t-SNE: one of the best dimensionality-reduction methods - Zhihu (zhihu.com)

https://discuss.pytorch.org/t/changing-state-dict-value-is-not-changing-model/88695/2
model = nn.Linear(1, 1)
print(model.weight)  # inspect a parameter directly
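Regarding that thread (changing a state-dict value does not change the model): two ways that do take effect are an in-place copy under torch.no_grad(), or loading the edited dict back with load_state_dict; a minimal sketch continuing the snippet above:

with torch.no_grad():
    model.weight.copy_(torch.tensor([[0.5]]))   # in-place update of the parameter itself

sd = model.state_dict()
sd['weight'] = torch.tensor([[0.25]])           # reassigning a dict entry alone does not touch the model...
model.load_state_dict(sd)                       # ...until it is loaded back
print(model.weight)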

More methods

# Isomap: https://scikit-learn.org.cn/view/452.html
from sklearn.manifold import Isomap
digits_final = Isomap(n_components=2).fit_transform(X)  # X is the data matrix to embed, as in the t-SNE example above

UMAP and t-SNE: an overview (continuously updated)
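For completeness, a UMAP sketch along the same lines; this assumes the third-party umap-learn package is installed (it is not part of scikit-learn):

# pip install umap-learn
import umap

digits_final = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.1).fit_transform(X)
plt.scatter(digits_final[:, 0], digits_final[:, 1], s=10, c=Y, cmap='tab10')
plt.show()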
