TensorFlow: Predicting Breast Cancer

Based on feature data extracted from images that reflect the characteristics of cell nuclei, we use a neural network built with TensorFlow to predict whether a cancer is benign or malignant. The data are digitized images of fine needle aspirates (FNA) of breast masses, and the features describe the cell nuclei present in each image. The steps are as follows:

  • Data description
  • Data preprocessing
  • Data exploration
  • Building the neural network
  • Training the neural network
  • Evaluating the model

I. Data Description

The data can be obtained here: link

For each diagnostic image, the mean, standard error, and "worst" value (the mean of the three largest values) of each cell-nucleus feature were computed, producing 30 features. All feature values are recorded with four significant digits. There are no missing attribute values, and the class distribution is 357 benign and 212 malignant.

Attribute information:

  • ID number
  • Diagnosis (M = malignant, B = benign)

Ten real-valued features are computed for each cell nucleus (see the sketch after this list):

  • radius
  • texture
  • perimeter
  • area
  • smoothness
  • compactness
  • concavity
  • concave points
  • symmetry
  • fractal dimension
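To make the column layout concrete, here is a minimal sketch (not part of the original workflow) that composes the 30 feature column names from the 10 base features and the three statistics; the names it produces match the CSV header shown below:

# Compose the 30 feature column names: all means, then all standard errors, then all "worst" values
base_features = ["radius", "texture", "perimeter", "area", "smoothness",
                 "compactness", "concavity", "concave points", "symmetry",
                 "fractal_dimension"]
suffixes = ["mean", "se", "worst"]
feature_columns = ["{}_{}".format(f, s) for s in suffixes for f in base_features]
print(len(feature_columns))  # 30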

View the first two rows of the file:

!head -2 data.csv
"id","diagnosis","radius_mean","texture_mean","perimeter_mean","area_mean","smoothness_mean","compactness_mean","concavity_mean","concave points_mean","symmetry_mean","fractal_dimension_mean","radius_se","texture_se","perimeter_se","area_se","smoothness_se","compactness_se","concavity_se","concave points_se","symmetry_se","fractal_dimension_se","radius_worst","texture_worst","perimeter_worst","area_worst","smoothness_worst","compactness_worst","concavity_worst","concave points_worst","symmetry_worst","fractal_dimension_worst",
842302,M,17.99,10.38,122.8,1001,0.1184,0.2776,0.3001,0.1471,0.2419,0.07871,1.095,0.9053,8.589,153.4,0.006399,0.04904,0.05373,0.01587,0.03003,0.006193,25.38,17.33,184.6,2019,0.1622,0.6656,0.7119,0.2654,0.4601,0.1189

II. Data Preprocessing

1. Import the required packages

import tensorflow as tf
import pandas as pd
from sklearn.utils import shuffle
import matplotlib.gridspec as gridspec
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.font_manager as fm
myfont=fm.FontProperties(fname="E:/Anaconda/envs/mytensorflow/Lib/site-packages/matplotlib/mpl-data/fonts/ttf/Simhei.ttf")

2. Define the data file path:

train_filename="./data/BreastCancer/data.csv"

3. Define the column names

idKey = "id"
diagnosisKey = "diagnosis"
radiusMeanKey = "radius_mean"
textureMeanKey = "texture_mean"
perimeterMeanKey = "perimeter_mean"
areaMeanKey = "area_mean"
smoothnessMeanKey = "smoothness_mean"
compactnessMeanKey = "compactness_mean"
concavityMeanKey = "concavity_mean"
concavePointsMeanKey = "concave points_mean"
symmetryMeanKey = "symmetry_mean"
fractalDimensionMeanKey = "fractal_dimension_mean"
radiusSeKey = "radius_se"
textureSeKey = "texture_se"
perimeterSeKey = "perimeter_se"
areaSeKey = "area_se"
smoothnessSeKey = "smoothness_se"
compactnessSeKey = "compactness_se"
concavitySeKey = "concavity_se"
concavePointsSeKey = "concave points_se"
symmetrySeKey = "symmetry_se"
fractalDimensionSeKey = "fractal_dimension_se"
radiusWorstKey = "radius_worst"
textureWorstKey = "texture_worst"
perimeterWorstKey = "perimeter_worst"
areaWorstKey = "area_worst"
smoothnessWorstKey = "smoothness_worst"
compactnessWorstKey = "compactness_worst"
concavityWorstKey = "concavity_worst"
concavePointsWorstKey = "concave points_worst"
symmetryWorstKey = "symmetry_worst"
fractalDimensionWorstKey = "fractal_dimension_worst"

4. Select the training-set columns

train_columns = [idKey, 
                 diagnosisKey, 
                 radiusMeanKey, 
                 textureMeanKey, 
                 perimeterMeanKey, 
                 areaMeanKey, 
                 smoothnessMeanKey, 
                 compactnessMeanKey, 
                 concavityMeanKey, 
                 concavePointsMeanKey, 
                 symmetryMeanKey, 
                 fractalDimensionMeanKey, 
                 radiusSeKey, 
                 textureSeKey, 
                 perimeterSeKey, 
                 areaSeKey, 
                 smoothnessSeKey, 
                 compactnessSeKey, 
                 concavitySeKey, 
                 concavePointsSeKey, 
                 symmetrySeKey, 
                 fractalDimensionSeKey, 
                 radiusWorstKey, 
                 textureWorstKey, 
                 perimeterWorstKey, 
                 areaWorstKey, 
                 smoothnessWorstKey, 
                 compactnessWorstKey, 
                 concavityWorstKey, 
                 concavePointsWorstKey, 
                 symmetryWorstKey, 
                 fractalDimensionWorstKey]

5. Define the data-loading function

The file is comma-separated; the first row is a header and is skipped. (The header line ends with a trailing comma, so passing the column names explicitly avoids a spurious unnamed column.)

def get_train_data():
    df=pd.read_csv(train_filename,names=train_columns,delimiter=",",skiprows=1)
    return df

train_data=get_train_data()
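A quick sanity check on the load; the expected shape follows from the data description (569 samples; id, diagnosis, and 30 features):

# Expect (569, 32): 569 samples, id + diagnosis + 30 feature columns
print(train_data.shape)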

III. Exploring the Data

1. View the first five rows:

train_data.head()
         id diagnosis  radius_mean  texture_mean  perimeter_mean  area_mean  ...  fractal_dimension_worst
0    842302         M        17.99         10.38          122.80     1001.0  ...                  0.11890
1    842517         M        20.57         17.77          132.90     1326.0  ...                  0.08902
2  84300903         M        19.69         21.25          130.00     1203.0  ...                  0.08758
3  84348301         M        11.42         20.38           77.58      386.1  ...                  0.17300
4  84358402         M        20.29         14.34          135.10     1297.0  ...                  0.07678

[5 rows x 32 columns; middle columns omitted for width]

2. View the summary statistics

# Statistical summary of the data
train_data.describe()
                 id  radius_mean  texture_mean  ...  fractal_dimension_worst
count  5.690000e+02   569.000000    569.000000  ...               569.000000
mean   3.037183e+07    14.127292     19.289649  ...                 0.083946
std    1.250206e+08     3.524049      4.301036  ...                 0.018061
min    8.670000e+03     6.981000      9.710000  ...                 0.055040
25%    8.692180e+05    11.700000     16.170000  ...                 0.071460
50%    9.060240e+05    13.370000     18.840000  ...                 0.080040
75%    8.813129e+06    15.780000     21.800000  ...                 0.092080
max    9.113205e+08    28.110000     39.280000  ...                 0.207500

[8 rows x 31 columns; middle columns omitted for width]
# Check for missing values
train_data.isnull().sum()
id                         0
diagnosis                  0
radius_mean                0
texture_mean               0
perimeter_mean             0
area_mean                  0
smoothness_mean            0
compactness_mean           0
concavity_mean             0
concave points_mean        0
symmetry_mean              0
fractal_dimension_mean     0
radius_se                  0
texture_se                 0
perimeter_se               0
area_se                    0
smoothness_se              0
compactness_se             0
concavity_se               0
concave points_se          0
symmetry_se                0
fractal_dimension_se       0
radius_worst               0
texture_worst              0
perimeter_worst            0
area_worst                 0
smoothness_worst           0
compactness_worst          0
concavity_worst            0
concave points_worst       0
symmetry_worst             0
fractal_dimension_worst    0
dtype: int64
3. Compare statistics across the two diagnoses

# Statistics for malignant diagnoses
print("Malignant")
print(train_data.area_mean[train_data.diagnosis=="M"].describe())
Malignant
count     212.000000
mean      978.376415
std       367.937978
min       361.600000
25%       705.300000
50%       932.000000
75%      1203.750000
max      2501.000000
Name: area_mean, dtype: float64
# Statistics for benign diagnoses
print("Benign")
print(train_data.area_mean[train_data.diagnosis=="B"].describe())
Benign
count    357.000000
mean     462.790196
std      134.287118
min      143.500000
25%      378.200000
50%      458.400000
75%      551.100000
max      992.100000
Name: area_mean, dtype: float64
# Visualize these distributions
f,(ax1,ax2)=plt.subplots(2,1,sharex=True,figsize=(12,4))

bins=50

ax1.hist(train_data.area_mean[train_data.diagnosis=="M"],bins=bins)
ax1.set_title("Malignant",fontproperties=myfont)

ax2.hist(train_data.area_mean[train_data.diagnosis=="B"],bins=bins)
ax2.set_title("Benign",fontproperties=myfont)

plt.xlabel("Mean area",fontproperties=myfont)
plt.ylabel("Number of diagnoses",fontproperties=myfont)
plt.show()

(Figure: output_27_0.png — histograms of area_mean for malignant vs. benign diagnoses)

"area_mean"特征看起来差别比较大,这会增加其在两种类型诊断中的价值.此外恶性诊断更多是均匀分布的,而良性诊断具有正态分布.当其值超过750时,可以更容易做出恶性诊断

4. Examine the remaining features

r_data=train_data.drop([idKey,areaMeanKey,areaWorstKey,diagnosisKey],axis=1)
r_features=r_data.columns

# Visualize the distributions of the remaining features
plt.figure(figsize=(12,28*4))
gs=gridspec.GridSpec(28,1)

for i,cn in enumerate(r_data[r_features]):
    ax=plt.subplot(gs[i])
    sns.distplot(train_data[cn][train_data.diagnosis=="M"],bins=50)
    sns.distplot(train_data[cn][train_data.diagnosis=="B"],bins=50)
    ax.set_xlabel("")
    ax.set_title("Histogram of feature: "+str(cn),fontproperties=myfont)
    
plt.show()

(Figure: output_30_0.png — overlaid malignant/benign histograms for each remaining feature)

5. Transform some of the features

# Recode the diagnosis: 1 = malignant, 0 = benign
train_data.loc[train_data.diagnosis=="M","diagnosis"]=1
train_data.loc[train_data.diagnosis=="B","diagnosis"]=0

# Create a new feature flagging benign diagnoses
train_data.loc[train_data.diagnosis==0,"benign"]=1
train_data.loc[train_data.diagnosis==1,"benign"]=0

# Convert the column to int
train_data["benign"]=train_data.benign.astype(int)

# Rename the diagnosis column to malignant
train_data=train_data.rename(columns={"diagnosis":"malignant"})

# 212 malignant and 357 benign diagnoses: 37.26% of diagnoses are malignant
print(train_data.benign.value_counts())
print(train_data.malignant.value_counts())

# Show the first few rows
pd.set_option("display.max_columns",101)

train_data.head()
1    357
0    212
Name: benign, dtype: int64
0    357
1    212
Name: malignant, dtype: int64
         id  malignant  radius_mean  texture_mean  ...  fractal_dimension_worst  benign
0    842302          1        17.99         10.38  ...                  0.11890       0
1    842517          1        20.57         17.77  ...                  0.08902       0
2  84300903          1        19.69         21.25  ...                  0.08758       0
3  84348301          1        11.42         20.38  ...                  0.17300       0
4  84358402          1        20.29         14.34  ...                  0.07678       0

[5 rows x 33 columns; middle columns omitted for width]
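The manual steps above build a two-column one-hot label encoding by hand. For reference, a more compact equivalent (an assumption, not the author's code; it would have to run before the rename above, while the column is still called diagnosis) is pandas' built-in encoder:

# Hypothetical one-liner: one-hot encode the diagnosis before renaming
# labels = pd.get_dummies(train_data["diagnosis"])   # two columns, "B" and "M"
# labels.columns = ["benign", "malignant"]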

6. Preprocess and split the data

# Create separate DataFrames for malignant and benign cases
Malignant=train_data[train_data.malignant==1]
Benign=train_data[train_data.benign==1]

# Set train_x to 80% of the malignant diagnoses
train_x=Malignant.sample(frac=0.8)
count_Malignants=len(train_x)

# Add 80% of the benign diagnoses to train_x
train_x=pd.concat([train_x,Benign.sample(frac=0.8)],axis=0)

# Let test_x contain the rows that are not in train_x
test_x=train_data.loc[~train_data.index.isin(train_x.index)]

# Shuffle the data
train_x=shuffle(train_x)
test_x=shuffle(test_x)

# Build the label sets train_y and test_y
train_y=train_x.malignant
train_y=pd.concat([train_y,train_x.benign],axis=1)

test_y=test_x.malignant
test_y=pd.concat([test_y,test_x.benign],axis=1)

# Drop the labels from train_x and test_x
train_x=train_x.drop(["malignant","benign"],axis=1)
test_x=test_x.drop(["malignant","benign"],axis=1)

# Check the sizes of the training and test sets
print(len(train_x))
print(len(train_y))
print(len(test_x))
print(len(test_y))
456
456
113
113
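Because the malignant and benign groups are sampled separately, this is an approximately stratified 80/20 split: 456 = 170 + 286, i.e. 80% of the 212 malignant and 80% of the 357 benign cases, so both sets keep the original class proportions. A hedged one-step alternative with scikit-learn (not the author's code; shown as comments since running it here would overwrite the variables above):

# Stratified 80/20 split with scikit-learn, for comparison:
# from sklearn.model_selection import train_test_split
# train_df, test_df = train_test_split(train_data, test_size=0.2,
#                                      stratify=train_data.malignant, shuffle=True)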
# Get the names of all features in the training set
features=train_x.columns.values

# Standardize each feature (zero mean, unit variance)
for feature in features:
    mean,std=train_data[feature].mean(),train_data[feature].std()
    train_x.loc[:,feature]=(train_x[feature]-mean)/std
    test_x.loc[:,feature]=(test_x[feature]-mean)/std
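One caveat: the loop above standardizes with statistics computed on the full dataset (train_data), so a little information about the test rows leaks into training. A leakage-free variant (an assumption, not the author's code; it would replace the loop above) computes the statistics on the training split only:

# Hypothetical leakage-free variant: statistics from train_x only
# for feature in features:
#     mean, std = train_x[feature].mean(), train_x[feature].std()
#     train_x.loc[:, feature] = (train_x[feature] - mean) / std
#     test_x.loc[:, feature] = (test_x[feature] - mean) / std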

IV. Building the Neural Network

Build a network with one input layer, three hidden layers, and a softmax output layer; the weights are initialized from a truncated normal distribution.

tf.truncated_normal is used as follows:

tf.truncated_normal(shape,mean=0.0,stddev=1.0,dtype=tf.float32,seed=None,name=None)

It returns random values drawn from a truncated normal distribution: the values follow a normal distribution with the given mean and standard deviation, and any sample that falls more than two standard deviations from the mean is discarded and redrawn.
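A minimal sketch to verify that behaviour (TF 1.x, matching the rest of the post):

# Draw 1000 samples and confirm they all lie within mean ± 2*stddev
w = tf.truncated_normal([1000], mean=0.0, stddev=0.15)
with tf.Session() as sess:
    samples = sess.run(w)
print(samples.min(), samples.max())  # both within (-0.3, 0.3)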

V. Training and Evaluating the Model

The number of training passes is set by training_epochs and the batch size by batch_size.

# Set the hyperparameters
learning_rate=0.005
training_dropout=0.9
display_step=1
training_epochs=5
batch_size=100
accuracy_history=[]
cost_history=[]
valid_accuracy_history=[]
valid_cost_history=[]

# Number of input nodes
input_nodes=train_x.shape[1]
# Number of label classes
num_labels=2
# Split the held-out data into validation and test sets
split=int(len(test_y)/2)

train_size=train_x.shape[0]
n_samples=train_y.shape[0]

# .values replaces the deprecated DataFrame.as_matrix()
input_x=train_x.values
input_y=train_y.values
input_x_valid=test_x.values[:split]
input_y_valid=test_y.values[:split]
input_x_test=test_x.values[split:]
input_y_test=test_y.values[split:]
# Heuristic for the number of nodes in each hidden layer
def calculate_hidden_nodes(nodes):
    return (((2*nodes)/3)+num_labels)

hidden_nodes1=round(calculate_hidden_nodes(input_nodes))
hidden_nodes2=round(calculate_hidden_nodes(hidden_nodes1))
hidden_nodes3=round(calculate_hidden_nodes(hidden_nodes2))
print(input_nodes,hidden_nodes1,hidden_nodes2,hidden_nodes3)
31 23 17 13
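The heuristic gives each layer two-thirds of the previous layer's width plus the number of output classes: round(2*31/3 + 2) = 23, round(2*23/3 + 2) = 17, and round(2*17/3 + 2) = 13, which matches the printed sizes.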
# Placeholder for the fraction of nodes kept during dropout
pkeep = tf.placeholder(tf.float32)

# Define the input layer
x = tf.placeholder(tf.float32, [None, input_nodes])

# Define the first hidden layer (layer1), initialized from a truncated normal distribution
W1 = tf.Variable(tf.truncated_normal([input_nodes, hidden_nodes1], stddev = 0.15))
b1 = tf.Variable(tf.zeros([hidden_nodes1]))
y1 = tf.nn.relu(tf.matmul(x, W1) + b1)

# Define the second hidden layer (layer2), initialized from a truncated normal distribution
W2 = tf.Variable(tf.truncated_normal([hidden_nodes1, hidden_nodes2], stddev = 0.15))
b2 = tf.Variable(tf.zeros([hidden_nodes2]))
y2 = tf.nn.relu(tf.matmul(y1, W2) + b2)

# Define the third hidden layer (layer3), initialized from a truncated normal distribution, with dropout
W3 = tf.Variable(tf.truncated_normal([hidden_nodes2, hidden_nodes3], stddev = 0.15))
b3 = tf.Variable(tf.zeros([hidden_nodes3]))
y3 = tf.nn.relu(tf.matmul(y2, W3) + b3)
y3 = tf.nn.dropout(y3, pkeep)

# Define the fourth layer (layer4): the softmax output over the two classes
W4 = tf.Variable(tf.truncated_normal([hidden_nodes3, 2], stddev = 0.15))
b4 = tf.Variable(tf.zeros([2]))
y4 = tf.nn.softmax(tf.matmul(y3, W4) + b4)

# Network output and label placeholder
y=y4
y_=tf.placeholder(tf.float32,[None,num_labels])

# Minimize the error with cross-entropy
cost=-tf.reduce_sum(y_*tf.log(y))
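# A numerically more stable alternative (an assumption, not the author's code)
# would keep the pre-softmax logits and use TensorFlow's fused op:
#   logits = tf.matmul(y3, W4) + b4
#   cost = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))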

# Use Adam as the optimizer
optimizer=tf.train.AdamOptimizer(learning_rate).minimize(cost)

# Test the model
correct_prediction=tf.equal(tf.argmax(y,1),tf.argmax(y_,1))

# Compute the accuracy
accuracy=tf.reduce_mean(tf.cast(correct_prediction,tf.float32))

# Initialize the variables
init=tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    
    stop_early=0  # counter for early stopping
    for epoch in range(training_epochs):
        for batch in range(int(n_samples/batch_size)):
            batch_x=input_x[batch*batch_size:(1+batch)*batch_size]
            batch_y=input_y[batch*batch_size:(1+batch)*batch_size]
            
            sess.run([optimizer],feed_dict={x:batch_x,y_:batch_y,pkeep:training_dropout})
        
        # Log progress every display_step epochs
        if (epoch)%display_step==0:
            train_accuracy,newCost=sess.run([accuracy,cost],feed_dict={x:input_x,y_:input_y,pkeep:training_dropout})
            valid_accuracy,valid_newCost=sess.run([accuracy,cost],feed_dict={x:input_x_valid,y_:input_y_valid,pkeep:1})
            print("Epoch:",epoch,"Acc=","{:.5f}".format(train_accuracy),
                      "Cost=","{:.5f}".format(newCost),
                      "Valid_Acc=","{:.5f}".format(valid_accuracy),
                      "Valid_Cost=","{:.5f}".format(valid_newCost))
            # Record the results
            accuracy_history.append(train_accuracy)
            cost_history.append(newCost)
            valid_accuracy_history.append(valid_accuracy)
            valid_cost_history.append(valid_newCost)
                
            # Early stopping: quit if validation accuracy has not improved for 15 logged epochs (checked after epoch 100)
            if valid_accuracy<max(valid_accuracy_history) and epoch>100:
                stop_early+=1
                if stop_early==15:
                    break
            else:
                stop_early=0
                    
    # Plot accuracy and loss
    f,(ax1,ax2)=plt.subplots(2,1,sharex=True,figsize=(10,4))
    
    ax1.plot(accuracy_history,color="b")
    ax1.plot(valid_accuracy_history,color="g")
    ax1.set_title("Accuracy",fontproperties=myfont)
    
    ax2.plot(cost_history,color="b")
    ax2.plot(valid_cost_history,color="g")
    ax2.set_title("Loss",fontproperties=myfont)
    
    plt.xlabel("Epoch",fontproperties=myfont)
    plt.show()
Epoch: 0 Acc= 0.71053 Cost= 291.99823 Valid_Acc= 0.83929 Valid_Cost= 35.09937
Epoch: 1 Acc= 0.79386 Cost= 249.30136 Valid_Acc= 0.85714 Valid_Cost= 28.89331
Epoch: 2 Acc= 0.88377 Cost= 194.89590 Valid_Acc= 0.94643 Valid_Cost= 20.11331
Epoch: 3 Acc= 0.93421 Cost= 138.11267 Valid_Acc= 0.96429 Valid_Cost= 11.85844
Epoch: 4 Acc= 0.94518 Cost= 95.28279 Valid_Acc= 0.98214 Valid_Cost= 6.43805

(Figure: output_46_1.png — training (blue) and validation (green) accuracy and loss curves)

As the number of training epochs increases, the model's accuracy keeps improving and its loss keeps shrinking.
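One step is still missing: the untouched test split (input_x_test, input_y_test) created earlier is never scored. A minimal sketch of that final evaluation (an addition, not the author's code), to be run inside the same tf.Session block after the training loop:

# Inside the tf.Session block, after training:
# test_accuracy = sess.run(accuracy, feed_dict={x: input_x_test, y_: input_y_test, pkeep: 1})
# print("Test_Acc=", "{:.5f}".format(test_accuracy))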
