TensorFlow
Author: 柳暗花明又一村ヾ(◍°∇°◍)ノ゙
1D CNN, 2D CNN, LeNet-5, and VGGNet-16 implemented with TensorFlow 2.x
The 1D CNN uses one-dimensional convolutions; the 2D CNN here is two convolution + pooling stages; LeNet-5 is two convolution + pooling stages followed by three fully connected layers; VGGNet-16 is divided into eight stages in total:

from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow import keras

def LeNet_CNNmodel():
    model = keras.models.Sequential([
        laye…

(Original post, 2020-06-18)
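The preview above is cut off mid-definition. A minimal runnable sketch of the same idea — two convolution + pooling stages followed by three fully connected layers — assuming MNIST-sized 28x28x1 inputs and the classic LeNet-5 layer widths (6/16 filters, 120/84 units), since the original code is truncated:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def LeNet_CNNmodel():
    # Two convolution + pooling stages, then three fully connected layers.
    model = keras.models.Sequential([
        layers.Conv2D(6, kernel_size=5, activation='relu',
                      input_shape=(28, 28, 1)),
        layers.MaxPooling2D(pool_size=2),
        layers.Conv2D(16, kernel_size=5, activation='relu'),
        layers.MaxPooling2D(pool_size=2),
        layers.Flatten(),
        layers.Dense(120, activation='relu'),
        layers.Dense(84, activation='relu'),
        layers.Dense(10),  # logits for 10 classes
    ])
    return model

model = LeNet_CNNmodel()
out = model(tf.zeros([4, 28, 28, 1]))  # forward pass on a dummy batch
```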
Download links for security datasets such as DARPA, NSL-KDD, and Adult
http://users.cis.fiu.edu/~lpeng/Datasets_detail.html

(Original post, 2020-06-01)
Network traffic classification with the Moore dataset
The Moore dataset is used for network traffic classification. Download: https://www.cl.cam.ac.uk/research/srg/netos/projects/archive/nprobe/data/papers/sigmetrics/index.html

(Original post, 2020-06-06)
Converting between tensors and NumPy arrays
This applies to TensorFlow 2.0.

NumPy to tensor:
import tensorflow as tf
tf.convert_to_tensor(numpy_data)

Tensor to NumPy:
tensor_data.numpy()

(Original post, 2020-04-28)
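A self-contained roundtrip check of the two conversions (the sample array is mine, for illustration):

```python
import numpy as np
import tensorflow as tf

numpy_data = np.array([[1., 2.], [3., 4.]], dtype=np.float32)

# NumPy -> tensor
tensor_data = tf.convert_to_tensor(numpy_data)

# tensor -> NumPy
back = tensor_data.numpy()
```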
TensorFlow convolutional neural network: CIFAR-100 hands-on
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, optimizers, Sequential

def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.
    y …

(Original post, 2020-03-19)
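The preview stops inside preprocess. A sketch of how such a preprocess function is typically wired into a tf.data pipeline — using random stand-in arrays with CIFAR-100 shapes instead of datasets.cifar100.load_data(), to avoid the download:

```python
import numpy as np
import tensorflow as tf

def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float32) / 255.   # scale pixels to [0, 1]
    y = tf.cast(y, dtype=tf.int32)
    return x, y

# Stand-in for datasets.cifar100.load_data(): 8 random 32x32 RGB images.
x = np.random.randint(0, 256, size=(8, 32, 32, 3), dtype=np.uint8)
y = np.random.randint(0, 100, size=(8, 1))

db = tf.data.Dataset.from_tensor_slices((x, y)).map(preprocess).batch(4)
xb, yb = next(iter(db))
```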
TensorFlow early stopping and dropout
Both techniques help prevent overfitting.

tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activ…

(Original post, 2020-03-18)
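The snippet is cut off before the early-stopping part. A hedged sketch of how dropout and early stopping are typically combined in tf.keras — the monitored metric, patience value, and the tiny random stand-in data are my assumptions, not the original's:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.2),          # randomly zeroes 20% of activations
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Stop training when the validation loss stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=2)

# Tiny random stand-in data, just to show the call shape.
x = np.random.rand(64, 28, 28).astype('float32')
y = np.random.randint(0, 10, size=(64,))
history = model.fit(x, y, validation_split=0.25, epochs=3,
                    callbacks=[early_stop], verbose=0)
```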
TensorFlow CIFAR-10 custom network hands-on
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import datasets, layers, optimizers

def preprocess(x, y):
    # scale [0, 255] to [-1, 1]
    x = 2 * tf.cast(x, dtype=tf.float…

(Original post, 2020-03-18)
TensorFlow model saving and loading
Three approaches: save/load weights, save/load the entire model, and saved_model.

1. Save the trained weights:
model.save_weights(filepath, overwrite=True, save_format=None)

Load the trained weights:
model.load_weights(filepath)
(Note: load_weights takes only the file path; overwrite and save_format are arguments of save_weights, not load_weights.) …

(Original post, 2020-03-17)
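A runnable roundtrip for the weights-only approach: save the weights, rebuild an identically shaped model, reload, and check that predictions match. The toy architecture and temp-file path are my choices for illustration:

```python
import os
import tempfile
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu', input_shape=(3,)),
    tf.keras.layers.Dense(2),
])
x = np.ones((1, 3), dtype='float32')
before = model(x).numpy()

# Save only the weights (TensorFlow checkpoint format).
path = os.path.join(tempfile.mkdtemp(), 'ckpt')
model.save_weights(path)

# Rebuild an identically shaped model and restore the weights.
clone = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu', input_shape=(3,)),
    tf.keras.layers.Dense(2),
])
clone.load_weights(path)   # load_weights takes just the path
after = clone(x).numpy()
```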
TensorFlow custom networks
Build custom networks with keras.Sequential, keras.layers.Layer, and keras.Model:

from tensorflow import keras
from tensorflow.keras import layers, optimizers
import tensorflow as tf

class MyDense(layers.Layer):
    def __init__(sel…

(Original post, 2020-03-17)
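The MyDense definition is truncated above. A minimal sketch of the pattern — a custom layer subclassing layers.Layer wrapped in a custom keras.Model — with layer widths (784/256/10) assumed for an MNIST-style classifier:

```python
import tensorflow as tf
from tensorflow.keras import layers

class MyDense(layers.Layer):
    """A bare-bones re-implementation of a Dense layer."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.kernel = self.add_weight('w', [in_dim, out_dim])
        self.bias = self.add_weight('b', [out_dim])

    def call(self, inputs, training=None):
        return inputs @ self.kernel + self.bias

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.fc1 = MyDense(784, 256)
        self.fc2 = MyDense(256, 10)

    def call(self, inputs, training=None):
        x = tf.nn.relu(self.fc1(inputs))
        return self.fc2(x)   # raw logits

model = MyModel()
out = model(tf.zeros([2, 784]))
```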
TensorFlow MNIST handwritten-digit hands-on (gradient descent)
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers, Sequential, metrics

# cast the data types
def preprocess(x, y):
    x = tf.cast(x, dtype=tf.float3…

(Original post, 2020-03-16)
TensorFlow 2D function optimization
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

def himmelblau(x):
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

x = np.arange(-6, 6, 0.1)
y = np.aran…

(Original post, 2020-03-16)
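The optimization loop is cut off in the preview. A sketch of minimizing the Himmelblau function with gradient descent via tf.GradientTape; the starting point [4., 0.], learning rate 0.01, and step count are my assumptions:

```python
import tensorflow as tf

def himmelblau(x):
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

# Plain gradient descent from a fixed starting point.
x = tf.constant([4., 0.])
for step in range(200):
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = himmelblau(x)
    grads = tape.gradient(y, x)
    x = x - 0.01 * grads   # descend along the negative gradient
# x should now sit at one of Himmelblau's four minima, where f(x) = 0.
```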
TensorFlow chain rule
x = tf.constant(1.)
w1 = tf.constant(2.)
b1 = tf.constant(1.)
w2 = tf.constant(2.)
b2 = tf.constant(1.)

with tf.GradientTape(persistent=True) as tape:
    tape.watch([w1, b1, w2, b2])
    y1 = x * w1 + b1
    y2 = y1 * w2 + …

(Original post, 2020-03-16)
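Completing the truncated snippet, a runnable version that verifies the chain rule numerically: dy2/dw1 should equal (dy2/dy1) * (dy1/dw1). The persistent tape allows the three separate gradient calls:

```python
import tensorflow as tf

x = tf.constant(1.)
w1 = tf.constant(2.)
b1 = tf.constant(1.)
w2 = tf.constant(2.)
b2 = tf.constant(1.)

with tf.GradientTape(persistent=True) as tape:
    tape.watch([w1, b1, w2, b2])
    y1 = x * w1 + b1      # y1 = 3
    y2 = y1 * w2 + b2     # y2 = 7

dy2_dy1 = tape.gradient(y2, [y1])[0]   # = w2 = 2
dy1_dw1 = tape.gradient(y1, [w1])[0]   # = x  = 1
dy2_dw1 = tape.gradient(y2, [w1])[0]   # chain rule: 2 * 1 = 2
del tape
```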
TensorFlow single- and multi-output perceptrons and their gradients
Single-output perceptron:

x = tf.random.normal([1, 3])
y = tf.constant([1])
w = tf.ones([3, 1])
b = tf.ones([1])

with tf.GradientTape() as tape:
    tape.watch([w, b])
    logits = x @ w + b
    loss = tf.reduce_mean(tf.losses.MSE(y, logi…

(Original post, 2020-03-16)
TensorFlow loss functions and their gradients
x = tf.random.normal([2, 4])
w = tf.random.normal([4, 3])
b = tf.zeros([3])
y = tf.constant([2, 0])

with tf.GradientTape() as tape:
    tape.watch([w, b])
    prob = tf.nn.softmax(x @ w + b, axis=1)
    loss = tf.reduce…

(Original post, 2020-03-15)
TensorFlow activation functions and their gradients
a = tf.linspace(-10., 10., 10)
with tf.GradientTape() as tape:
    tape.watch(a)
    y = tf.sigmoid(a)
grads = tape.gradient(y, [a])
grads
[<tf.Tensor: shape=(10,), dtype=float32, numpy=array([4.5395809e…

(Original post, 2020-03-15)
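The gradients computed by the tape can be checked against the closed-form sigmoid derivative, sigma'(a) = sigma(a) * (1 - sigma(a)):

```python
import numpy as np
import tensorflow as tf

a = tf.linspace(-10., 10., 10)
with tf.GradientTape() as tape:
    tape.watch(a)
    y = tf.sigmoid(a)
grads = tape.gradient(y, [a])[0]

# The sigmoid derivative has the closed form y * (1 - y).
expected = y * (1 - y)
```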
TensorFlow error calculation
y = tf.constant([1, 2, 3, 0, 2])
y = tf.one_hot(y, depth=4)
y
<tf.Tensor: shape=(5, 4), dtype=float32, numpy=
array([[0., 1., 0., 0.],
       [0., 0., 1., 0.],
       [0., 0., 0., 1.],
       [1., 0., 0., …

(Original post, 2020-03-15)
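Continuing the one-hot example, here is how the encoded labels feed the two usual error measures, MSE and cross-entropy (the random logits are my stand-in for a model's output):

```python
import tensorflow as tf

y = tf.constant([1, 2, 3, 0, 2])
y_onehot = tf.one_hot(y, depth=4)   # shape (5, 4)

# Fake logits for a 4-class problem.
logits = tf.random.normal([5, 4])

# MSE between one-hot targets and softmax probabilities.
mse = tf.reduce_mean(tf.losses.MSE(y_onehot, tf.nn.softmax(logits, axis=1)))

# Cross entropy straight from logits (numerically safer).
ce = tf.reduce_mean(
    tf.losses.categorical_crossentropy(y_onehot, logits, from_logits=True))
```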
TensorFlow dataset loading
keras.datasets ships several datasets: mnist, IMDB, cifar10, …

Example:
from tensorflow import keras
(x, y), _ = keras.datasets.mnist.load_data()
x.shape   # (60000, 28, 28)
y.shape   # (60000,)
(x, y), (x_test, y_test) = keras.datasets.mnist…

(Original post, 2020-03-15)
TensorFlow advanced tensor operations
Merging and splitting:
tf.concat    # concatenate along an existing axis
tf.split     # split
tf.stack     # stack along a new axis
tf.unstack   # the inverse of tf.stack

a = tf.ones([4, 35, 8])
b = tf.ones([1, 35, 8])
tf.concat([a, b], axis=0).shape   # TensorShape([5, 35, 8])
a = tf.ones([4, 35, 8])
b = tf.ones([4, 35, 8…

(Original post, 2020-03-14)
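A compact runnable tour of the four operations and the shapes they produce:

```python
import tensorflow as tf

a = tf.ones([4, 35, 8])
b = tf.ones([1, 35, 8])
c = tf.concat([a, b], axis=0)       # join along an existing axis -> [5, 35, 8]

a = tf.ones([4, 35, 8])
b = tf.ones([4, 35, 8])
s = tf.stack([a, b], axis=0)        # create a new axis -> [2, 4, 35, 8]
parts = tf.split(s, num_or_size_splits=2, axis=0)   # two [1, 4, 35, 8] pieces
u = tf.unstack(s, axis=0)           # list of two [4, 35, 8] tensors
```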
TensorFlow forward propagation (tensors)
out = relu{relu{relu[X@W1 + b1]@W2 + b2}@W3 + b3}
pred = argmax(out)
loss = MSE(out, label)

(Original post, 2020-03-11)
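The three-line recipe above can be sketched as runnable code; the layer widths (784 -> 256 -> 128 -> 10) and the dummy batch are assumptions for illustration:

```python
import tensorflow as tf

X = tf.random.normal([2, 784])
label = tf.one_hot(tf.constant([3, 7]), depth=10)

W1, b1 = tf.random.normal([784, 256], stddev=0.1), tf.zeros([256])
W2, b2 = tf.random.normal([256, 128], stddev=0.1), tf.zeros([128])
W3, b3 = tf.random.normal([128, 10], stddev=0.1), tf.zeros([10])

# out = relu{relu{relu[X@W1 + b1]@W2 + b2}@W3 + b3}
out = tf.nn.relu(X @ W1 + b1)
out = tf.nn.relu(out @ W2 + b2)
out = tf.nn.relu(out @ W3 + b3)

pred = tf.argmax(out, axis=1)
loss = tf.reduce_mean(tf.losses.MSE(label, out))
```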
TensorFlow math operations
+  -  *  /  ** (tf.pow, tf.square)
sqrt, //, %
tf.exp, tf.math.log
@, tf.matmul   # matrix multiplication (the linear layer)
reduce_mean / max / min / sum

Note: tf.math.log() is the natural logarithm, log_e().

b = tf.fill([2, 2], 2.)
a = tf.ones([2, 2])
a + b, a - b, a * b, a / b
(<tf.Tensor: sha…

(Original post, 2020-03-08)
TensorFlow dimension expansion: broadcast and tile
tf.broadcast_to:
b = tf.broadcast_to(tf.random.normal([4, 1, 1, 1]), [4, 32, 32, 3])
b.shape   # TensorShape([4, 32, 32, 3])
a = tf.ones([3, 4])
a1 = tf.broadcast_to(a, [2, 3, 4])   # TensorShape([2, 3, 4])

tf.tile:
a2 = tf.ex…

(Original post, 2020-03-08)
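The tf.tile path is truncated above; the usual pairing is expand_dims followed by tile, which copies data, whereas broadcast_to expands virtually. Both routes give the same result:

```python
import numpy as np
import tensorflow as tf

a = tf.ones([3, 4])

# broadcast_to: virtual expansion along a new leading axis.
a1 = tf.broadcast_to(a, [2, 3, 4])

# expand_dims + tile: explicit copies of the data.
a2 = tf.expand_dims(a, axis=0)   # [1, 3, 4]
a2 = tf.tile(a2, [2, 1, 1])      # [2, 3, 4]
```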
TensorFlow dimension transformations
Reshaping:
a = tf.random.normal([4, 28, 28, 3])
a.shape, a.ndim   # (TensorShape([4, 28, 28, 3]), 4)
tf.reshape(a, [4, 784, 3]).shape   # TensorShape([4, 784, 3])
tf.reshape(a, [4, -1, 3]).shape    # TensorShape([4, 784, 3…

(Original post, 2020-03-08)
TensorFlow tensor indexing and slicing
This post covers indexing and slicing of Tensor data.

import tensorflow as tf
a = tf.ones([1, 5, 5, 3])
# basic indexing
a[0][0]
a[0][0][2]
a = tf.random.normal([4, 28, 28, 5])
a[1].shape
a[1, 2].shape
a[1, 2, 3]
# start:end slices
a = tf.range(10)
a[-1:]
a[-2:]
a[:…

(Original post, 2020-03-07)
TensorFlow data types
tf.Tensor supports: int, float, double, bool, string.

import tensorflow as tf
tf.constant(1)    # <tf.Tensor: shape=(), dtype=int32, numpy=1>
tf.constant(1.)   # <tf.Tensor: shape=(), dtype=flo…

(Original post, 2020-03-07)
Linear regression with gradient descent in Python
Note that the step size (learning rate) must be small, otherwise the computation diverges and the error blows up.

# -*- coding: utf-8 -*-
#import tensorflow as tf
import numpy
from matplotlib import pyplot

# loss
def compute_error(b, w, points):
    totalerror = 0
    for i in range(0, len…

(Original post, 2020-03-07)
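The preview cuts off the rest of the script. A self-contained sketch of the same procedure on synthetic data — the generated points, learning rate, and iteration count are my choices, not the original's:

```python
import numpy as np

# Synthetic points along y = 2x + 1 with a little noise.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 10, size=100)
points = np.stack([xs, 2 * xs + 1 + rng.normal(0, 0.1, size=100)], axis=1)

def compute_error(b, w, points):
    # Mean squared error of y = w*x + b over all points.
    x, y = points[:, 0], points[:, 1]
    return np.mean((y - (w * x + b)) ** 2)

def step_gradient(b, w, points, lr):
    # One gradient-descent update of b and w on the MSE loss.
    x, y = points[:, 0], points[:, 1]
    n = len(points)
    grad_b = -2.0 / n * np.sum(y - (w * x + b))
    grad_w = -2.0 / n * np.sum(x * (y - (w * x + b)))
    return b - lr * grad_b, w - lr * grad_w

b, w = 0.0, 0.0
lr = 0.005   # keep the step size small, or the error blows up
for _ in range(2000):
    b, w = step_gradient(b, w, points, lr)
```

With these settings the fit recovers roughly w = 2 and b = 1; doubling lr far enough makes the updates oscillate and diverge, which is the pitfall the note above warns about.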