
[Original] toxic comment classification dataset

https://huggingface.co/transformers/installation.html
https://leemeng.tw/attack_on_bert_transfer_learning_in_nlp.html
https://zhuanlan.zhihu.com/p/49271699
https://zhuanlan.zhihu.com/p/51413773
https://tech.meituan.com/2019/11/14/nlp-bert-practice.html
htt…

2020-08-27 20:20:52 1129

[Original] pytorch LSTM_regression

https://zhuanlan.zhihu.com/p/94757947

2020-08-26 18:35:56 431

[Original] pytorch torchtext.data.Field

https://blog.csdn.net/zwqjoy/article/details/86490098

2020-08-26 18:35:37 2546

[Original] pytorch FC_classification

import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
# torch.manual_seed(1)  # reproducible
# make fake data
n_data = torch.ones(100, 2)
x0 = torch.normal(2*n_data, 1)  # class0 x data (tensor), shape=(100, 2)
y0 = torch.zer…

2020-08-26 18:35:06 291

[Original] pytorch FC_regression

import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
# torch.manual_seed(1)  # reproducible
# generate data
x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)  # x data (tensor), shape=(100, 1) -> (batch, dim)
y = x.pow(2) + 0.2*torc…

2020-08-26 18:34:46 144

[Original] pytorch RNN_regression

import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
# show data
# steps = np.linspace(0, np.pi*2, 100, dtype=np.float32)  # 100 points in [0, 2*pi]; float32 converts to tensor conveniently
# x_np = np.sin(steps)  # sine curve
# y_np = np.cos(steps)  # cosine curve…

2020-08-26 18:34:23 200

[Original] pytorch torch.stack

Concept / API:
torch.stack(tensors, dim=0, out=None) → Tensor
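
As a short runnable sketch (assuming a standard PyTorch install; the tensor values are made up):

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# torch.stack adds a new dimension; every tensor in the list must have the same shape
s0 = torch.stack([a, b], dim=0)  # shape (2, 3): the tensors become rows
s1 = torch.stack([a, b], dim=1)  # shape (3, 2): the tensors become columns
print(s0.shape, s1.shape)
```

Unlike torch.cat, which joins along an existing dimension, stack always creates a new one.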

2020-08-25 20:21:30 137

[Original] pytorch torchvision.transforms.Normalize

API:
CLASS torchvision.transforms.Normalize(mean, std, inplace=False)
The input must have shape (C, H, W); it is usually produced by the ToTensor() preprocessing step. Each channel is normalized as input[channel] = (input[channel] - mean[channel]) / std[channel].
Parameter — Description
mean (sequence) — Sequence of means for each channel.
std…
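
Since the entry is truncated, here is the per-channel formula written out by hand with plain tensors (a sketch of what Normalize computes, not torchvision's implementation; the image and statistics are made up):

```python
import torch

# Normalize's formula, applied manually: out[c] = (in[c] - mean[c]) / std[c]
x = torch.full((3, 2, 2), 0.5)                       # fake (C, H, W) image, all pixels 0.5
mean = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)   # reshaped to broadcast over H and W
std = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)
y = (x - mean) / std
print(float(y.abs().max()))  # every pixel maps to (0.5 - 0.5) / 0.5 = 0.0
```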

2020-08-25 20:21:10 201

[Original] pytorch torchvision.transforms.ToTensor

API:
Converts a PIL.Image or numpy.ndarray of shape (H, W, C) into a torch.FloatTensor of shape (C, H, W) with values in [0.0, 1.0].
Pixel values are scaled from [0, 255] to [0.0, 1.0].
The PIL Image must be in one of the modes (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1).
The np.ndarray must have dtype = np.uint8.
If either condition is not met, the pixel values are not rescaled (no scaling).
CLASS tor…
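
The conversion can be sketched by hand with a made-up numpy image (this mimics ToTensor's behavior; it is not the torchvision code itself):

```python
import numpy as np
import torch

# (H, W, C) uint8 in [0, 255]  ->  (C, H, W) float32 in [0.0, 1.0]
img = np.full((4, 4, 3), 255, dtype=np.uint8)        # fake all-white 4x4 RGB image
t = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
print(t.shape, float(t.max()))  # torch.Size([3, 4, 4]) 1.0
```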

2020-08-25 20:20:49 336

[Original] pytorch torchvision.transforms.Resize

API:
CLASS torchvision.transforms.Resize(size, interpolation=2)
Parameter — Description
size (sequence or int) — if size is a sequence (h, w), the output has exactly that size; if size is an int, the image is scaled proportionally
interpolation (int, optional) — interpolation method; bilinear by default
Reference: https://pytorch.org/docs/stable/torchvision/transforms.htm…

2020-08-25 20:20:20 5050

[Original] pytorch torchvision.transforms.CenterCrop

Usage: crop an image about its center.
import PIL.Image as Image
import torchvision.transforms
# read the image
image = Image.open("./test.png")
crop_obj = torchvision.transforms.CenterCrop((224, 224))  # a CenterCrop object that crops the image to 224*224 around the center
image = crop_obj(i…
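
Without an image file at hand, the same center-crop arithmetic can be sketched with plain tensor slicing (a hand-rolled sketch, not the torchvision class; the shapes are made up):

```python
import torch

# center crop on a (C, H, W) tensor: take the middle th x tw window
x = torch.arange(36.0).view(1, 6, 6)
th, tw = 2, 2                      # target height and width
top = (x.shape[1] - th) // 2       # (6 - 2) // 2 = 2
left = (x.shape[2] - tw) // 2
crop = x[:, top:top + th, left:left + tw]
print(crop.shape)  # torch.Size([1, 2, 2])
```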

2020-08-25 20:19:57 1997

[Original] pytorch torchvision.transforms.Compose

Usage:

```python
transforms.Compose([
    transforms.CenterCrop(10),
    transforms.ToTensor(),
])
```

API: chains several transforms together.

```python
CLASS torchvision.transforms.Compose(transforms)
```

Parameter — Description
transforms (list of Transform objects) — list of transforms to compose

2020-08-25 20:19:38 363

[Original] TORCHVISION MODELS

Create models with random weights:
import torchvision.models as models
resnet18 = models.resnet18()
alexnet = models.alexnet()
vgg16 = models.vgg16()
squeezenet = models.squeezenet1_0()
densenet = models.densenet161()
inception = models.inception_v3()
googlenet = models.goo…

2020-08-24 20:15:21 337

[Original] pytorch utils.model_zoo

Usage / API:
torch.utils.model_zoo.load_url(url, model_dir=None, map_location=None, progress=True, check_hash=False, file_name=None)
Loads a serialized torch object from a URL. If the object already exists in model_dir, it is loaded from there. model_dir lives under <hub_dir>/checkpoints, and hub_dir can be obtained via get_dir().
Parameter — Description
url (string…

2020-08-24 20:13:49 1212

[Original] pytorch utils.data.DataLoader

1. Usage: batches the dataset automatically.
API:
CLASS torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False, drop_last=False, timeout=0, worker_init_fn=None, multiprocessing_context=None, gen…
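
A minimal runnable sketch of the automatic batching, with a made-up TensorDataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# a toy dataset: 10 samples, 3 features each
x = torch.arange(30.0).view(10, 3)
y = torch.arange(10)
loader = DataLoader(TensorDataset(x, y), batch_size=4, shuffle=False)

sizes = [xb.shape[0] for xb, yb in loader]
print(sizes)  # [4, 4, 2] — the last batch is short because drop_last=False
```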

2020-08-24 20:13:36 173

[Original] pytorch WHAT IS TORCH.NN REALLY?

Building a neural network by hand:
import torch
import torch.nn as nn
from pathlib import Path
import requests
import pickle
import gzip
from matplotlib import pyplot
import numpy as np
import math
# download the data
DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"
PATH.mkdir(parents=True…

2020-08-24 20:13:27 246

[Original] A Comprehensive Introduction to Torchtext

1. Usage:
x = torch.tensor([1, 2, 3, 4])
torch.unsqueeze(x, 0)
# tensor([[1, 2, 3, 4]])
torch.unsqueeze(x, 1)
# tensor([[1], [2], [3], [4]])
2. API: inserts a new dimension at the given position; the new tensor shares the same data as the original.
Parameter — Description…

2020-08-24 20:13:12 228

[Original] LaTeX derivative symbols

Usage:
1. Derivatives
Description — LaTeX
differential — $\mathrm{d} y$
first order — $\frac{\mathrm{d} y}{\mathrm{d} x}$
n-th order — $\frac{\mathrm{d}^{n} y}{\mathrm{d} x^{n}}$
2. Partial derivatives…

2020-08-24 20:12:54 4538

[Original] LaTeX matrices

1. Entering a matrix: use \begin{array}{ccc} and \end{array}:

$$\begin{array}{ccc}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1\\
\end{array}$$
…

2020-08-24 20:12:45 1778

[Original] pytorch torch.nn.RNN

Reference: https://www.cnblogs.com/dhName/p/11760610.html

2020-08-23 11:13:56 461

[Original] LEARNING PYTORCH WITH EXAMPLES

Take the following network as an example, with a sum-of-squared-error loss:
- the gradient of the loss w.r.t. y_pred is 2*(y_pred - y)
- h is the product of x and w; backpropagating through it, the gradient of x is grad_h.dot(w.T)
- likewise, the gradient of w is x.T.dot(grad_h)
- for ReLU, the backward pass is grad_h = grad_h_relu.copy(); grad_h[h < 0] = 0
https://www.geek-share.com/detail/2722319286.html
https://www.cnblogs.com/alan-blog-TsingHua/p/9981522.html
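
The chain-rule steps above can be sketched end to end in NumPy (a minimal one-layer-plus-ReLU example; the shapes and data are made up):

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(4, 3)       # batch of 4 inputs, 3 features
w = np.random.randn(3, 2)       # weights
y = np.random.randn(4, 2)       # targets

h = x.dot(w)                    # forward: linear layer
h_relu = np.maximum(h, 0)       # forward: ReLU
loss = np.square(h_relu - y).sum()

grad_h_relu = 2.0 * (h_relu - y)    # d(loss)/d(h_relu) = 2*(y_pred - y)
grad_h = grad_h_relu.copy()
grad_h[h < 0] = 0                   # ReLU backward: zero where the input was negative
grad_w = x.T.dot(grad_h)            # gradient w.r.t. w
grad_x = grad_h.dot(w.T)            # gradient w.r.t. x
print(grad_w.shape, grad_x.shape)   # (3, 2) (4, 3)
```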

2020-08-23 11:13:35 164

[Original] docker pytorch

Reference: https://zhuanlan.zhihu.com/p/76464450

2020-08-23 11:12:54 114

[Original] pytorch optim.SGD

1. Usage:
import torch
import torch.nn as nn
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
optimizer.zero_grad()
loss_fn(model(input), target).backward()
optimizer.step()
Concept: the simplest update rule is Stochastic Gradient Descent (SGD): weight = weight -…
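
The snippet leaves model and loss_fn undefined; a self-contained sketch with a toy nn.Linear model and made-up data:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(2, 1)                           # toy model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(4, 2)
target = torch.randn(4, 1)

before = model.weight.detach().clone()
optimizer.zero_grad()                            # clear stale gradients
nn.MSELoss()(model(x), target).backward()        # compute new gradients
optimizer.step()                                 # weight <- weight - lr * grad
changed = bool((model.weight.detach() != before).any())
print(changed)  # True — one SGD step moved the weights
```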

2020-08-23 11:12:40 1928

[Original] pytorch Building a neural network

The standard procedure for training a neural network is:
1. define a network that holds the weights
2. iterate over all the input data
3. run the data through the network
4. compute the loss
5. backpropagate the gradients
6. update the weights
Reference: https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#sphx-glr-beginner-blitz-neural-networks-tutorial-py

2020-08-23 11:12:30 137

[Original] pytorch nn.MSELoss

1. Usage:
import torch
import torch.nn as nn
loss = nn.MSELoss()
input = torch.tensor([1.0, 1.0], requires_grad=True)
target = torch.tensor([7.0, 9.0])
output = loss(input, target)  # 50: mean of (1-7)^2 and (1-9)^2
output.backward()
2. Concept / API: mean squared error (squared L2 norm)
CLASS tor…

2020-08-23 11:11:58 453

[Original] pytorch Tensor autograd functions

Attribute/Method — Description
grad — None by default; after backward() is called, it holds the computed gradient
requires_grad — when True, the tensor needs gradients and is added to the graph
is_leaf — every tensor with requires_grad=False is a leaf tensor; with requires_grad=True, only user-created tensors (those not produced by an operation) are leaf tensors
backward()
detach() — returns a new tensor, detached from the current gra...

2020-08-23 11:11:45 124

[Original] pytorch nn.Module.zero_grad

Sets the gradients of all model parameters to 0.
API:
zero_grad() → None
Reference: https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.zero_grad

2020-08-23 11:11:32 494

[Original] pytorch Tensor.backward

1. Usage:
import torch
import torch.nn as nn
# 1. an all-ones gradient vector
x = torch.tensor([1.0, 3.0], requires_grad=True)  # tensor([1., 3.], requires_grad=True)
y = x*x  # tensor([1., 9.], grad_fn=<MulBackward0>)
y.backward(torch.ones(2))  # 1*x[0] + 1*x[1]
x.grad
## equivalent to…
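
A self-contained sketch of the same backward call, checking the gradient it produces:

```python
import torch

x = torch.tensor([1.0, 3.0], requires_grad=True)
y = x * x                        # elementwise square
y.backward(torch.ones(2))        # gradient of 1*y[0] + 1*y[1] w.r.t. x
print(x.grad)                    # tensor([2., 6.]) — d(x^2)/dx = 2x
```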

2020-08-23 11:11:09 470

[Original] pytorch nn.Module.parameters

Returns an iterator over the model's parameters.
1. Usage:
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
API:
parameters(recurse: bool = True) → Iterator[t…
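
A runnable sketch with a small made-up Sequential model:

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 4), nn.Linear(4, 2))
shapes = [tuple(p.size()) for p in model.parameters()]
print(shapes)  # [(4, 3), (4,), (2, 4), (2,)] — weight then bias for each Linear
```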

2020-08-23 11:10:48 1232

[Original] pytorch Tensor

torch.Tensor is shorthand for torch.FloatTensor.
1. Create a tensor from a list or a numpy array:
torch.tensor([[1., -1.], [1., -1.]])
torch.tensor(np.array([[1, 2, 3], [4, 5, 6]]))
2. Specify the tensor's dtype and device:
torch.zeros([2, 4], dtype=torch.int32)
cuda0 = torch.device('cuda:0')
torch.ones([2, 4], dtyp…

2020-08-22 17:42:21 155

[Original] torch nn.MaxPool2d

1. Usage:
import torch
import torch.nn as nn
m = nn.MaxPool2d(2)
input = torch.randn(1, 1, 4, 4)
output = m(input)
API:
1. The MaxPool2d class:
CLASS torch.nn.MaxPool2d(kernel_size: Union[T, Tuple[T, ...]], stride: Optional[Union[T, Tuple[T, ...]]] = None, padding: Union…
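
A sketch with a deterministic input, so the pooled values are visible (the input values are made up):

```python
import torch
import torch.nn as nn

m = nn.MaxPool2d(2)                         # 2x2 window, stride 2
x = torch.arange(16.0).view(1, 1, 4, 4)     # batch, channel, height, width
y = m(x)
print(y.shape)      # torch.Size([1, 1, 2, 2])
print(y.flatten())  # tensor([ 5.,  7., 13., 15.]) — the max of each 2x2 block
```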

2020-08-22 17:42:09 712

[Original] pytorch nn.ReLU

Applies the rectified linear unit function element-wise.
1. Usage:
import torch
import torch.nn as nn
m = nn.ReLU()
input = torch.tensor([-1, 1])  # tensor([-1, 1])
output = m(input)  # tensor([0, 1])
2. API — the ReLU class:
CLASS torch.nn.ReLU(inplace: bool = False)

2020-08-22 17:41:56 1328

[Original] pytorch torchtext

Total number of elements (the product of all dimensions).
1. Usage:
import torch
import torch.nn as nn
a = torch.randn(1, 2, 3)
torch.numel(a)  # 1*2*3 = 6
a = torch.zeros(4, 4)
torch.numel(a)  # 4*4 = 16
API:
torch.numel(input) → int
Parameter — Description
input (Tensor) — the input tensor.
Reference: https://pytorch.org/docs/sta…

2020-08-22 17:41:46 144

[Original] Language modeling tutorial in torchtext

Changes a tensor's shape. The returned tensor must contain the same data and the same number of elements.
import torch
import torch.nn as nn
x = torch.randn(4, 4)
x.size()  # torch.Size([4, 4])
y = x.view(16)
y.size()  # torch.Size([16])
z = x.view(-1, 8)
z.size()  # torch.Size([2, 8]); -1 means this dimension is inferred from the others: 2 = 16/8
a = torch…

2020-08-22 17:41:22 157

[Original] pytorch nn.Linear

Used to build the fully connected layers of a network. A fully connected layer's input and output are 2-D tensors, typically of shape [batch_size, size], whereas a convolutional layer's input and output are 4-D tensors of shape [batch, channel, height, width].
import torch
import torch.nn as nn
m = nn.Linear(10, 20)  # (in_features, out_features)
input = torch.randn(1, 10)  # (batch, features)
output = m(input)

2020-08-22 17:41:08 144

[Original] pytorch nn.Conv2d

import torch
import torch.nn as nn
x = torch.randn(1, 1, 32, 32)  # batch, channel, height, width
print(x.shape)  # torch.Size([1, 1, 32, 32])
conv = nn.Conv2d(1, 1, (3, 3))  # in_channels, out_channels, kernel_size, stride
print(conv)  # Conv2d(1, 1, kernel_…
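
A runnable sketch showing the resulting output shape (no padding, stride 1):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 32, 32)        # batch, channel, height, width
conv = nn.Conv2d(1, 1, (3, 3))       # in_channels=1, out_channels=1, 3x3 kernel
y = conv(x)
# with no padding, each spatial dimension shrinks by kernel_size - 1
print(y.shape)  # torch.Size([1, 1, 30, 30])
```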

2020-08-22 17:40:57 259

[Original] pytorch nn.Conv1d

One-dimensional convolution; nn.Conv1d is typically used for text data.
1. Usage:
import torch
import torch.nn as nn
x = torch.randn(1, 1, 32)  # batch, channel, width
print(x.shape)  # torch.Size([1, 1, 32])
conv = nn.Conv1d(1, 1, 3)  # in_channels, out_channels, kernel_size
print(conv)  # Conv1d(1, 1, kernel_si…

2020-08-22 17:40:42 197 1

[Original] pytorch AUTOGRAD

Setting requires_grad=True on a torch.Tensor instance records every operation performed on it. After a series of computations, calling .backward() computes the gradients automatically; the gradient is accumulated into the .grad attribute. The .detach() method stops history tracking, so future computations are no longer recorded. The same effect can be achieved with torch.no_grad():, which is very useful for evaluating a model, since no gradients are needed then.
1.1. Creation:
import torch
x = torch.ones(2, 2, requires_g…
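
A minimal sketch of the record / backward / no_grad behavior described above (the values are made up):

```python
import torch

x = torch.ones(2, 2, requires_grad=True)   # operations on x are now recorded
y = (x * 3).sum()                          # y = 3 * sum(x) = 12
y.backward()                               # gradients accumulate into x.grad
print(x.grad)                              # all 3s: dy/dx = 3 for every element

with torch.no_grad():                      # nothing inside this block is tracked
    z = x * 2
print(z.requires_grad)                     # False
```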

2020-08-22 17:40:28 107

[Original] WHAT IS PYTORCH

PyTorch is:
1) NumPy that can use the GPU
2) a deep learning framework
1. Tensors
Tensors are similar to NumPy's ndarrays; the difference is that a Tensor can run on the GPU.
Create an uninitialized 5*3 matrix:
from __future__ import print_function
import torch
x = torch.empty(5, 3)
print(x)
# tensor([[ 5.0375e+28,…

2020-08-22 17:36:32 118

[Original] mysql set

1. SET
SET @name = 43;
SET @total_tax = (SELECT SUM(tax) FROM taxable_transactions);

2020-08-20 23:49:56 246

opencv_yolo3.part1.rar

A combination of opencv and yolo3. Because it needs weight files and other large assets, it is packaged in parts; this is part 1.

2019-08-20

opencv_yolo3.part2.rar

A combination of opencv and yolo3. Because it needs weight files and other large assets, it is packaged in parts; this is part 2.

2019-08-20

Connecting an IDE directly to a hadoop cluster

hadoop can be connected to the cluster directly from an IDE, so tests can be run right inside the IDE

2019-03-20

Testing hadoop locally on Windows

Test hadoop files locally on Windows, which makes fast development and iteration convenient

2019-03-20

Submitting hadoop jobs from Linux

Uploading a hadoop job from linux; contains three files in total: a mapper, a reducer, and a jobsubmitter

2019-03-20

tesseract package

A toolkit for tesseract development, containing the tesseract installer, font-training tools, and some CAPTCHA samples

2018-10-10

java8 installer jdk-jre

The java8 development environment

2018-10-10

Text-mining resources

https://catalog.data.gov/dataset/consumer-complaint-database

2018-04-18

utf-8 unicode code table

Every utf-8 unicode code point can be looked up in the table, which is handy for text processing.

2018-03-26

linux tmux original reference manual

The tmux reference manual in English

2017-06-14

iris dataset

The dataset used in the blog

2017-02-26

2016 latest administrative divisions of China

Source: Statistical Design and Management Department, National Bureau of Statistics. Published 2016-08-09 11:28 at http://www.stats.gov.cn/tjsj/tjbz/xzqhdm/201608/t20160809_1386477.html Unzip password: http://blog.csdn.net/claroja Leave me a comment or contact me on QQ: 63183535 with any questions. Made by hand!

2016-12-12
