Installing TensorFlow 2.1 (CUDA 10.1) on Windows 10

This post documents, step by step, how to install the GPU build of PyTorch 1.2 and then the GPU build of TensorFlow 2.1 on Windows 10. An h5py header/library version mismatch hit on import was resolved by uninstalling and reinstalling h5py.

Install the GPU build of PyTorch 1.2 first

My CUDA driver is 10.1, so I started with PyTorch:
activate pytorch1.2
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
From the default channel this was painfully slow, so I gave up on it.

Use the Tsinghua mirror instead:
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch/
conda config --set show_channel_urls yes

conda create -n torch1.2 python=3.6
activate torch1.2

conda install pytorch=1.2.0 torchvision=0.4.0 cudatoolkit=10.0

Success!
---

Then install the GPU build of TensorFlow 2.1

Create a virtual environment: conda create -n tf2.1 python=3.7
conda install tensorflow-gpu=2.1.0

The TensorFlow 2.1 install completed normally, but running
import tensorflow as tf
at the Python prompt raised an h5py mismatch: header version 1.10.4 does not match library version 1.10.5.
The fix is to reinstall h5py:
pip uninstall h5py
pip install h5py

Start Python and verify:
>>> import tensorflow as tf
>>> tf.__version__
'2.1.0'
>>> tf.test.is_gpu_available()

Final result: tf.test.is_gpu_available() returns True.
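For reference, TF 2.x also provides tf.config.list_physical_devices, which avoids the deprecation warning tf.test.is_gpu_available() prints on newer releases. A minimal check, assuming TensorFlow 2.1 or later:

import tensorflow as tf

print(tf.__version__)                          # e.g. '2.1.0'
print(tf.config.list_physical_devices('GPU'))  # non-empty list when a GPU is visible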


---

This first post covers the basic principles of YOLOv3.
Output after convolution
Repeated convolutions through the basenet (Darknet-53) produce a feature map, and we make predictions from this feature map.
For example, a 416×416×3 input is convolved down to a 13×13×depth feature map.
Every cell of this feature map has a corresponding receptive field (simply put: the set of pixels in the original image that influence the current cell's value). So we assume each cell can predict one bounding box, and that the object framed by that bounding box has its center inside the current cell.
You expect each cell of the feature map to predict an object through one of its bounding boxes if the center of the object falls in the receptive field of that cell. (The receptive field is the region of the input image visible to the cell. Refer to the link on convolutional neural networks for further clarification.)

For example, the red cell in the figure above is responsible for predicting the dog.
The feature map has size N×N×Depth, where Depth = B × (5 + C).

B is the number of bounding boxes each cell predicts. 5 = 4 + 1: four values parameterize the bounding box, and one is the objectness score, the probability that the box contains an object. C is the number of classes to predict.
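For example, with B = 3 boxes per cell and C = 80 classes (COCO), Depth = 3 × (5 + 80) = 255, so the 13×13 feature map has shape 13×13×255.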
How the predicted box coordinates are computed
Anchor Boxes
Anchor boxes are a set of values clustered in advance; think of them as the widths and heights of the most common real-world objects.
In YOLOv3 every cell of the feature map predicts 3 bounding boxes, but only the one with the highest IOU against the ground-truth box is used for the prediction.
Prediction

bx, by, bw, bh are the x, y center coordinates, width and height of our prediction. tx, ty, tw, th is what the network outputs. cx and cy are the top-left coordinates of the grid cell. pw and ph are the anchor dimensions for the box.
bx, by, bw, bh are the predicted values: the center coordinates, width and height of the predicted bounding box.
tx, ty, tw, th are the values read along the depth dimension of the convolved feature map.
cx, cy are the top-left coordinates of the current cell.
pw, ph are the anchor values obtained in advance by clustering.
The decoding formulas (from the figure above) are:
bx = σ(tx) + cx
by = σ(ty) + cy
bw = pw · e^(tw)
bh = ph · e^(th)
σ is the sigmoid function, which keeps the value between 0 and 1 so that the predicted center falls inside the current cell. For example, if the cell's top-left corner is (6,6) and the center offset comes out as (0.4,0.7), the predicted bounding-box center is (6.4,6.7). If the offset came out as (1.2,0.7), the center would land at (7.2,6.7), no longer in the current cell, which contradicts our assumption (the current cell contains the center of the object it is responsible for predicting).
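A minimal sketch of these formulas in PyTorch (the helper name decode_box is mine, not from the original post):

import torch

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    # sigmoid keeps the center offset in (0, 1), i.e. inside the current cell
    bx = torch.sigmoid(tx) + cx
    by = torch.sigmoid(ty) + cy
    # anchor dimensions are scaled by the exponential of the raw outputs
    bw = pw * torch.exp(tw)
    bh = ph * torch.exp(th)
    return bx, by, bw, bh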

Objectness score
Also squashed to 0-1 by a sigmoid; it is the probability that the box contains an object.
Class Confidences
The probability that the object belongs to each class. YOLOv3 no longer uses softmax here, because softmax is exclusive by design: if an object belongs to class1 it cannot belong to class2. In reality an object can be both a "woman" and a "person".
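A tiny illustration of the difference (my own sketch, not from the original post): softmax scores compete and sum to 1, while independent sigmoids allow one box to score high on several classes.

import torch

logits = torch.tensor([2.0, 1.5, -3.0])
print(torch.softmax(logits, dim=0))  # sums to 1: classes are mutually exclusive
print(torch.sigmoid(logits))         # independent per-class probabilities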
Multi-scale detection
YOLOv3 borrows the idea of feature pyramids and introduces multi-scale detection, which improves performance on small objects.
Take a 416×416 input: a series of convolutions yields a 13×13 feature map that is semantically rich but low-resolution. Upsampling it produces 26×26 and 52×52 feature maps that keep most of the semantics at higher resolution, so small objects are detected better.

For a 416×416 input the network predicts ((52 × 52) + (26 × 26) + (13 × 13)) × 3 = 10647 bounding boxes. These are sorted by objectness score, low-scoring boxes are filtered out, and NMS progressively determines the final boxes.
For an explanation of NMS see https://blog.csdn.net/zchang81/article/details/70211851.
In short: each round, keep the highest-scoring box, discard the boxes that are too similar to it, and repeat until the final set of boxes remains.
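A minimal NMS sketch in plain NumPy (my own illustration, not the linked article's code):

import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    # boxes: (N, 4) array of x1, y1, x2, y2; scores: (N,)
    order = scores.argsort()[::-1]   # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)               # the current best box survives
        # IoU of box i against all remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        rest = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
               (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + rest - inter)
        order = order[1:][iou <= iou_thresh]   # drop boxes too similar to box i
    return keep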

Configuration file
The configuration file yolov3.cfg defines the network structure, for example:

[convolutional]
batch_normalize=1
filters=64
size=3
stride=2
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=32
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[shortcut]
from=-3
activation=linear


The configuration file describes the model's structure.
YOLOv3 layer types
YOLOv3 has the following block types:
Convolutional
Shortcut
Upsample
Route
YOLO
Convolutional
[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky
Shortcut
[shortcut]
from=-3
activation=linear
Similar to ResNet, this is used to deepen the network. The configuration above means the shortcut layer's output is the element-wise sum of the outputs of the previous layer and of the layer three back (see the sketch below).
For a detailed explanation of ResNet skip connections see https://zhuanlan.zhihu.com/p/28124810
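In the forward pass this is just an element-wise add over equal-shaped feature maps; a sketch (the outputs list and index i are hypothetical names for per-layer feature maps):

# [shortcut] with from=-3: add the output of the previous layer to the
# output of the layer three back (shapes must match)
out = outputs[i - 1] + outputs[i - 3]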
Upsample
[upsample]
stride=2
Bilinear interpolation turns an N×N feature map into a (stride×N)×(stride×N) one. This mimics a feature pyramid by generating multi-scale feature maps, which strengthens small-object detection.
Route
[route]
layers = -4

[route]
layers = -1, 61
Using the configuration above as an example:
When layers has a single value, the route layer outputs the feature map of the layer 4 positions before it.
When layers has two values, the route layer outputs the feature maps of the previous layer and of layer 61 concatenated along the depth dimension (e.g. 3×3×100 and 3×3×200 become 3×3×300).
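In PyTorch terms this depth-wise concatenation is torch.cat along the channel dimension:

import torch

a = torch.randn(1, 100, 3, 3)    # N, C, H, W
b = torch.randn(1, 200, 3, 3)
out = torch.cat((a, b), dim=1)   # shape (1, 300, 3, 3)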
yolo
[yolo]
mask = 0,1,2
anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
classes=80
num=9
jitter=.3
ignore_thresh = .5
truth_thresh = 1
random=1
The yolo layer makes the predictions. anchors lists 9 anchors, clustered in advance, representing the most likely anchor shapes.
mask selects which of those anchors this layer uses; e.g. mask=0,1,2 selects the anchors 10,13 16,30 33,23. As explained in the principles section, each cell predicts 3 bounding boxes, at three scales, for 9 in total.
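A small sketch of how mask indexes into anchors (my own illustration; variable names are hypothetical):

anchors_str = "10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326"
mask = [0, 1, 2]
vals = [int(v) for v in anchors_str.replace(" ", "").split(",")]
anchors = [(vals[j], vals[j + 1]) for j in range(0, len(vals), 2)]
selected = [anchors[j] for j in mask]   # [(10, 13), (16, 30), (33, 23)]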
Net
[net]
# Testing
batch=1
subdivisions=1
# Training
# batch=64
# subdivisions=16
width=320
height=320
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation=1.5
exposure=1.5
hue=.1
This block defines the model input, batch size, and other training parameters.
Now let's start writing code.
Parsing the configuration file
In this step we parse the configuration file and store each block's settings in a dict.
def parse_cfg(cfgfile):
    """
    Takes a configuration file

    Returns a list of blocks. Each block describes a block in the neural
    network to be built. A block is represented as a dictionary in the list.
    """
    file = open(cfgfile, 'r')
    lines = file.read().split('\n')                # store the lines in a list
    lines = [x for x in lines if len(x) > 0]       # get rid of the empty lines
    lines = [x for x in lines if x[0] != '#']      # get rid of comments
    lines = [x.rstrip().lstrip() for x in lines]   # get rid of fringe whitespace

    block = {}
    blocks = []

    for line in lines:
        if line[0] == "[":               # this marks the start of a new block
            # if block is not empty, it is storing values of the previous block
            if len(block) != 0:
                blocks.append(block)     # add it to the blocks list
                block = {}               # re-init the block
            block["type"] = line[1:-1].rstrip()
        else:
            key, value = line.split("=")
            block[key.rstrip()] = value.lstrip()
    blocks.append(block)

    return blocks
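A quick sanity check of the parser (assuming yolov3.cfg sits in the working directory):

blocks = parse_cfg("yolov3.cfg")
print(blocks[0]["type"])   # 'net'
print(len(blocks))         # total number of blocks parsed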

Creating the layers with PyTorch
We create the layers one by one.
def create_modules(blocks):
    # captures the information about the input and pre-processing
    net_info = blocks[0]
    module_list = nn.ModuleList()
    # the convolution needs the kernel depth: the kernel size comes from the
    # cfg, and the depth is the depth of the previous layer's output
    prev_filters = 3
    # stores the output feature-map depth of every layer
    output_filters = []

    # index is the position of the current layer in the network
    for index, x in enumerate(blocks[1:]):
        # build each layer here
        ...
        module_list.append(module)
        prev_filters = filters
        output_filters.append(filters)

    return (net_info, module_list)

The convolutional layer
[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky
Besides the convolution itself this block also contains batch norm and a leaky activation. Batch normalization has pretty much become standard; it mitigates vanishing gradients (gradients shrinking as they are multiplied during backpropagation). 'leaky' is the Leaky ReLU activation function.
So we use nn.Sequential():
module = nn.Sequential()
module.add_module("conv_{0}".format(index), conv)
module.add_module("batch_norm_{0}".format(index), bn)
module.add_module("leaky_{0}".format(index), activn)
Full code for creating the convolutional layer
This uses the Python built-in enumerate, which pairs every element of a list with an index, producing a new sequence:

seasons = ['Spring', 'Summer', 'Fall', 'Winter']
list(enumerate(seasons))
# [(0, 'Spring'), (1, 'Summer'), (2, 'Fall'), (3, 'Winter')]

list(enumerate(seasons, start=1))  # index starts from 1
# [(1, 'Spring'), (2, 'Summer'), (3, 'Fall'), (4, 'Winter')]
Creating the convolutional layer
# index is the position of the current layer in the network
for index, x in enumerate(blocks[1:]):
    module = nn.Sequential()

    # check the type of block
    # create a new module for the block
    # append to module_list

    if (x["type"] == "convolutional"):
        # get the info about the layer
        activation = x["activation"]
        try:
            batch_normalize = int(x["batch_normalize"])
            bias = False
        except:
            batch_normalize = 0
            bias = True

        filters = int(x["filters"])
        padding = int(x["pad"])
        kernel_size = int(x["size"])
        stride = int(x["stride"])

        if padding:
            pad = (kernel_size - 1) // 2
        else:
            pad = 0

        # add the convolutional layer
        # prev_filters is the depth of the previous layer's feature map,
        # e.g. with 64 kernels in the previous layer the output is m*n*64
        conv = nn.Conv2d(prev_filters, filters, kernel_size, stride, pad, bias = bias)
        module.add_module("conv_{0}".format(index), conv)

        # add the batch norm layer
        if batch_normalize:
            bn = nn.BatchNorm2d(filters)
            module.add_module("batch_norm_{0}".format(index), bn)

        # check the activation:
        # it is either linear or a Leaky ReLU for YOLO
        if activation == "leaky":
            activn = nn.LeakyReLU(0.1, inplace = True)
            module.add_module("leaky_{0}".format(index), activn)

The upsample layer
    # if it's an upsampling layer, we use bilinear upsampling
    elif (x["type"] == "upsample"):
        stride = int(x["stride"])
        upsample = nn.Upsample(scale_factor = 2, mode = "bilinear")
        module.add_module("upsample_{}".format(index), upsample)
The route layer
[route]
layers = -4

[route]
layers = -1, 61
First parse the configuration, then concatenate the feature maps of the referenced layers as the output.
    # if it is a route layer
    elif (x["type"] == "route"):
        x["layers"] = x["layers"].split(',')
        # start of a route
        start = int(x["layers"][0])
        # end, if there exists one
        try:
            end = int(x["layers"][1])
        except:
            end = 0
        # positive annotation
        if start > 0:
            start = start - index   # convert start to an offset relative to the current layer
        if end > 0:
            end = end - index       # convert end to an offset relative to the current layer
        route = EmptyLayer()
        module.add_module("route_{0}".format(index), route)
        # a route layer concatenates layers before the current one,
        # so a positive end index is meaningless here
        if end < 0:
            filters = output_filters[index + start] + output_filters[index + end]
        else:
            filters = output_filters[index + start]
Here we define a custom EmptyLayer:
class EmptyLayer(nn.Module):
    def __init__(self):
        super(EmptyLayer, self).__init__()
EmptyLayer is defined purely for code simplicity: to define a custom layer in PyTorch you write a class that inherits from nn.Module and implement its forward method.
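The shortcut layer can be handled the same way: the actual addition happens in the forward pass, so an EmptyLayer placeholder is enough. A sketch following the route-layer pattern above:

    # if it is a shortcut layer (skip connection)
    elif (x["type"] == "shortcut"):
        shortcut = EmptyLayer()
        module.add_module("shortcut_{}".format(index), shortcut)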
For how to define a custom layer, see the link below.
https://pytorch.org/tutorials/beginner/examples_nn/two_layer_net_module.html
import torch

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we instantiate two nn.Linear modules and assign them as
        member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        In the forward function we accept a Tensor of input data and we must return
        a Tensor of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Tensors.
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Construct our model by instantiating the class defined above
model = TwoLayerNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters of the two
# nn.Linear modules which are members of the model.
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()