Learning the TensorRT API

1. Convolution

IConvolutionLayer *conv1 = network->addConvolutionNd(ITensor &input, int32_t nbOutputMaps, DimsHW kernelSize,
        Weights kernelWeights, Weights biasWeights)

Parameters: input, the input tensor, of type ITensor;
nbOutputMaps, the number of output channels;
kernelSize, the convolution kernel size;
kernelWeights, the kernel weights, of type Weights;
biasWeights, the bias, of type Weights. The bias may be empty; for zero bias write: Weights emptywts{DataType::kFLOAT, nullptr, 0};

Then set the stride and padding:

 conv1->setStrideNd(DimsHW{1, 1});

 conv1->setPaddingNd(DimsHW{0, 0});
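
Putting it together, a minimal sketch of a complete convolution layer, assuming network is an INetworkDefinition* and input is an ITensor*; the convW/convB buffers here are placeholders for weights loaded elsewhere:

    // Placeholder buffers: 64 output maps, 3 input channels, 3x3 kernel
    float convW[64 * 3 * 3 * 3];
    float convB[64];
    Weights wts{DataType::kFLOAT, convW, 64 * 3 * 3 * 3};
    Weights bias{DataType::kFLOAT, convB, 64};
    IConvolutionLayer *conv1 = network->addConvolutionNd(*input, 64, DimsHW{3, 3}, wts, bias);
    conv1->setStrideNd(DimsHW{1, 1});
    conv1->setPaddingNd(DimsHW{1, 1}); // "same" padding for a 3x3 kernel
    ITensor *convOut = conv1->getOutput(0);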

2. Activation

network->addActivation(input, act_type);

Parameters: input, of type ITensor;

      act_type, the activation function type, an enum value of type ActivationType

enum class ActivationType : int32_t
{
    kRELU = 0,             //!< Rectified linear activation.
    kSIGMOID = 1,          //!< Sigmoid activation.
    kTANH = 2,             //!< TanH activation.
    kLEAKY_RELU = 3,       //!< LeakyRelu activation: x>=0 ? x : alpha * x.
    kELU = 4,              //!< Elu activation: x>=0 ? x : alpha * (exp(x) - 1).
    kSELU = 5,             //!< Selu activation: x>0 ? beta * x : beta * (alpha*exp(x) - alpha)
    kSOFTSIGN = 6,         //!< Softsign activation: x / (1+|x|)
    kSOFTPLUS = 7,         //!< Parametric softplus activation: alpha*log(exp(beta*x)+1)
    kCLIP = 8,             //!< Clip activation: max(alpha, min(beta, x))
    kHARD_SIGMOID = 9,     //!< Hard sigmoid activation: max(0, min(1, alpha*x+beta))
    kSCALED_TANH = 10,     //!< Scaled tanh activation: alpha*tanh(beta*x)
    kTHRESHOLDED_RELU = 11 //!< Thresholded ReLU activation: x>alpha ? x : 0
};
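
For example, a ReLU applied to the convolution output above (a sketch; conv1 is the layer from section 1):

    IActivationLayer *relu1 = network->addActivation(*conv1->getOutput(0), ActivationType::kRELU);
    ITensor *reluOut = relu1->getOutput(0);
    // Parameterized activations (kLEAKY_RELU, kCLIP, ...) take their alpha/beta
    // via relu1->setAlpha(...) and relu1->setBeta(...).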

3. Pooling

    IPoolingLayer *poolLayer = network->addPoolingNd(input, p_type, kernel);

    poolLayer->setStrideNd(stride);

    poolLayer->setPaddingNd(padding);
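
A sketch of a 2x2 max pool with stride 2; p_type is a value of the PoolingType enum (e.g. kMAX or kAVERAGE), and input is assumed to be an ITensor*:

    IPoolingLayer *pool1 = network->addPoolingNd(*input, PoolingType::kMAX, DimsHW{2, 2});
    pool1->setStrideNd(DimsHW{2, 2});
    pool1->setPaddingNd(DimsHW{0, 0});
    ITensor *poolOut = pool1->getOutput(0);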

4. Element-wise operations (usually an op between two ITensors, such as add, subtract, multiply, divide)

network->addElementWise(input1, input2, ElementWiseOperation::kSUM); // element-wise sum of the two tensors

Note: the two tensors must have exactly the same dimensions!

enum class ElementWiseOperation : int32_t
{
    kSUM = 0,       //!< Sum of the two elements.
    kPROD = 1,      //!< Product of the two elements.
    kMAX = 2,       //!< Maximum of the two elements.
    kMIN = 3,       //!< Minimum of the two elements.
    kSUB = 4,       //!< Substract the second element from the first.
    kDIV = 5,       //!< Divide the first element by the second.
    kPOW = 6,       //!< The first element to the power of the second element.
    kFLOOR_DIV = 7, //!< Floor division of the first element by the second.
    kAND = 8,       //!< Logical AND of two elements.
    kOR = 9,        //!< Logical OR of two elements.
    kXOR = 10,      //!< Logical XOR of two elements.
    kEQUAL = 11,    //!< Check if two elements are equal.
    kGREATER = 12,  //!< Check if element in first tensor is greater than corresponding element in second tensor.
    kLESS = 13      //!< Check if element in first tensor is less than corresponding element in second tensor.
};
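
A sketch of an element-wise sum (e.g. a residual connection), assuming x1 and x2 are ITensor* of identical shape:

    IElementWiseLayer *sum = network->addElementWise(*x1, *x2, ElementWiseOperation::kSUM);
    ITensor *sumOut = sum->getOutput(0);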

5. Softmax

network->addSoftMax(input);
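
A sketch, assuming input is an ITensor*. If you do not set an axis explicitly, TensorRT picks a default; setAxes takes a bitmask selecting the axis to normalize over (here axis 1):

    ISoftMaxLayer *sm = network->addSoftMax(*input);
    sm->setAxes(1 << 1); // bitmask: apply softmax over axis 1
    ITensor *prob = sm->getOutput(0);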

6. Fully connected layer

network->addFullyConnected(input, out_channel, weight, bias);
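
A sketch with 10 outputs; fcW/fcB are placeholder weight buffers, and addFullyConnected expects an input with three or more dimensions (e.g. CHW):

    float fcW[10 * 256]; // placeholder: 10 outputs x 256 inputs
    float fcB[10];
    Weights fcWts{DataType::kFLOAT, fcW, 10 * 256};
    Weights fcBias{DataType::kFLOAT, fcB, 10};
    IFullyConnectedLayer *fc1 = network->addFullyConnected(*input, 10, fcWts, fcBias);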

7. Concat (nn.concat())

addConcatenation always takes an array of ITensor* plus a count, so even two tensors go through an array:

// concat two tensors

    ITensor *pairTensors[] = {x1, x2};

    IConcatenationLayer *concat2 = network->addConcatenation(pairTensors, 2);

// concat multiple tensors

    ITensor *inputTensors[] = {x1, x2, x3};

    IConcatenationLayer *concat_layer = network->addConcatenation(inputTensors, 3);
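
By default concatenation happens along the channel axis for typical CHW inputs; use setAxis to pick a different one (a sketch):

    concat_layer->setAxis(1); // concatenate along axis 1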

8. Reshape

    IShuffleLayer *x_shuffle = network->addShuffle(*input); // input shape: (2, 256)

    x_shuffle->setReshapeDimensions(Dims2{1, 512}); // reshape to [1, 512]

    x_shuffle->setSecondTranspose(Permutation{1, 0}); // then transpose to [512, 1], permuting the dimension order

 

9. Matrix multiplication

network->addMatrixMultiply(input_m, MatrixOp, input_n, MatrixOp)

Parameters: input_m, tensor M, with its MatrixOp saying whether M is transposed;

            input_n, tensor N, with its MatrixOp saying whether N is transposed

        MatrixOp is an enum value of type MatrixOperation: kNONE does nothing, kTRANSPOSE transposes the operand

Note: the last dimension of M must equal the first dimension of N, the same contraction rule as a vector dot product.

For example, if M has shape (3, 5) and N has shape (5, 7), then M*N has shape (3, 7).
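
A sketch, assuming m and n are ITensor* with shapes (3, 5) and (5, 7):

    IMatrixMultiplyLayer *mm = network->addMatrixMultiply(
        *m, MatrixOperation::kNONE, *n, MatrixOperation::kNONE);
    ITensor *mmOut = mm->getOutput(0); // shape (3, 7)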

10. Printing the shape of an ITensor

    Dims dims = input_tensor->getDimensions();
    for (int i = 0; i < dims.nbDims; i++)
    {
        std::cout << dims.d[i] << ",";
    }
    std::cout << std::endl;

11. Slice (similar to tf.split() / np.split())

network->addSlice(input, start, size, stride)

Parameters: input, of type ITensor

           start, the start offset in each dimension

            size, the size of the slice in each dimension

            stride, the step between elements taken in each dimension (1 takes consecutive elements)

For example, if the input has shape (1, 768) and we want to split it into three equal (1, 256) slices (toDims is a user helper, sketched after this code):

    std::vector<int64_t> offset0 = {0, 0};

    std::vector<int64_t> offset1 = {0, 256};

    std::vector<int64_t> offset2 = {0, 2 * 256};

    auto size   = toDims({1, 256});

    auto stride = toDims({1, 1});

    auto x_0 = network->addSlice(*input, toDims(offset0), size, stride); // [1,256] = columns 0~256

    auto x_1 = network->addSlice(*input, toDims(offset1), size, stride); // [1,256] = columns 256~512

    auto x_2 = network->addSlice(*input, toDims(offset2), size, stride); // [1,256] = columns 512~768
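
Note that toDims is not part of the TensorRT API; it is a small user-side helper that packs a list of values into an nvinfer1::Dims. A possible sketch:

    // Hypothetical helper: copy values into an nvinfer1::Dims.
    Dims toDims(const std::vector<int64_t> &v)
    {
        Dims d{};
        d.nbDims = static_cast<int32_t>(v.size());
        for (size_t i = 0; i < v.size(); i++)
            d.d[i] = static_cast<int32_t>(v[i]); // Dims::d is int32_t in TensorRT 7/8
        return d;
    }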

                        

12. Converting Weights into a TensorRT constant

network->addConstant(dimension, weights)

Parameters: dimension, of type Dims;

            weights, of type Weights, whose volume must match the given dimension;

The output is a constant layer of the same size as the weights, whose output tensor can then participate in ITensor ops inside TensorRT.
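
A sketch that turns a 1x256 buffer into a constant tensor (constBuf is a placeholder for real data):

    float constBuf[256]; // placeholder data
    Weights cw{DataType::kFLOAT, constBuf, 256};
    IConstantLayer *constLayer = network->addConstant(Dims2{1, 256}, cw);
    ITensor *constTensor = constLayer->getOutput(0); // usable in other ops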

13. Upsample

To be continued.
