How to check which operators (ops) TensorRT supports by default

  • ONNX operators supported by TensorRT 7.0

https://github.com/onnx/onnx-tensorrt/blob/84b5be1d6fc03564f2c0dba85a2ee75bad242c2e/operators.md

 

| Operator | Supported? | Restrictions |
| --- | --- | --- |
| Abs | Y | |
| Acos | Y | |
| Acosh | Y | |
| Add | Y | |
| And | Y | |
| ArgMax | Y | |
| ArgMin | Y | |
| Asin | Y | |
| Asinh | Y | |
| Atan | Y | |
| Atanh | Y | |
| AveragePool | Y | 2D or 3D Pooling only |
| BatchNormalization | Y | |
| BitShift | N | |
| Cast | Y | Cast is only supported for TRT types |
| Ceil | Y | |
| Clip | Y | min and max clip values must be an initializer |
| Compress | N | |
| Concat | Y | |
| ConcatFromSequence | N | |
| Constant | Y | |
| ConstantOfShape | Y | |
| Conv | Y | 2D or 3D convolutions only |
| ConvInteger | N | |
| ConvTranspose | Y | 2D or 3D deconvolutions only. Weights must be an initializer |
| Cos | Y | |
| Cosh | Y | |
| CumSum | N | |
| DepthToSpace | Y | |
| DequantizeLinear | Y | Scales and zero-point value must be initializers |
| Det | N | |
| Div | Y | |
| Dropout | N | |
| Elu | Y | |
| Equal | Y | |
| Erf | Y | |
| Exp | Y | |
| Expand | Y | |
| EyeLike | N | |
| Flatten | Y | |
| Floor | Y | |
| Gather | Y | |
| GatherElements | N | |
| GatherND | N | |
| Gemm | Y | |
| GlobalAveragePool | Y | |
| GlobalLpPool | N | |
| GlobalMaxPool | Y | |
| Greater | Y | |
| GRU | Y | |
| HardSigmoid | Y | |
| Hardmax | N | |
| Identity | Y | |
| If | N | |
| ImageScaler | Y | |
| InstanceNormalization | Y | Scales and biases must be an initializer |
| IsInf | N | |
| IsNaN | N | |
| LeakyRelu | Y | |
| Less | Y | |
| Log | Y | |
| LogSoftmax | Y | |
| Loop | Y | |
| LRN | Y | |
| LSTM | Y | |
| LpNormalization | N | |
| LpPool | N | |
| MatMul | Y | |
| MatMulInteger | N | |
| Max | Y | |
| MaxPool | Y | |
| MaxRoiPool | N | |
| MaxUnpool | N | |
| Mean | Y | |
| Min | Y | |
| Mod | N | |
| Mul | Y | |
| Multinomial | N | |
| Neg | Y | |
| NonMaxSuppression | N | |
| NonZero | N | |
| Not | Y | |
| OneHot | N | |
| Or | Y | |
| Pad | Y | Zero-padding on last 2 dimensions only |
| ParametricSoftplus | Y | |
| Pow | Y | |
| PRelu | Y | |
| QLinearConv | N | |
| QLinearMatMul | N | |
| QuantizeLinear | Y | Scales and zero-point value must be initializers |
| RNN | N | |
| RandomNormal | N | |
| RandomNormalLike | N | |
| RandomUniform | Y | |
| RandomUniformLike | Y | |
| Range | Y | Float inputs are only supported if start, limit and delta inputs are initializers |
| Reciprocal | N | |
| ReduceL1 | Y | |
| ReduceL2 | Y | |
| ReduceLogSum | Y | |
| ReduceLogSumExp | Y | |
| ReduceMax | Y | |
| ReduceMean | Y | |
| ReduceMin | Y | |
| ReduceProd | Y | |
| ReduceSum | Y | |
| ReduceSumSquare | Y | |
| Relu | Y | |
| Reshape | Y | |
| Resize | Y | Asymmetric coordinate transformation mode only. Nearest or Linear resizing mode only. "floor" mode only for resize_mode attribute. |
| ReverseSequence | N | |
| RNN | Y | |
| RoiAlign | N | |
| Round | N | |
| ScaledTanh | Y | |
| Scan | Y | |
| Scatter | N | |
| ScatterElements | N | |
| ScatterND | N | |
| Selu | Y | |
| SequenceAt | N | |
| SequenceConstruct | N | |
| SequenceEmpty | N | |
| SequenceErase | N | |
| SequenceInsert | N | |
| SequenceLength | N | |
| Shape | Y | |
| Shrink | N | |
| Sigmoid | Y | |
| Sign | N | |
| Sin | Y | |
| Sinh | Y | |
| Size | Y | |
| Slice | Y | Slice axes must be an initializer |
| Softmax | Y | |
| Softplus | Y | |
| Softsign | Y | |
| SpaceToDepth | Y | |
| Split | Y | |
| SplitToSequence | N | |
| Sqrt | Y | |
| Squeeze | Y | |
| StringNormalizer | N | |
| Sub | Y | |
| Sum | Y | |
| Tan | Y | |
| Tanh | Y | |
| TfIdfVectorizer | N | |
| ThresholdedRelu | Y | |
| Tile | Y | |
| TopK | Y | |
| Transpose | Y | |
| Unique | N | |
| Unsqueeze | Y | |
| Upsample | Y | |
| Where | Y | |
| Xor | N | |
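
A practical way to use a table like the one above is to dump the set of op types that actually appear in your ONNX model and compare them against the list. Below is a minimal sketch using the `onnx` Python package; the model path is a placeholder, and subgraphs inside If/Loop nodes are not traversed:

```python
import collections

import onnx

# Placeholder path -- point this at the ONNX model you want to deploy.
model = onnx.load("model.onnx")

# Count every op type that appears in the main graph.
# (Subgraphs nested inside If/Loop bodies are not walked here.)
op_counts = collections.Counter(node.op_type for node in model.graph.node)

# Any op marked "N" in the table above (or missing from it) will need a
# plugin, a graph rewrite, or a newer TensorRT version.
for op_type, count in sorted(op_counts.items()):
    print(f"{op_type}: {count}")
```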

  • ONNX operators supported by TensorRT 8.2

onnx-tensorrt/operators.md at main · onnx/onnx-tensorrt · GitHub
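
Operator support differs between TensorRT releases (compare the 7.0 table above with the 8.2 table below), so it is worth confirming which version is installed before consulting either list. A minimal check, assuming the TensorRT Python bindings (the `tensorrt` wheel) are installed:

```python
import tensorrt as trt  # TensorRT Python bindings

# The operator support tables are tied to the TensorRT / onnx-tensorrt
# release, so confirm which version this environment actually uses.
print("TensorRT version:", trt.__version__)
```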

| Operator | Supported | Supported Types | Restrictions |
| --- | --- | --- | --- |
| Abs | Y | FP32, FP16, INT32 | |
| Acos | Y | FP32, FP16 | |
| Acosh | Y | FP32, FP16 | |
| Add | Y | FP32, FP16, INT32 | |
| And | Y | BOOL | |
| ArgMax | Y | FP32, FP16 | |
| ArgMin | Y | FP32, FP16 | |
| Asin | Y | FP32, FP16 | |
| Asinh | Y | FP32, FP16 | |
| Atan | Y | FP32, FP16 | |
| Atanh | Y | FP32, FP16 | |
| AveragePool | Y | FP32, FP16, INT8, INT32 | 2D or 3D Pooling only |
| BatchNormalization | Y | FP32, FP16 | |
| BitShift | N | | |
| Cast | Y | FP32, FP16, INT32, INT8, BOOL | |
| Ceil | Y | FP32, FP16 | |
| Celu | Y | FP32, FP16 | |
| Clip | Y | FP32, FP16, INT8 | |
| Compress | N | | |
| Concat | Y | FP32, FP16, INT32, INT8, BOOL | |
| ConcatFromSequence | N | | |
| Constant | Y | FP32, FP16, INT32, INT8, BOOL | |
| ConstantOfShape | Y | FP32 | |
| Conv | Y | FP32, FP16, INT8 | 2D or 3D convolutions only. Weights W must be an initializer |
| ConvInteger | N | | |
| ConvTranspose | Y | FP32, FP16, INT8 | 2D or 3D deconvolutions only. Weights W must be an initializer |
| Cos | Y | FP32, FP16 | |
| Cosh | Y | FP32, FP16 | |
| CumSum | Y | FP32, FP16 | axis must be an initializer |
| DepthToSpace | Y | FP32, FP16, INT32 | |
| DequantizeLinear | Y | INT8 | x_zero_point must be zero |
| Det | N | | |
| Div | Y | FP32, FP16, INT32 | |
| Dropout | Y | FP32, FP16 | |
| DynamicQuantizeLinear | N | | |
| Einsum | Y | FP32, FP16 | Ellipsis and diagonal operations are not supported. Broadcasting between inputs is not supported |
| Elu | Y | FP32, FP16, INT8 | |
| Equal | Y | FP32, FP16, INT32 | |
| Erf | Y | FP32, FP16 | |
| Exp | Y | FP32, FP16 | |
| Expand | Y | FP32, FP16, INT32, BOOL | |
| EyeLike | Y | FP32, FP16, INT32, BOOL | |
| Flatten | Y | FP32, FP16, INT32, BOOL | |
| Floor | Y | FP32, FP16 | |
| Gather | Y | FP32, FP16, INT8, INT32 | |
| GatherElements | Y | FP32, FP16, INT8, INT32 | |
| GatherND | Y | FP32, FP16, INT8, INT32 | |
| Gemm | Y | FP32, FP16, INT8 | |
| GlobalAveragePool | Y | FP32, FP16, INT8 | |
| GlobalLpPool | Y | FP32, FP16, INT8 | |
| GlobalMaxPool | Y | FP32, FP16, INT8 | |
| Greater | Y | FP32, FP16, INT32 | |
| GreaterOrEqual | Y | FP32, FP16, INT32 | |
| GRU | Y | FP32, FP16 | For bidirectional GRUs, activation functions must be the same for both the forward and reverse pass |
| HardSigmoid | Y | FP32, FP16, INT8 | |
| Hardmax | N | | |
| Identity | Y | FP32, FP16, INT32, INT8, BOOL | |
| If | Y | FP32, FP16, INT32, BOOL | Output tensors of the two conditional branches must have broadcastable shapes, and must have different names |
| ImageScaler | Y | FP32, FP16 | |
| InstanceNormalization | Y | FP32, FP16 | Scales scale and biases B must be initializers. Input rank must be >=3 & <=5 |
| IsInf | N | | |
| IsNaN | Y | FP32, FP16, INT32 | |
| LeakyRelu | Y | FP32, FP16, INT8 | |
| Less | Y | FP32, FP16, INT32 | |
| LessOrEqual | Y | FP32, FP16, INT32 | |
| Log | Y | FP32, FP16 | |
| LogSoftmax | Y | FP32, FP16 | |
| Loop | Y | FP32, FP16, INT32, BOOL | |
| LRN | Y | FP32, FP16 | |
| LSTM | Y | FP32, FP16 | For bidirectional LSTMs, activation functions must be the same for both the forward and reverse pass |
| LpNormalization | Y | FP32, FP16 | |
| LpPool | Y | FP32, FP16, INT8 | |
| MatMul | Y | FP32, FP16 | |
| MatMulInteger | N | | |
| Max | Y | FP32, FP16, INT32 | |
| MaxPool | Y | FP32, FP16, INT8 | 2D or 3D pooling only. Indices output tensor unsupported |
| MaxRoiPool | N | | |
| MaxUnpool | N | | |
| Mean | Y | FP32, FP16, INT32 | |
| MeanVarianceNormalization | N | | |
| Min | Y | FP32, FP16, INT32 | |
| Mod | N | | |
| Mul | Y | FP32, FP16, INT32 | |
| Multinomial | N | | |
| Neg | Y | FP32, FP16, INT32 | |
| NegativeLogLikelihoodLoss | N | | |
| NonMaxSuppression | Y [EXPERIMENTAL] | FP32, FP16 | Inputs max_output_boxes_per_class, iou_threshold, and score_threshold must be initializers. Output has fixed shape and is padded to [max_output_boxes_per_class, 3]. |
| NonZero | N | | |
| Not | Y | BOOL | |
| OneHot | N | | |
| Or | Y | BOOL | |
| Pad | Y | FP32, FP16, INT8, INT32 | |
| ParametricSoftplus | Y | FP32, FP16, INT8 | |
| Pow | Y | FP32, FP16 | |
| PRelu | Y | FP32, FP16, INT8 | |
| QLinearConv | N | | |
| QLinearMatMul | N | | |
| QuantizeLinear | Y | FP32, FP16 | y_zero_point must be 0 |
| RandomNormal | N | | |
| RandomNormalLike | N | | |
| RandomUniform | Y | FP32, FP16 | seed value is ignored by TensorRT |
| RandomUniformLike | Y | FP32, FP16 | seed value is ignored by TensorRT |
| Range | Y | FP32, FP16, INT32 | Floating point inputs are only supported if start, limit, and delta inputs are initializers |
| Reciprocal | N | | |
| ReduceL1 | Y | FP32, FP16 | |
| ReduceL2 | Y | FP32, FP16 | |
| ReduceLogSum | Y | FP32, FP16 | |
| ReduceLogSumExp | Y | FP32, FP16 | |
| ReduceMax | Y | FP32, FP16 | |
| ReduceMean | Y | FP32, FP16 | |
| ReduceMin | Y | FP32, FP16 | |
| ReduceProd | Y | FP32, FP16 | |
| ReduceSum | Y | FP32, FP16 | |
| ReduceSumSquare | Y | FP32, FP16 | |
| Relu | Y | FP32, FP16, INT8 | |
| Reshape | Y | FP32, FP16, INT32, INT8, BOOL | |
| Resize | Y | FP32, FP16 | Supported resize transformation modes: half_pixel, pytorch_half_pixel, tf_half_pixel_for_nn, asymmetric, and align_corners. Supported resize modes: nearest, linear. Supported nearest modes: floor, ceil, round_prefer_floor, round_prefer_ceil |
| ReverseSequence | Y | FP32, FP16 | Dynamic input shapes are unsupported |
| RNN | Y | FP32, FP16 | For bidirectional RNNs, activation functions must be the same for both the forward and reverse pass |
| RoiAlign | N | | |
| Round | Y | FP32, FP16, INT8 | |
| ScaledTanh | Y | FP32, FP16, INT8 | |
| Scan | Y | FP32, FP16 | |
| Scatter | Y | FP32, FP16, INT8, INT32 | |
| ScatterElements | Y | FP32, FP16, INT8, INT32 | |
| ScatterND | Y | FP32, FP16, INT8, INT32 | |
| Selu | Y | FP32, FP16, INT8 | |
| SequenceAt | N | | |
| SequenceConstruct | N | | |
| SequenceEmpty | N | | |
| SequenceErase | N | | |
| SequenceInsert | N | | |
| SequenceLength | N | | |
| Shape | Y | FP32, FP16, INT32, INT8, BOOL | |
| Shrink | N | | |
| Sigmoid | Y | FP32, FP16, INT8 | |
| Sign | Y | FP32, FP16, INT8, INT32 | |
| Sin | Y | FP32, FP16 | |
| Sinh | Y | FP32, FP16 | |
| Size | Y | FP32, FP16, INT32, INT8, BOOL | |
| Slice | Y | FP32, FP16, INT32, INT8, BOOL | axes must be an initializer |
| Softmax | Y | FP32, FP16 | |
| SoftmaxCrossEntropyLoss | N | | |
| Softplus | Y | FP32, FP16, INT8 | |
| Softsign | Y | FP32, FP16, INT8 | |
| SpaceToDepth | Y | FP32, FP16, INT32 | |
| Split | Y | FP32, FP16, INT32, BOOL | |
| SplitToSequence | N | | |
| Sqrt | Y | FP32, FP16 | |
| Squeeze | Y | FP32, FP16, INT32, INT8, BOOL | axes must be an initializer |
| StringNormalizer | N | | |
| Sub | Y | FP32, FP16, INT32 | |
| Sum | Y | FP32, FP16, INT32 | |
| Tan | Y | FP32, FP16 | |
| Tanh | Y | FP32, FP16, INT8 | |
| TfIdfVectorizer | N | | |
| ThresholdedRelu | Y | FP32, FP16, INT8 | |
| Tile | Y | FP32, FP16, INT32, BOOL | |
| TopK | Y | FP32, FP16 | K input must be an initializer |
| Transpose | Y | FP32, FP16, INT32, INT8, BOOL | |
| Unique | N | | |
| Unsqueeze | Y | FP32, FP16, INT32, INT8, BOOL | axes must be a constant tensor |
| Upsample | Y | FP32, FP16 | |
| Where | Y | FP32, FP16, INT32, BOOL | |
| Xor | N | | |
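
Rather than checking the 8.2 table entry by entry, you can also let the ONNX parser itself report what it cannot handle: when parsing fails, the parser errors usually name the offending node or operator. A minimal sketch against the TensorRT 8.x Python API (explicit-batch network; the model path is a placeholder):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# Explicit-batch network, as required by the ONNX parser.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Placeholder path -- the model you want to check.
with open("model.onnx", "rb") as f:
    ok = parser.parse(f.read())

if ok:
    print("All operators in the model were accepted by the parser.")
else:
    # Each parser error typically describes the unsupported node/operator.
    for i in range(parser.num_errors):
        print(parser.get_error(i).desc())
```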
  • ONNX operator list as of 2022-04-05

https://github.com/onnx/onnx/blob/main/docs/Operators.md

| Operator | Since version |
| --- | --- |
| Abs | 13, 6, 1 |
| Acos | 7 |
| Acosh | 9 |
| Add | 14, 13, 7, 6, 1 |
| And | 7, 1 |
| ArgMax | 13, 12, 11, 1 |
| ArgMin | 13, 12, 11, 1 |
| Asin | 7 |
| Asinh | 9 |
| Atan | 7 |
| Atanh | 9 |
| AveragePool | 11, 10, 7, 1 |
| BatchNormalization | 15, 14, 9, 7, 6, 1 |
| BitShift | 11 |
| Cast | 13, 9, 6, 1 |
| Ceil | 13, 6, 1 |
| Clip | 13, 12, 11, 6, 1 |
| Compress | 11, 9 |
| Concat | 13, 11, 4, 1 |
| ConcatFromSequence | 11 |
| Constant | 13, 12, 11, 9, 1 |
| ConstantOfShape | 9 |
| Conv | 11, 1 |
| ConvInteger | 10 |
| ConvTranspose | 11, 1 |
| Cos | 7 |
| Cosh | 9 |
| CumSum | 14, 11 |
| DepthToSpace | 13, 11, 1 |
| DequantizeLinear | 13, 10 |
| Det | 11 |
| Div | 14, 13, 7, 6, 1 |
| Dropout | 13, 12, 10, 7, 6, 1 |
| Einsum | 12 |
| Elu | 6, 1 |
| Equal | 13, 11, 7, 1 |
| Erf | 13, 9 |
| Exp | 13, 6, 1 |
| Expand | 13, 8 |
| EyeLike | 9 |
| Flatten | 13, 11, 9, 1 |
| Floor | 13, 6, 1 |
| GRU | 14, 7, 3, 1 |
| Gather | 13, 11, 1 |
| GatherElements | 13, 11 |
| GatherND | 13, 12, 11 |
| Gemm | 13, 11, 9, 7, 6, 1 |
| GlobalAveragePool | 1 |
| GlobalLpPool | 2, 1 |
| GlobalMaxPool | 1 |
| Greater | 13, 9, 7, 1 |
| GridSample | 16 |
| HardSigmoid | 6, 1 |
| Hardmax | 13, 11, 1 |
| Identity | 16, 14, 13, 1 |
| If | 16, 13, 11, 1 |
| InstanceNormalization | 6, 1 |
| IsInf | 10 |
| IsNaN | 13, 9 |
| LRN | 13, 1 |
| LSTM | 14, 7, 1 |
| LeakyRelu | 16, 6, 1 |
| Less | 13, 9, 7, 1 |
| Log | 13, 6, 1 |
| Loop | 16, 13, 11, 1 |
| LpNormalization | 1 |
| LpPool | 11, 2, 1 |
| MatMul | 13, 9, 1 |
| MatMulInteger | 10 |
| Max | 13, 12, 8, 6, 1 |
| MaxPool | 12, 11, 10, 8, 1 |
| MaxRoiPool | 1 |
| MaxUnpool | 11, 9 |
| Mean | 13, 8, 6, 1 |
| Min | 13, 12, 8, 6, 1 |
| Mod | 13, 10 |
| Mul | 14, 13, 7, 6, 1 |
| Multinomial | 7 |
| Neg | 13, 6, 1 |
| NonMaxSuppression | 11, 10 |
| NonZero | 13, 9 |
| Not | 1 |
| OneHot | 11, 9 |
| Optional | 15 |
| OptionalGetElement | 15 |
| OptionalHasElement | 15 |
| Or | 7, 1 |
| PRelu | 16, 9, 7, 6, 1 |
| Pad | 13, 11, 2, 1 |
| Pow | 15, 13, 12, 7, 1 |
| QLinearConv | 10 |
| QLinearMatMul | 10 |
| QuantizeLinear | 13, 10 |
| RNN | 14, 7, 1 |
| RandomNormal | 1 |
| RandomNormalLike | 1 |
| RandomUniform | 1 |
| RandomUniformLike | 1 |
| Reciprocal | 13, 6, 1 |
| ReduceL1 | 13, 11, 1 |
| ReduceL2 | 13, 11, 1 |
| ReduceLogSum | 13, 11, 1 |
| ReduceLogSumExp | 13, 11, 1 |
| ReduceMax | 13, 12, 11, 1 |
| ReduceMean | 13, 11, 1 |
| ReduceMin | 13, 12, 11, 1 |
| ReduceProd | 13, 11, 1 |
| ReduceSum | 13, 11, 1 |
| ReduceSumSquare | 13, 11, 1 |
| Relu | 14, 13, 6, 1 |
| Reshape | 14, 13, 5, 1 |
| Resize | 13, 11, 10 |
| ReverseSequence | 10 |
| RoiAlign | 16, 10 |
| Round | 11 |
| Scan | 16, 11, 9, 8 |
| Scatter (deprecated) | 11, 9 |
| ScatterElements | 16, 13, 11 |
| ScatterND | 16, 13, 11 |
| Selu | 6, 1 |
| SequenceAt | 11 |
| SequenceConstruct | 11 |
| SequenceEmpty | 11 |
| SequenceErase | 11 |
| SequenceInsert | 11 |
| SequenceLength | 11 |
| Shape | 15, 13, 1 |
| Shrink | 9 |
| Sigmoid | 13, 6, 1 |
| Sign | 13, 9 |
| Sin | 7 |
| Sinh | 9 |
| Size | 13, 1 |
| Slice | 13, 11, 10, 1 |
| Softplus | 1 |
| Softsign | 1 |
| SpaceToDepth | 13, 1 |
| Split | 13, 11, 2, 1 |
| SplitToSequence | 11 |
| Sqrt | 13, 6, 1 |
| Squeeze | 13, 11, 1 |
| StringNormalizer | 10 |
| Sub | 14, 13, 7, 6, 1 |
| Sum | 13, 8, 6, 1 |
| Tan | 7 |
| Tanh | 13, 6, 1 |
| TfIdfVectorizer | 9 |
| ThresholdedRelu | 10 |
| Tile | 13, 6, 1 |
| TopK | 11, 10, 1 |
| Transpose | 13, 1 |
| Trilu | 14 |
| Unique | 11 |
| Unsqueeze | 13, 11, 1 |
| Upsample (deprecated) | 10, 9, 7 |
| Where | 16, 9 |
| Xor | 7, 1 |

| Function | Since version |
| --- | --- |
| Bernoulli | 15 |
| CastLike | 15 |
| Celu | 12 |
| DynamicQuantizeLinear | 11 |
| GreaterOrEqual | 16, 12 |
| HardSwish | 14 |
| LessOrEqual | 16, 12 |
| LogSoftmax | 13, 11, 1 |
| MeanVarianceNormalization | 13, 9 |
| NegativeLogLikelihoodLoss | 13, 12 |
| Range | 11 |
| Softmax | 13, 11, 1 |
| SoftmaxCrossEntropyLoss | 13, 12 |
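
The ONNX operator list above can also be queried programmatically instead of reading Operators.md, because every operator schema shipped with the `onnx` package records the opset version it was introduced in. A minimal sketch (the exact output depends on which `onnx` release is installed):

```python
import onnx.defs

# The highest opset version known to the installed onnx package.
print("Installed onnx opset:", onnx.defs.onnx_opset_version())

# get_schema returns the latest schema known to this onnx build; its
# since_version corresponds to the first number in each table row above.
for op_type in ["Abs", "Resize", "Trilu"]:
    schema = onnx.defs.get_schema(op_type)
    print(f"{op_type}: since version {schema.since_version}")
```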