Graph optimizer options

Dictionary of experimental optimizer options to configure. Valid keys:

  • layout_optimizer: Optimize tensor layouts, e.g. try to use the NCHW layout on GPUs, which is faster.
  • constant_folding: Fold constants. Statically infer the value of tensors when possible, and materialize the result using constants.
  • shape_optimization: Simplify computations made on shapes.
  • remapping: Remap subgraphs onto more efficient implementations.
  • arithmetic_optimization: Simplify arithmetic ops with common sub-expression elimination and arithmetic simplification.
  • dependency_optimization: Control dependency optimizations. Remove redundant control dependencies, which may enable other optimizations. This optimizer is also essential for pruning Identity and NoOp nodes.
  • loop_optimization: Loop optimizations.
  • function_optimization: Function optimizations and inlining.
  • debug_stripper: Strips debug-related nodes from the graph.
  • disable_model_pruning: Disable removal of unnecessary ops from the graph.
  • scoped_allocator_optimization: Try to allocate some independent Op outputs contiguously in order to merge or eliminate downstream Ops.
  • pin_to_host_optimization: Force small ops onto the CPU.
  • implementation_selector: Enable the swap of kernel implementations based on the device placement.
  • auto_mixed_precision: Change certain float32 ops to float16 on Volta GPUs and above. Without the use of loss scaling, this can cause numerical underflow (see keras.mixed_precision.experimental.LossScaleOptimizer).
  • disable_meta_optimizer: Disable the entire meta optimizer.
  • min_graph_nodes: The minimum number of nodes in a graph to optimize. For smaller graphs, optimization is skipped.
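These options are passed as a dictionary to tf.config.optimizer.set_experimental_options. A minimal usage sketch follows; the specific values chosen are illustrative assumptions, not recommendations:

```python
import tensorflow as tf

# Enable a few of the options listed above. Keys omitted from the
# dict keep their default behavior.
tf.config.optimizer.set_experimental_options({
    "layout_optimizer": True,    # prefer NCHW layouts on GPU
    "constant_folding": True,    # materialize statically known tensors
    "debug_stripper": True,      # strip debug-related nodes
    "min_graph_nodes": 4,        # skip optimizing very small graphs (illustrative value)
})

# Inspect the current configuration; only options that were
# explicitly set are returned.
print(tf.config.optimizer.get_experimental_options())
```

Note that if auto_mixed_precision is enabled, loss scaling (e.g. via keras.mixed_precision.experimental.LossScaleOptimizer, as mentioned above) is needed to avoid numerical underflow in the float16 gradients.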