A problem when using paddle.summary

Today, while using paddle.summary to print a model summary, I ran into the following error:

---------------------------------------------------------------------------ValueError                                Traceback (most recent call last)<ipython-input-155-ec604926d5fd> in <module>
      5 recall_model=DNNRecallLayer(sparse_feature_number=600000, sparse_feature_dim=9, fc_sizes=fc_sizes)
      6 
----> 7 param_info = paddle.summary(recall_model,input_size=[(1,),(4,),(3,),(1,)])
      8 print(param_info)
      9 
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model_summary.py in summary(net, input_size, dtypes)
    147 
    148     _input_size = _check_input(_input_size)
--> 149     result, params_info = summary_string(net, _input_size, dtypes)
    150     print(result)
    151 
</opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/decorator.py:decorator-gen-342> in summary_string(model, input_size, dtypes)
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/base.py in _decorate_function(func, *args, **kwargs)
    313         def _decorate_function(func, *args, **kwargs):
    314             with self:
--> 315                 return func(*args, **kwargs)
    316 
    317         @decorator.decorator
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/hapi/model_summary.py in summary_string(model, input_size, dtypes)
    274 
    275     # make a forward pass
--> 276     model(*x)
    277 
    278     # remove these hooks
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py in __call__(self, *inputs, **kwargs)
    889                 self._built = True
    890 
--> 891             outputs = self.forward(*inputs, **kwargs)
    892 
    893             for forward_post_hook in self._forward_post_hooks.values():
<ipython-input-128-9651e9a4aa73> in forward(self, batch_size, user_sparse_inputs, mov_sparse_inputs, label_input)
     60         user_sparse_embed_seq = []
     61         for s_input in user_sparse_inputs:
---> 62             emb = self.embedding(s_input)
     63             emb = paddle.reshape(emb, shape=[-1, self.sparse_feature_dim])
     64             user_sparse_embed_seq.append(emb)
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py in __call__(self, *inputs, **kwargs)
    889                 self._built = True
    890 
--> 891             outputs = self.forward(*inputs, **kwargs)
    892 
    893             for forward_post_hook in self._forward_post_hooks.values():
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/common.py in forward(self, x)
   1288             padding_idx=self._padding_idx,
   1289             sparse=self._sparse,
-> 1290             name=self._name)
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/functional/input.py in embedding(x, weight, padding_idx, sparse, name)
    200         return core.ops.lookup_table_v2(
    201             weight, x, 'is_sparse', sparse, 'is_distributed', False,
--> 202             'remote_prefetch', False, 'padding_idx', padding_idx)
    203     else:
    204         helper = LayerHelper('embedding', **locals())
ValueError: (InvalidArgument) Tensor holds the wrong type, it holds float, but desires to be int64_t.
  [Hint: Expected valid == true, but received valid:0 != true:1.] (at /paddle/paddle/fluid/framework/tensor_impl.h:33)
  [operator < lookup_table_v2 > error]

My code was:

# number of training epochs
epochs = 3
# define the model
fc_sizes = [512, 256, 128, 32]
recall_model = DNNRecallLayer(sparse_feature_number=600000, sparse_feature_dim=9, fc_sizes=fc_sizes)
param_info = paddle.summary(recall_model, input_size=[(2,), (4,), (3,), (1,)])
print(param_info)

Solution

# number of training epochs
epochs = 3
# define the model
fc_sizes = [512, 256, 128, 32]
recall_model = DNNRecallLayer(sparse_feature_number=600000, sparse_feature_dim=9, fc_sizes=fc_sizes)
# pass dtypes so the dummy inputs are generated as integers, not float32
param_info = paddle.summary(recall_model, input_size=[(2,), (4,), (3,), (1,)], dtypes='int64')
print(param_info)

That fixes it. The cause is that paddle.summary builds its dummy inputs as float32 by default, but nn.Embedding (the lookup_table_v2 operator in the traceback) requires int64 indices, hence the "holds float, but desires to be int64_t" error. Passing the dtypes parameter makes the generated inputs integers.
