| name | about | labels |
| ---------- | ------------------------------------- | -------- |
| Bug Report | Use tutorials_code for reporting a bug | bug |
## Environment
- **CPU**:
- **Software Environment**:
  - MindSpore version: mindspore/mindspore-cpu:0.1.0-alpha
  - Python 3.7.5
  - OS platform: macOS, Docker 19.03.8
## Describe the current behavior
The example in tutorials_code, [cifar_resnet50.py](https://gitee.com/mindspore/docs/blob/master/tutorials/tutorial_code/resnet/cifar_resnet50.py), is written to train on Ascend. Since I installed the CPU version of MindSpore, I changed this line:
```
context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
```
to:
```
context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
```
which raises the following error:
```
RuntimeError: mindspore/ccsrc/device/cpu/cpu_session.cc:115 BuildKernel] Operator[InitDataSetQueue] is not support.
```
I found that the CPU backend does not support dataset_sink_mode=True, but model.train uses dataset_sink_mode=True by default, so I changed the training call to:
```
model.train(epoch_size, dataset, callbacks=[ckpoint_cb, loss_cb], dataset_sink_mode=False)
```
This fixed the error above, but while running, the process is then killed automatically:
```
[WARNING] ME(2987:139919646736512,MainProcess):2020-03-30-10:13:22.847.454 [mindspore/dataset/engine/datasets.py:823] Repeat is located before batch, data from two epochs can be batched together.
Killed
```
I checked resource usage inside the Docker container, and there appears to be enough memory:
```
root@b5a013bc4b55:~/docs/tutorials/tutorial_code/resnet# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.9G        123M        1.7G          4K         91M        1.7G
Swap:          1.0G        341M        682M
```
How should I fix this? When I ran the lenet.py example, I also changed device_target to CPU and it ran fine, so I don't understand why it fails here. T_T
>PS: in the data-processing code, two lines are in the wrong order; I suggest calling batch before repeat, like this:
```
cifar_ds = cifar_ds.shuffle(buffer_size=10)
cifar_ds = cifar_ds.batch(batch_size=args_opt.batch_size, drop_remainder=True)
cifar_ds = cifar_ds.repeat(repeat_num)
```
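A pure-Python sketch (not MindSpore API, just illustrative list operations) of why batch should come before repeat: with repeat-before-batch, samples from two different epochs can end up in the same batch whenever the epoch size is not a multiple of the batch size, which is exactly what the warning above is about.

```python
# Illustration only: simulate batch/repeat ordering on plain lists.

def batch(data, batch_size, drop_remainder=True):
    """Group a flat sequence into fixed-size batches."""
    batches = [data[i:i + batch_size] for i in range(0, len(data), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

def repeat(data, n):
    """Concatenate n copies of the sequence (n epochs)."""
    return list(data) * n

epoch = [0, 1, 2, 3, 4]  # 5 samples, batch_size 2

# repeat -> batch: the batch [4, 0] mixes the end of epoch 1
# with the start of epoch 2.
mixed = batch(repeat(epoch, 2), 2)
print(mixed)  # [[0, 1], [2, 3], [4, 0], [1, 2], [3, 4]]

# batch -> repeat: every batch stays inside a single epoch.
clean = repeat(batch(epoch, 2), 2)
print(clean)  # [[0, 1], [2, 3], [0, 1], [2, 3]]
```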