write-back/write-through/write-allocate/write-no-allocate explained

Cache read/write policies

  • write-back
  • write-through
  • write-allocate
  • write-no-allocate

When the CPU reads from the cache:

  • On a hit, the CPU simply reads the data directly from the cache.

  • On a miss, there are two ways to handle it (see the sketch after this list):

    • Read-through: read the data directly from memory;

    • Read-allocate: first fetch the data into the cache, then read it from the cache.
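
Below is a minimal sketch of the two read-miss policies, using a toy direct-mapped cache in C. All of the names (line_t, cache_read, NUM_LINES, and so on) are hypothetical; real hardware implements this in logic rather than software, and write-back of a dirty victim line on eviction is omitted to keep the sketch short.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES  64
#define LINE_SHIFT 4                      /* 16-byte cache lines */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[1 << LINE_SHIFT];
} line_t;

static line_t  cache[NUM_LINES];
static uint8_t memory[1 << 20];           /* 1 MiB of backing "DRAM" */

uint8_t cache_read(uint32_t addr, bool read_allocate)
{
    uint32_t index  = (addr >> LINE_SHIFT) % NUM_LINES;
    uint32_t tag    = addr >> LINE_SHIFT;
    uint32_t offset = addr & ((1u << LINE_SHIFT) - 1);
    line_t  *line   = &cache[index];

    if (line->valid && line->tag == tag)  /* read hit: serve from the cache */
        return line->data[offset];

    if (!read_allocate)                   /* read-through: bypass the cache */
        return memory[addr];

    /* read-allocate: fill the whole line first, then read from the cache */
    uint32_t base = addr & ~((1u << LINE_SHIFT) - 1);
    for (uint32_t i = 0; i < (1u << LINE_SHIFT); i++)
        line->data[i] = memory[base + i];
    line->valid = true;
    line->tag   = tag;
    return line->data[offset];
}
```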

When performing a write:

First check whether the cache holds the corresponding data. If it does (write hit):

  • the behavior depends on whether the cache is write-back or write-through:

    • write-back: the data is updated only in the cache, not in memory (DRAM); it reaches memory later, when the cache is flushed;

    • write-through: the data is updated in both the cache and memory;

If it does not (write miss):

  • the behavior depends on whether the cache is write-allocate or write-no-allocate (all four write policies are combined in the sketch after this list):

    • write-allocate: the line containing the write address is first read from memory into the cache, and the write then proceeds as in the write-hit case above;

    • write-no-allocate: the data is not brought into the cache; it is written directly to memory.
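
The following C sketch combines the four write policies described above into one toy write path. As in the read sketch, all structures and names are invented for illustration, and write-back of a dirty victim line during allocation is again omitted.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES  64
#define LINE_SHIFT 4

typedef struct {
    bool     valid, dirty;
    uint32_t tag;
    uint8_t  data[1 << LINE_SHIFT];
} line_t;

static line_t  cache[NUM_LINES];
static uint8_t memory[1 << 20];

void cache_write(uint32_t addr, uint8_t value,
                 bool write_back, bool write_allocate)
{
    uint32_t index  = (addr >> LINE_SHIFT) % NUM_LINES;
    uint32_t tag    = addr >> LINE_SHIFT;
    uint32_t offset = addr & ((1u << LINE_SHIFT) - 1);
    line_t  *line   = &cache[index];
    bool     hit    = line->valid && line->tag == tag;

    if (!hit) {
        if (!write_allocate) {            /* write-no-allocate: straight to DRAM */
            memory[addr] = value;
            return;
        }
        /* write-allocate: fetch the line, then continue as a write hit
         * (write-back of a dirty victim is omitted here) */
        uint32_t base = addr & ~((1u << LINE_SHIFT) - 1);
        for (uint32_t i = 0; i < (1u << LINE_SHIFT); i++)
            line->data[i] = memory[base + i];
        line->valid = true;
        line->dirty = false;
        line->tag   = tag;
    }

    line->data[offset] = value;           /* both policies update the cache */
    if (write_back)
        line->dirty = true;               /* DRAM updated later, on flush/evict */
    else
        memory[addr] = value;             /* write-through: update DRAM now */
}
```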

transient attribute

Another new memory attribute feature in the Armv8-M architecture is that Normal memory has a new Transient attribute. If an address region is marked as Transient, it means the data within is unlikely to be used frequently. A cache design could therefore use this information to prioritize transient data for cacheline evictions. A cacheline eviction is needed when the processor needs to store a new piece of data in the cache but all of the cache-ways of the corresponding cache index are already occupied by older valid data. In the case of the Cortex-M23 and Cortex-M33 processors, this attribute is not used because (a) there is no data cache support, and (b) the AHB interface does not have any signal for transient indication. Please note that even when an Armv8-M processor has a data cache, transient support is an optional feature, because it increases the SRAM area needed for cache tags and might therefore not be desirable for some designs.

from: https://www.sciencedirect.com/topics/engineering/memory-attribute
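
As a rough illustration of how an eviction policy might use the Transient attribute, here is a conceptual victim-selection sketch in C: prefer an empty way, then the oldest transient line, then plain LRU. This is not how any specific Cortex-M or other hardware implements it; way_t, lru_age, and pick_victim are made up for the example.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_WAYS 4

typedef struct {
    bool     valid;
    bool     transient;   /* from the memory attributes of the cached region */
    uint32_t lru_age;     /* higher = older */
} way_t;

/* Pick a victim within one cache index: a free way first, then the
 * oldest transient line, then plain LRU among non-transient lines. */
int pick_victim(const way_t ways[NUM_WAYS])
{
    int  victim = 0;
    bool found_transient = false;

    for (int i = 0; i < NUM_WAYS; i++) {
        if (!ways[i].valid)
            return i;                     /* empty way: no eviction needed */
        bool older = ways[i].lru_age > ways[victim].lru_age;
        if (ways[i].transient && !found_transient) {
            victim = i;                   /* first transient line found */
            found_transient = true;
        } else if (ways[i].transient == found_transient && older) {
            victim = i;                   /* same class: prefer the older line */
        }
    }
    return victim;
}
```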

device memory attributes

https://blog.csdn.net/shenhuxi_yu/article/details/90617675

https://zhuanlan.zhihu.com/p/124946496
