Reposted from: http://www.voidcn.com/article/p-qxdokkft-bqd.html
Devicetree provides two ways to reserve memory: reserved-memory and memreserve.
memreserve example:
/memreserve/ 0x40000000 0x01000000
reserved-memory example:
reserved-memory {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	ipu_cma@90000000 {
		compatible = "shared-dma-pool";
		reg = <0x90000000 0x4000000>;
		reusable;
		status = "okay";
	};
};
Difference 1:
The two are handled differently when dtc compiles the source: a reserved-memory node is parsed like any other device tree node into the structure block of the resulting dtb, whereas /memreserve/ ends up in the dtb's memory reserve map (the reservation block pointed to by the off_mem_rsvmap field in the dtb header).
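A quick way to see this is to read the reserve map back out of a compiled dtb with libfdt. The following standalone program is a minimal sketch (it assumes libfdt is installed; the dtb path is just an example) that prints every /memreserve/ entry:

	/* print_memrsv.c - dump the memory reservation block of a dtb
	 * build: gcc print_memrsv.c -lfdt -o print_memrsv
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <stdint.h>
	#include <libfdt.h>

	int main(int argc, char **argv)
	{
		const char *path = argc > 1 ? argv[1] : "board.dtb"; /* example path */
		FILE *f = fopen(path, "rb");
		long len;
		void *blob;
		int i, n;

		if (!f) { perror("fopen"); return 1; }
		fseek(f, 0, SEEK_END);
		len = ftell(f);
		fseek(f, 0, SEEK_SET);
		blob = malloc(len);
		if (fread(blob, 1, len, f) != (size_t)len) { perror("fread"); return 1; }
		fclose(f);

		if (fdt_check_header(blob)) {
			fprintf(stderr, "not a valid dtb\n");
			return 1;
		}

		/* entries written by /memreserve/ live in the reservation block */
		n = fdt_num_mem_rsv(blob);
		for (i = 0; i < n; i++) {
			uint64_t addr, size;
			fdt_get_mem_rsv(blob, i, &addr, &size);
			printf("memreserve %d: base=0x%llx size=0x%llx\n",
			       i, (unsigned long long)addr, (unsigned long long)size);
		}
		return 0;
	}

Run against a dtb built from the /memreserve/ example above, it would print a single entry with base 0x40000000 and size 0x01000000.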
Difference 2:
The two are handled differently in the kernel.
1. memreserve handling flow
start_kernel - init/main.c
  -> setup_arch - arch/arm/kernel/setup.c
    -> arm_memblock_init - arch/arm/mm/init.c
      -> arm_dt_memblock_reserve - arch/arm/kernel/devtree.c
arm_dt_memblock_reserve is implemented as follows:
	/* Reserve the dtb region */
	memblock_reserve(virt_to_phys(initial_boot_params),
			 be32_to_cpu(initial_boot_params->totalsize));

	/*
	 * Process the reserve map. This will probably overlap the initrd
	 * and dtb locations which are already reserved, but overlaping
	 * doesn't hurt anything
	 */
	reserve_map = ((void *)initial_boot_params) +
			be32_to_cpu(initial_boot_params->off_mem_rsvmap);
	while (1) {
		base = be64_to_cpup(reserve_map++);
		size = be64_to_cpup(reserve_map++);
		if (!size)
			break;
		memblock_reserve(base, size);
	}
initial_boot_params actually points to where the dtb sits in memory; on ARM the same boot-parameter address may instead carry other kinds of boot data (for example ATAGs).
The off_mem_rsvmap field in the header that initial_boot_params points to locates a list of (address, size) reserve entries, which for a dtb is exactly the memory reserve map built from /memreserve/.
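For reference, the on-disk layout that this code walks is defined in libfdt's fdt.h; a simplified C rendering (field names as in that header, all values stored big-endian) looks like this:

	#include <stdint.h>

	/* dtb header, after scripts/dtc/libfdt/fdt.h; all fields big-endian */
	struct fdt_header {
		uint32_t magic;			/* 0xd00dfeed */
		uint32_t totalsize;		/* total size of the dtb blob */
		uint32_t off_dt_struct;		/* offset of the structure block */
		uint32_t off_dt_strings;	/* offset of the strings block */
		uint32_t off_mem_rsvmap;	/* offset of the memory reservation block */
		uint32_t version;
		uint32_t last_comp_version;
		uint32_t boot_cpuid_phys;
		uint32_t size_dt_strings;
		uint32_t size_dt_struct;
	};

	/* the reservation block is an array of these, terminated by a 0/0 entry */
	struct fdt_reserve_entry {
		uint64_t address;
		uint64_t size;
	};

The while (1) loop in arm_dt_memblock_reserve simply walks this array of fdt_reserve_entry records until it reaches the terminating entry with size 0.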
2. reserved-memory handling flow
start_kernel - init/main.c
  -> setup_arch - arch/arm/kernel/setup.c
    -> arm_memblock_init - arch/arm/mm/init.c
      -> early_init_fdt_scan_reserved_mem - drivers/of/fdt.c
        -> __fdt_scan_reserved_mem - drivers/of/fdt.c
          -> __reserved_mem_reserve_reg - drivers/of/fdt.c
            -> early_init_dt_reserve_memory_arch - drivers/of/fdt.c
              -> memblock_remove - mm/memblock.c
              -> memblock_reserve - mm/memblock.c
        -> fdt_init_reserved_mem - drivers/of/of_reserved_mem.c
reserved-memory child nodes take a few optional properties, for example no-map: if no-map is present, the region goes through memblock_remove; otherwise it goes through memblock_reserve, as the helper sketched below shows.
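The decision is made in a small helper in drivers/of/fdt.c; in kernels of this era it looks roughly like the following (a sketch, the exact signature varies slightly across versions):

	static int __init early_init_dt_reserve_memory_arch(phys_addr_t base,
							    phys_addr_t size,
							    bool nomap)
	{
		if (nomap)
			/* no-map: drop the range from memblock entirely,
			 * so the kernel never maps or touches it */
			return memblock_remove(base, size);

		/* default: keep the range in memblock but mark it reserved */
		return memblock_reserve(base, size);
	}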
After the memblock_reserve/memblock_remove call, fdt_init_reserved_mem is executed as well:
	void __init fdt_init_reserved_mem(void)
	{
		...
		if (rmem->size == 0)
			err = __reserved_mem_alloc_size(node, rmem->name,
							&rmem->base, &rmem->size);
		if (err == 0)
			__reserved_mem_init_node(rmem);
		...
	}
	/**
	 * res_mem_init_node() - call region specific reserved memory init code
	 */
	static int __init __reserved_mem_init_node(struct reserved_mem *rmem)
	{
		...
		for (i = __reservedmem_of_table; i < &__rmem_of_table_sentinel; i++) {
			reservedmem_of_init_fn initfn = i->data;
			const char *compat = i->compatible;

			if (!of_flat_dt_is_compatible(rmem->fdt_node, compat))
				continue;

			if (initfn(rmem) == 0) {
				pr_info("Reserved memory: initialized node %s, compatible id %s\n",
					rmem->name, compat);
				return 0;
			}
		}
		return -ENOENT;
	}
If a child node under reserved-memory has compatible = "shared-dma-pool", that memory is handed over to the Contiguous Memory Allocator (CMA) for DMA.
The matching initfn is rmem_cma_setup in drivers/base/dma-contiguous.c or rmem_dma_setup in drivers/base/dma-coherent.c; since both register the same compatible string, the former takes priority.
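Both handlers register that compatible string with the RESERVEDMEM_OF_DECLARE macro, which drops an entry into the __reservedmem_of_table section scanned by the loop above; in kernels of this vintage the declarations look like this:

	/* drivers/base/dma-contiguous.c */
	RESERVEDMEM_OF_DECLARE(cma, "shared-dma-pool", rmem_cma_setup);

	/* drivers/base/dma-coherent.c */
	RESERVEDMEM_OF_DECLARE(dma, "shared-dma-pool", rmem_dma_setup);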
rmem_cma_setup initializes this region and adds it to cma_areas[cma_area_count]; its key property checks are sketched below.
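The gate that decides whether a region qualifies sits at the top of rmem_cma_setup; paraphrased from older kernels (surrounding code elided), it looks roughly like:

	static int __init rmem_cma_setup(struct reserved_mem *rmem)
	{
		unsigned long node = rmem->fdt_node;
		...
		/* only regions marked "reusable" and without "no-map" become CMA */
		if (!of_get_flat_dt_prop(node, "reusable", NULL) ||
		    of_get_flat_dt_prop(node, "no-map", NULL))
			return -EINVAL;
		...
	}

This check is exactly where conditions 2 and 3 of Difference 3 below come from.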
cma_areas holds all CMA areas; later, cma_init_reserved_areas walks this array:
	static int __init cma_init_reserved_areas(void)
	{
		int i;

		for (i = 0; i < cma_area_count; i++) {
			int ret = cma_activate_area(&cma_areas[i]);

			if (ret)
				return ret;
		}

		return 0;
	}
	core_initcall(cma_init_reserved_areas);
cma_activate_area marks every page in the CMA area as MIGRATE_CMA and puts it on the MIGRATE_CMA free list, so the pages remain available to the kernel for movable allocations until a CMA allocation claims them.
Difference 3:
Because the two flows differ, memory reserved via memreserve can never be used by the operating system again, whereas a reserved-memory region may end up in the system's CMA. Whether it does depends on the following conditions:
1. compatible is "shared-dma-pool";
(Note: alternatively, a dedicated CMA area can be added just for the VPP, as in the patch fragment below; a driver-side sketch of consuming such a region follows this list.)
[dma-contiguous.c]
+ RESERVEDMEM_OF_DECLARE(cma1, "vpp-cma", rmem_cma_setup);

[device tree]
+ ion_for_vpp: ion {
+	compatible = "vpp-cma";
+	//compatible = "shared-dma-pool";
+	reg = <0x1 0x80000000 0x0 0x40000000>; // 1G
+	reusable;
+ };

  heap_carveout@0 {
	compatible = "bitmain,cma_vpp";
-	memory-region = <&cma_reserved>;
+	memory-region = <&ion_for_vpp>;
  };
2. the no-map property is not defined;
3. the reusable property is defined.
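For completeness, here is a rough, hypothetical sketch of how a driver would consume a region referenced through memory-region (the function name vpp_probe is made up for illustration; of_reserved_mem_device_init and dma_alloc_coherent are the standard kernel interfaces involved):

	#include <linux/platform_device.h>
	#include <linux/of_reserved_mem.h>
	#include <linux/dma-mapping.h>
	#include <linux/sizes.h>

	static int vpp_probe(struct platform_device *pdev)
	{
		dma_addr_t dma_handle;
		void *cpu_addr;
		int ret;

		/* bind the device to the reserved region named by its
		 * memory-region property (e.g. <&ion_for_vpp> above) */
		ret = of_reserved_mem_device_init(&pdev->dev);
		if (ret)
			return ret;

		/* subsequent DMA allocations are served from that CMA area */
		cpu_addr = dma_alloc_coherent(&pdev->dev, SZ_1M, &dma_handle, GFP_KERNEL);
		if (!cpu_addr) {
			of_reserved_mem_device_release(&pdev->dev);
			return -ENOMEM;
		}

		/* ... hand cpu_addr/dma_handle to the hardware ... */
		return 0;
	}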