setup_arch:arm64_memblock_init

kernel: 5.10
arch: arm64

arm64_memblock_init is called from setup_arch. Before it runs, setup_machine_fdt(__fdt_pointer) has already parsed the memory nodes in the FDT and populated memblock.memory with one memblock_region per range; each memblock_region can be thought of as one bank of physical memory.

arm64_memblock_init mainly uses memblock_remove to strip certain regions out of memblock.memory: ranges that lie outside the DDR physical address space and ranges that the kernel linear mapping cannot cover. It also adds certain physical ranges to memblock.reserved: regions reserved in the DTS, CMA regions reserved via command-line parameters, the regions occupied by the kernel text, initrd, page tables and data, the crash-kernel reservation, and the ELF core header area. Along the way it initializes several global variables, such as memstart_addr, the base physical address of memory.
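
For reference, a simplified sketch of the structures involved (abridged from include/linux/memblock.h; NUMA- and physmem-related members omitted):

struct memblock_region {
	phys_addr_t base;               /* start of the range */
	phys_addr_t size;               /* length of the range */
	enum memblock_flags flags;      /* e.g. MEMBLOCK_NOMAP */
};

struct memblock_type {
	unsigned long cnt;              /* number of regions */
	unsigned long max;              /* capacity of the regions[] array */
	phys_addr_t total_size;         /* sum of all region sizes */
	struct memblock_region *regions;
	char *name;                     /* "memory", "reserved", ... */
};

struct memblock {
	bool bottom_up;                 /* allocation direction */
	phys_addr_t current_limit;
	struct memblock_type memory;    /* usable RAM, one region per bank */
	struct memblock_type reserved;  /* ranges set aside / already handed out */
};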

void __init arm64_memblock_init(void)
{
        // size of the kernel linear mapping region
        const s64 linear_region_size = BIT(vabits_actual - 1);
        
        /* Handle linux,usable-memory-range property */
        fdt_enforce_memory_region();

        /* Remove memory above our supported physical address size */
        memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);

        /*  
         * Select a suitable value for the base of physical memory.
         */
        memstart_addr = round_down(memblock_start_of_DRAM(),
                                   ARM64_MEMSTART_ALIGN);

        /*  
         * Remove the memory that we will not be able to cover with the
         * linear mapping. Take care not to clip the kernel which may be
         * high in memory.
         */
        memblock_remove(max_t(u64, memstart_addr + linear_region_size,
                        __pa_symbol(_end)), ULLONG_MAX);
        if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
                /* ensure that memstart_addr remains sufficiently aligned */
                memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
                                         ARM64_MEMSTART_ALIGN);
                memblock_remove(0, memstart_addr);
        }   

        /*  
         * If we are running with a 52-bit kernel VA config on a system that
         * does not support it, we have to place the available physical
         * memory in the 48-bit addressable part of the linear region, i.e.,
         * we have to move it upward. Since memstart_addr represents the
         * physical address of PAGE_OFFSET, we have to *subtract* from it.
         */
        if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52))
                memstart_addr -= _PAGE_OFFSET(48) - _PAGE_OFFSET(52);

        /*  
         * Apply the memory limit if it was set. Since the kernel may be loaded
         * high up in memory, add back the kernel region that must be accessible
         * via the linear mapping.
         */
        if (memory_limit != PHYS_ADDR_MAX) {
                memblock_mem_limit_remove_map(memory_limit);
                memblock_add(__pa_symbol(_text), (u64)(_end - _text));
        }   

        if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {
                /*  
                 * Add back the memory we just removed if it results in the
                 * initrd to become inaccessible via the linear mapping.
                 * Otherwise, this is a no-op
                 */
                u64 base = phys_initrd_start & PAGE_MASK;
                u64 size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;

                /*
                 * We can only add back the initrd memory if we don't end up
                 * with more memory than we can address via the linear mapping.
                 * It is up to the bootloader to position the kernel and the
                 * initrd reasonably close to each other (i.e., within 32 GB of
                 * each other) so that all granule/#levels combinations can
                 * always access both.
                 */
                if (WARN(base < memblock_start_of_DRAM() ||
                         base + size > memblock_start_of_DRAM() +
                                       linear_region_size,
                        "initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
                        phys_initrd_size = 0;
                } else {
                        memblock_remove(base, size); /* clear MEMBLOCK_ flags */
                        memblock_add(base, size);
                        memblock_reserve(base, size);
                }
        }

        if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
                extern u16 memstart_offset_seed;
                u64 range = linear_region_size -
                            (memblock_end_of_DRAM() - memblock_start_of_DRAM());

                /*
                 * If the size of the linear region exceeds, by a sufficient
                 * margin, the size of the region that the available physical
                 * memory spans, randomize the linear region as well.
                 */
                if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
                        range /= ARM64_MEMSTART_ALIGN;
                        memstart_addr -= ARM64_MEMSTART_ALIGN *
                                         ((range * memstart_offset_seed) >> 16);
                }
        }

        /*
         * Register the kernel text, kernel data, initrd, and initial
         * pagetables with memblock.
         */
        memblock_reserve(__pa_symbol(_text), _end - _text);
        if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size) {
                /* the generic initrd code expects virtual addresses */
                initrd_start = __phys_to_virt(phys_initrd_start);
                initrd_end = initrd_start + phys_initrd_size;
        }

        early_init_fdt_scan_reserved_mem();

        if (IS_ENABLED(CONFIG_ZONE_DMA)) {
                zone_dma_bits = ARM64_ZONE_DMA_BITS;
                arm64_dma_phys_limit = max_zone_phys(ARM64_ZONE_DMA_BITS);
        }

        if (IS_ENABLED(CONFIG_ZONE_DMA32))
                arm64_dma32_phys_limit = max_zone_phys(32);
        else
                arm64_dma32_phys_limit = PHYS_MASK + 1;

        reserve_crashkernel();

        reserve_elfcorehdr();

        high_memory = __va(memblock_end_of_DRAM() - 1) + 1;

        dma_contiguous_reserve(arm64_dma32_phys_limit);
}
  1. arm64_memblock_init first obtains the size of the linear mapping region, linear_region_size, which depends on the effective virtual address width (vabits_actual). In this example vabits_actual is 48, so linear_region_size is 1 << 47 = 0x800000000000, and the linear mapping region spans 0xffff000000000000 (PAGE_OFFSET) to 0xffff800000000000.
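
A quick userspace check of that arithmetic (assuming vabits_actual = 48, as in this example; the values are printed only to illustrate the layout):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	unsigned int vabits_actual = 48;                  /* assumed for this example */
	uint64_t linear_region_size = 1ULL << (vabits_actual - 1);
	uint64_t page_offset = -(1ULL << vabits_actual);  /* 0xffff000000000000 */

	printf("linear_region_size = %#llx\n", (unsigned long long)linear_region_size);
	printf("linear map: [%#llx, %#llx)\n",
	       (unsigned long long)page_offset,
	       (unsigned long long)(page_offset + linear_region_size));
	return 0;
}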

  2. fdt_enforce_memory_region: handles the linux,usable-memory-range property under the /chosen node. If the property is present (typically when running as a kdump capture kernel), memblock is capped to that window so that only the memory handed over in the property is used.
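
A rough sketch of what that helper does in v5.10 (abridged from arch/arm64/mm/init.c):

static void __init fdt_enforce_memory_region(void)
{
	struct memblock_region reg = { .size = 0 };

	/* look for linux,usable-memory-range under /chosen */
	of_scan_flat_dt(early_init_dt_scan_usablemem, &reg);

	if (reg.size)
		memblock_cap_memory_range(reg.base, reg.size);
}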

  3. Initialization of memblock-related global variables
    (1) memstart_addr
    The start physical address of DRAM is rounded down to the section size (1 GB here, i.e. ARM64_MEMSTART_ALIGN) and stored in memstart_addr. memstart_addr is a global variable used in many places, for example in the physical-address/PFN conversions behind pfn_to_page (see the sketch after this item).
    (2) high_memory: initialized from the end physical address of DRAM (stored as the corresponding linear-map virtual address)
    (3) arm64_dma32_phys_limit: upper physical address bound of the DMA32 zone
    zone_dma_bits: bit width of the DMA zone, 30 bits here (ARM64_ZONE_DMA_BITS)
    arm64_dma_phys_limit: upper physical address bound of the DMA zone
    initrd_start: start virtual address corresponding to the initrd physical region
    initrd_end: end virtual address corresponding to the initrd physical region
    Note: for how initrd_start is obtained, see "how to get the initrd start address"
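
A minimal userspace sketch of how memstart_addr ties the linear mapping together (the memstart_addr value below is hypothetical; the real conversion macros live in arch/arm64/include/asm/memory.h):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT   12
#define PAGE_OFFSET  (-(1ULL << 48))              /* assuming vabits_actual == 48 */

int main(void)
{
	uint64_t memstart_addr = 0x80000000ULL;   /* hypothetical DRAM base, 1 GiB aligned */
	uint64_t pa = 0x80200000ULL;              /* some physical address inside DRAM */

	/* memstart_addr is the physical address that PAGE_OFFSET maps to */
	uint64_t va  = (pa - memstart_addr) + PAGE_OFFSET;   /* ~ __phys_to_virt() */
	uint64_t pa2 = (va - PAGE_OFFSET) + memstart_addr;   /* ~ __virt_to_phys() for the linear map */
	uint64_t pfn = pa >> PAGE_SHIFT;                      /* ~ __phys_to_pfn() */

	printf("va  = %#llx\n", (unsigned long long)va);
	printf("pa  = %#llx (round trip)\n", (unsigned long long)pa2);
	printf("pfn = %#llx\n", (unsigned long long)pfn);
	return 0;
}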

  4. memblock_remove
    The following memblock_region physical ranges are removed from memblock.memory (a simplified model of memblock_remove is sketched after this item):
    (1) ranges beyond the supported physical address space: (1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);
    since the highest supported physical address is 48 bits here, this covers [0x0001000000000000, 0xffffffffffffffff]
    (2) physical memory that the kernel linear mapping cannot cover; from the linear map's point of view this is effectively "high memory"
    (3) the range [0, memstart_addr), removed when memstart_addr had to be rounded up because the DRAM span exceeds the linear region
    (4) ranges described by reserved-memory nodes in the DTS, but only those carrying the no-map property; CMA regions carry the reusable property and are not removed. no-map means the kernel must not map that physical memory into its virtual address space; only the driver that owns the region maps it itself
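
As a reading aid for this item, a simplified userspace model of what memblock_remove() does to the region list (bank layout and limit are hypothetical; the real code in mm/memblock.c also merges neighbours, tracks flags/nids and resizes the array):

#include <stdio.h>
#include <stdint.h>

struct region { uint64_t base, size; };

static struct region mem[8] = {
	{ 0x080000000ULL, 0x80000000ULL },   /* hypothetical bank 0: [2 GiB, 4 GiB) */
	{ 0x880000000ULL, 0x80000000ULL },   /* hypothetical bank 1: [34 GiB, 36 GiB) */
};
static int nr = 2;

static void remove_range(uint64_t base, uint64_t end)
{
	for (int i = 0; i < nr; i++) {
		uint64_t rbase = mem[i].base, rend = rbase + mem[i].size;

		if (rend <= base || rbase >= end)
			continue;                          /* no overlap */
		if (rbase < base && rend > end) {          /* hole in the middle: split */
			mem[nr].base = end;
			mem[nr].size = rend - end;
			nr++;
			mem[i].size = base - rbase;
		} else if (rbase < base) {
			mem[i].size = base - rbase;        /* trim the tail */
		} else if (rend > end) {
			mem[i].base = end;                 /* trim the head */
			mem[i].size = rend - end;
		} else {
			mem[i].size = 0;                   /* fully covered: drop it */
		}
	}
}

int main(void)
{
	/* e.g. clip everything above a hypothetical linear-map limit of 35 GiB */
	remove_range(0x8C0000000ULL, UINT64_MAX);

	for (int i = 0; i < nr; i++)
		if (mem[i].size)
			printf("memory[%d]: [%#llx, %#llx)\n", i,
			       (unsigned long long)mem[i].base,
			       (unsigned long long)(mem[i].base + mem[i].size));
	return 0;
}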

  5. memblock_reserve: find/create a memblock_region for a physical range and add it to memblock.reserved
    (0) with CONFIG_BLK_DEV_INITRD enabled, the range [phys_initrd_start, phys_initrd_start + phys_initrd_size) is first passed through memblock_remove and memblock_add to clear any MEMBLOCK_ flags on it, and then reserved with memblock_reserve
    (1) reserve the kernel text, kernel data, initrd, and initial page tables
    (2) early_init_fdt_scan_reserved_mem looks up the "reserved-memory" node and reserves the range of every child node; if a child also has the no-map property, the range is removed from memblock.memory as well (see the sketch after this list)
    (3) reserve_crashkernel reserves memory for the crash kernel;
    (4) reserve_elfcorehdr reserves memory for the ELF core header;
    (5) dma_contiguous_reserve reserves physically contiguous memory for CMA; note that this mainly handles the command-line (cma=) reservation, so if nothing was reserved on the command line this step does nothing
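
For item (2), the no-map vs. reserve decision is made per child node while scanning the FDT; roughly, the helper in drivers/of/fdt.c boils down to the following (v5.10, abridged):

static int __init early_init_dt_reserve_memory_arch(phys_addr_t base,
						    phys_addr_t size, bool nomap)
{
	if (nomap)
		return memblock_remove(base, size);   /* drop from memblock.memory entirely */
	return memblock_reserve(base, size);          /* keep mapped, but mark as reserved */
}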
