Linux Kernel Memory Management (Part 2): Key Data Structures

Key Data Structures in Memory Management
1. Node Management
pg_data_t is the basic structure used to represent a node. Its definition is as follows:
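The sketch below reflects older 2.6-era kernels (field names and layout differ in current kernels, where pgdat_next, for example, has since been removed) and is limited to the members discussed in the list that follows:

typedef struct pglist_data {
	struct zone node_zones[MAX_NR_ZONES];           /* the memory zones of this node */
	struct zonelist node_zonelists[MAX_ZONELISTS];  /* fallback node/zone lists for allocation */
	int nr_zones;                                   /* number of memory zones in this node */
	struct page *node_mem_map;                      /* page instances for all physical pages of the node */
	struct bootmem_data *bdata;                     /* boot memory allocator data */
	unsigned long node_start_pfn;                   /* number of the node's first page frame */
	unsigned long node_present_pages;               /* number of physical pages actually present */
	unsigned long node_spanned_pages;               /* size of the range in page frames, including holes */
	int node_id;                                    /* global node ID */
	struct pglist_data *pgdat_next;                 /* next node in the singly linked node list */
	wait_queue_head_t kswapd_wait;                  /* wait queue of the swap daemon */
	struct task_struct *kswapd;                     /* task_struct of the kswapd responsible for this node */
	int kswapd_max_order;                           /* used by the page-swap subsystem */
} pg_data_t;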
1. node_zones is an array that contains the data structures of the node's memory zones.
2. node_zonelists specifies fallback nodes and zones, so that memory can be allocated from an alternative node when the current node has none left.
3. The number of memory zones in the node is kept in nr_zones.
4. node_mem_map is a pointer to an array of page instances describing all physical memory pages of the node; it covers the pages of all memory zones in the node.
5. During boot, before the memory management subsystem has been initialized, the kernel itself already needs memory (and some memory must also be reserved for initializing the memory management subsystem). To solve this problem the kernel uses a boot memory allocator; bdata points to the instance of its data structure.
6. node_start_pfn is the logical number of the first page frame of this NUMA node. Page frames are numbered consecutively across all nodes of the system, so every page frame number is globally unique.
On UMA systems node_start_pfn is always 0, because there is only one node, whose first page frame is therefore numbered 0.
node_present_pages gives the number of page frames actually present in the node, while node_spanned_pages gives the length of the node measured in page frames. The two values can differ because the node may contain holes that do not correspond to real page frames.
7. node_id is the global node ID; the NUMA nodes in the system are numbered starting from 0.
8. pgdat_next links to the next node. All nodes in the system are connected by a singly linked list whose end is marked by a null pointer.
9. kswapd_wait is the wait queue of the swap daemon, used when page frames are swapped out of the node. kswapd points to the task_struct of the swap daemon responsible for this node. kswapd_max_order is used by the page-swap subsystem to define the size of the area to be freed (not of interest here).
Node State Management
If the system contains more than one node, the kernel maintains a bitmap that provides state information about each node. The states are identified by bit masks and can take the following values.
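The masks come from the node_states enumeration in <linux/nodemask.h>; a sketch of the enum as found in older kernels (newer kernels add further states such as N_MEMORY) looks like this:

enum node_states {
	N_POSSIBLE,		/* the node could become online at some point */
	N_ONLINE,		/* the node is online */
	N_NORMAL_MEMORY,	/* the node has regular memory */
#ifdef CONFIG_HIGHMEM
	N_HIGH_MEMORY,		/* the node has regular or high memory */
#else
	N_HIGH_MEMORY = N_NORMAL_MEMORY,
#endif
	N_CPU,			/* the node has one or more CPUs */
	NR_NODE_STATES
};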
The flags relevant to memory management are N_HIGH_MEMORY and N_NORMAL_MEMORY. N_HIGH_MEMORY is set if the node has normal or high memory, whereas N_NORMAL_MEMORY is set only if the node has memory other than high memory (i.e., normal lowmem).
Two helper functions are used to set or clear the bit of a particular node in the bitmap:
<nodemask.h>
void node_set_state(int node, enum node_states state)
void node_clear_state(int node, enum node_states state)
In addition, the macro for_each_node_state(__node, __state) iterates over all nodes that are in a particular state, and for_each_online_node(node) iterates over all active (online) nodes; a small usage sketch follows below. If the kernel is compiled to support only a single node (i.e., the flat memory model is used), there is no node bitmap, and the functions above that manipulate it become no-ops.
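As a minimal illustration (kernel-module context assumed; dump_memory_nodes and the printed messages are hypothetical), the iteration macros can be used like this:

#include <linux/nodemask.h>
#include <linux/printk.h>

static void dump_memory_nodes(void)
{
	int nid;

	/* walk every node whose N_NORMAL_MEMORY bit is set in the bitmap */
	for_each_node_state(nid, N_NORMAL_MEMORY)
		pr_info("node %d has normal memory\n", nid);

	/* walk all online nodes, regardless of their memory state */
	for_each_online_node(nid)
		pr_info("node %d is online\n", nid);
}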
2. Memory Zones
The kernel uses the zone structure to describe a memory zone. It is defined as follows:

struct zone {
	/* Read-mostly fields */

	/* zone watermarks, access with *_wmark_pages(zone) macros */
	unsigned long _watermark[NR_WMARK];
	unsigned long watermark_boost;

	unsigned long nr_reserved_highatomic;

	/*
	 * We don't know if the memory that we're going to allocate will be
	 * freeable or/and it will be released eventually, so to avoid totally
	 * wasting several GB of ram we must reserve some of the lower zone
	 * memory (otherwise we risk to run OOM on the lower zones despite
	 * there being tons of freeable ram on the higher zones).  This array is
	 * recalculated at runtime if the sysctl_lowmem_reserve_ratio sysctl
	 * changes.
	 */
	long lowmem_reserve[MAX_NR_ZONES];

#ifdef CONFIG_NUMA
	int node;
#endif
	struct pglist_data	*zone_pgdat;
	struct per_cpu_pageset __percpu *pageset;

#ifndef CONFIG_SPARSEMEM
	/*
	 * Flags for a pageblock_nr_pages block. See pageblock-flags.h.
	 * In SPARSEMEM, this map is stored in struct mem_section
	 */
	unsigned long		*pageblock_flags;
#endif /* CONFIG_SPARSEMEM */

	/* zone_start_pfn == zone_start_paddr >> PAGE_SHIFT */
	unsigned long		zone_start_pfn;

	/*
	 * spanned_pages is the total pages spanned by the zone, including
	 * holes, which is calculated as:
	 * 	spanned_pages = zone_end_pfn - zone_start_pfn;
	 *
	 * present_pages is physical pages existing within the zone, which
	 * is calculated as:
	 *	present_pages = spanned_pages - absent_pages(pages in holes);
	 *
	 * managed_pages is present pages managed by the buddy system, which
	 * is calculated as (reserved_pages includes pages allocated by the
	 * bootmem allocator):
	 *	managed_pages = present_pages - reserved_pages;
	 *
	 * So present_pages may be used by memory hotplug or memory power
	 * management logic to figure out unmanaged pages by checking
	 * (present_pages - managed_pages). And managed_pages should be used
	 * by page allocator and vm scanner to calculate all kinds of watermarks
	 * and thresholds.
	 *
	 * Locking rules:
	 *
	 * zone_start_pfn and spanned_pages are protected by span_seqlock.
	 * It is a seqlock because it has to be read outside of zone->lock,
	 * and it is done in the main allocator path.  But, it is written
	 * quite infrequently.
	 *
	 * The span_seq lock is declared along with zone->lock because it is
	 * frequently read in proximity to zone->lock.  It's good to
	 * give them a chance of being in the same cacheline.
	 *
	 * Write access to present_pages at runtime should be protected by
	 * mem_hotplug_begin/end(). Any reader who can't tolerant drift of
	 * present_pages should get_online_mems() to get a stable value.
	 */
	atomic_long_t		managed_pages;
	unsigned long		spanned_pages;
	unsigned long		present_pages;

	const char		*name;

#ifdef CONFIG_MEMORY_ISOLATION
	/*
	 * Number of isolated pageblock. It is used to solve incorrect
	 * freepage counting problem due to racy retrieving migratetype
	 * of pageblock. Protected by zone->lock.
	 */
	unsigned long		nr_isolate_pageblock;
#endif

#ifdef CONFIG_MEMORY_HOTPLUG
	/* see spanned/present_pages for more description */
	seqlock_t		span_seqlock;
#endif

	int initialized;

	/* Write-intensive fields used from the page allocator */
	ZONE_PADDING(_pad1_)

	/* free areas of different sizes */
	struct free_area	free_area[MAX_ORDER];

	/* zone flags, see below */
	unsigned long		flags;

	/* Primarily protects free_area */
	spinlock_t		lock;

	/* Write-intensive fields used by compaction and vmstats. */
	ZONE_PADDING(_pad2_)

	/*
	 * When free pages are below this point, additional steps are taken
	 * when reading the number of free pages to avoid per-cpu counter
	 * drift allowing watermarks to be breached
	 */
	unsigned long percpu_drift_mark;

#if defined CONFIG_COMPACTION || defined CONFIG_CMA
	/* pfn where compaction free scanner should start */
	unsigned long		compact_cached_free_pfn;
	/* pfn where compaction migration scanner should start */
	unsigned long		compact_cached_migrate_pfn[ASYNC_AND_SYNC];
	unsigned long		compact_init_migrate_pfn;
	unsigned long		compact_init_free_pfn;
#endif

#ifdef CONFIG_COMPACTION
	/*
	 * On compaction failure, 1<<compact_defer_shift compactions
	 * are skipped before trying again. The number attempted since
	 * last failure is tracked with compact_considered.
	 * compact_order_failed is the minimum compaction failed order.
	 */
	unsigned int		compact_considered;
	unsigned int		compact_defer_shift;
	int			compact_order_failed;
#endif

#if defined CONFIG_COMPACTION || defined CONFIG_CMA
	/* Set to true when the PG_migrate_skip bits should be cleared */
	bool			compact_blockskip_flush;
#endif

	bool			contiguous;

	ZONE_PADDING(_pad3_)
	/* Zone statistics */
	atomic_long_t		vm_stat[NR_VM_ZONE_STAT_ITEMS];
	atomic_long_t		vm_numa_stat[NR_VM_NUMA_STAT_ITEMS];
} ____cacheline_internodealigned_in_smp;
	A special aspect of this structure is that it is divided into several sections by ZONE_PADDING. The zone structure is accessed very frequently: on multiprocessor systems several CPUs usually access its members at the same time, so locks are used to prevent them from interfering with each other and causing errors or inconsistencies. Because the kernel accesses the structure so often, its spinlocks (zone->lock, and in older kernels also zone->lru_lock, which has since moved to the per-node pglist_data) are acquired very frequently.
	Data is processed much faster when it is held in a CPU cache. A cache is divided into lines, each of which covers a different memory area. The kernel uses the ZONE_PADDING macro to generate padding fields that are inserted into the structure to ensure that each lock sits in its own cache line.
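For reference, the padding macro itself is defined in <linux/mmzone.h>; in kernels of roughly the same vintage as the zone definition above it looks like this:

#if defined(CONFIG_SMP)
struct zone_padding {
	char x[0];
} ____cacheline_internodealigned_in_smp;
#define ZONE_PADDING(name)	struct zone_padding name;
#else
#define ZONE_PADDING(name)
#endif

On SMP builds the alignment attribute places the empty padding member, and therefore everything that follows it, at the start of a new internode cache line; on uniprocessor builds the macro expands to nothing, so the padding costs no space where false sharing cannot occur.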