vmap, kmap, remap_vmalloc_range, remap_pfn_range

This article covers the Linux kernel functions vmap and kmap, which map physical pages into kernel virtual address space. vmap is suited to long-duration mappings of multiple physical pages and requires global synchronization to unmap; kmap maps a single page for short-term, low-frequency access and is not recommended in nested contexts. The article also covers remap_pfn_range and remap_vmalloc_range, which map physical page frames and vmalloc'ed memory, respectively, into userspace.


Kernel address mapping: vmap and kmap
vmap() creates a long-duration mapping of multiple physical pages into a contiguous kernel virtual range. Unmapping it requires global synchronization.

/**
 * vmap - map an array of pages into virtually contiguous space
 * @pages: array of page pointers
 * @count: number of pages to map
 * @flags: vm_area->flags
 * @prot: page protection for the mapping
 *
 * Maps @count pages from @pages into contiguous kernel virtual space.
 * If @flags contains %VM_MAP_PUT_PAGES the ownership of the pages array itself
 * (which must be kmalloc or vmalloc memory) and one reference per pages in it
 * are transferred from the caller to vmap(), and will be freed / dropped when
 * vfree() is called on the return value.
 *
 * Return: the address of the area or %NULL on failure
 */
void *vmap(struct page **pages, unsigned int count,
	   unsigned long flags, pgprot_t prot)
{
	struct vm_struct *area;
	unsigned long size;		/* In bytes */

	might_sleep();

	if (count > totalram_pages())
		return NULL;

	size = (unsigned long)count << PAGE_SHIFT;
	area = get_vm_area_caller(size, flags, __builtin_return_address(0));
	if (!area)
		return NULL;

	if (map_kernel_range((unsigned long)area->addr, size, pgprot_nx(prot),
			pages) < 0) {
		vunmap(area->addr);
		return NULL;
	}

	if (flags & VM_MAP_PUT_PAGES) {
		area->pages = pages;
		area->nr_pages = count;
	}
	return area->addr;
}
EXPORT_SYMBOL(vmap);
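As a usage sketch (not from the source), the fragment below allocates a few physically scattered pages and stitches them into one contiguous virtual window with vmap(); the name NPAGES and the helper function are illustrative:

```c
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

#define NPAGES 4	/* illustrative page count */

static void *map_scattered_pages(struct page **pages)
{
	void *vaddr;
	int i;

	for (i = 0; i < NPAGES; i++) {
		/* Each alloc_page() may land anywhere in physical memory. */
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto unwind;
	}

	/* One contiguous kernel virtual range covering all NPAGES pages. */
	vaddr = vmap(pages, NPAGES, VM_MAP, PAGE_KERNEL);
	if (vaddr)
		return vaddr;	/* tear down later with vunmap(vaddr) */

unwind:
	while (--i >= 0)
		__free_page(pages[i]);
	return NULL;
}
```

Note that vmap() may sleep (might_sleep() above), so it must be called from process context.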

kmap() permits a short-duration mapping of a single page. It needs global synchronization, though the cost is somewhat amortized. It is also prone to deadlocks when used in a nested fashion, so it is not recommended for new code.

/**
 * kmap - Map a page for long term usage
 * @page:	Pointer to the page to be mapped
 *
 * Returns: The virtual address of the mapping
 *
 * Can only be invoked from preemptible task context because on 32bit
 * systems with CONFIG_HIGHMEM enabled this function might sleep.
 *
 * For systems with CONFIG_HIGHMEM=n and for pages in the low memory area
 * this returns the virtual address of the direct kernel mapping.
 *
 * The returned virtual address is globally visible and valid up to the
 * point where it is unmapped via kunmap(). The pointer can be handed to
 * other contexts.
 *
 * For highmem pages on 32bit systems this can be slow as the mapping space
 * is limited and protected by a global lock. In case that there is no
 * mapping slot available the function blocks until a slot is released via
 * kunmap().
 */
static inline void *kmap(struct page *page);
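A minimal usage sketch, assuming a struct page *page obtained elsewhere (e.g. from alloc_page()); the function name is illustrative, and every kmap() must be paired with a kunmap() on the same page:

```c
#include <linux/highmem.h>

static void zero_one_page(struct page *page)
{
	void *vaddr;

	/* May block on 32-bit CONFIG_HIGHMEM systems if no mapping
	 * slot is free; preemptible task context only. */
	vaddr = kmap(page);
	memset(vaddr, 0, PAGE_SIZE);
	kunmap(page);
}
```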

Mapping kernel addresses to userspace: remap_vmalloc_range and remap_pfn_range
remap_vmalloc_range — map vmalloc pages to userspace

/**
 * remap_pfn_range - remap kernel memory to userspace
 * @vma: user vma to map to
 * @addr: target page aligned user address to start at
 * @pfn: page frame number of kernel physical memory address
 * @size: size of mapping area
 * @prot: page protection flags for this mapping
 *
 * Note: this is only safe if the mm semaphore is held when called.
 *
 * Return: %0 on success, negative error code otherwise.
 */
int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
		    unsigned long pfn, unsigned long size, pgprot_t prot)
{
	pgd_t *pgd;
	unsigned long next;
	unsigned long end = addr + PAGE_ALIGN(size);
	struct mm_struct *mm = vma->vm_mm;
	unsigned long remap_pfn = pfn;
	int err;

	if (WARN_ON_ONCE(!PAGE_ALIGNED(addr)))
		return -EINVAL;

	/*
	 * Physically remapped pages are special. Tell the
	 * rest of the world about it:
	 *   VM_IO tells people not to look at these pages
	 *	(accesses can have side effects).
	 *   VM_PFNMAP tells the core MM that the base pages are just
	 *	raw PFN mappings, and do not have a "struct page" associated
	 *	with them.
	 *   VM_DONTEXPAND
	 *      Disable vma merging and expanding with mremap().
	 *   VM_DONTDUMP
	 *      Omit vma from core dump, even when VM_IO turned off.
	 *
	 * There's a horrible special case to handle copy-on-write
	 * behaviour that some programs depend on. We mark the "original"
	 * un-COW'ed pages by matching them up with "vma->vm_pgoff".
	 * See vm_normal_page() for details.
	 */
	if (is_cow_mapping(vma->vm_flags)) {
		if (addr != vma->vm_start || end != vma->vm_end)
			return -EINVAL;
		vma->vm_pgoff = pfn;
	}

	err = track_pfn_remap(vma, &prot, remap_pfn, addr, PAGE_ALIGN(size));
	if (err)
		return -EINVAL;

	vma->vm_flags |= VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;

	BUG_ON(addr >= end);
	pfn -= addr >> PAGE_SHIFT;
	pgd = pgd_offset(mm, addr);
	flush_cache_range(vma, addr, end);
	do {
		next = pgd_addr_end(addr, end);
		err = remap_p4d_range(mm, pgd, addr, next,
				pfn + (addr >> PAGE_SHIFT), prot);
		if (err)
			break;
	} while (pgd++, addr = next, addr != end);

	if (err)
		untrack_pfn(vma, remap_pfn, PAGE_ALIGN(size));

	return err;
}
EXPORT_SYMBOL(remap_pfn_range);
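In a driver, remap_pfn_range() is typically called from the file_operations .mmap handler, mapping the whole vma in one shot. A sketch, where buf_phys is a hypothetical physical address of a driver-owned buffer:

```c
#include <linux/fs.h>
#include <linux/mm.h>

static phys_addr_t buf_phys;	/* hypothetical: set when the buffer is allocated */

static int mydrv_mmap(struct file *filp, struct vm_area_struct *vma)
{
	unsigned long size = vma->vm_end - vma->vm_start;

	/* Map the buffer at the start of the user vma; the mm
	 * semaphore is already held in the mmap path. */
	return remap_pfn_range(vma, vma->vm_start,
			       buf_phys >> PAGE_SHIFT,
			       size, vma->vm_page_prot);
}

static const struct file_operations mydrv_fops = {
	.owner = THIS_MODULE,
	.mmap  = mydrv_mmap,
};
```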

/**
 * remap_vmalloc_range - map vmalloc pages to userspace
 * @vma:		vma to cover (map full range of vma)
 * @addr:		vmalloc memory
 * @pgoff:		number of pages into addr before first page to map
 *
 * Returns:	0 for success, -Exxx on failure
 *
 * This function checks that addr is a valid vmalloc'ed area, and
 * that it is big enough to cover the vma. Will return failure if
 * that criteria isn't met.
 *
 * Similar to remap_pfn_range() (see mm/memory.c)
 */
int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
						unsigned long pgoff)
{
	return remap_vmalloc_range_partial(vma, vma->vm_start,
					   addr, pgoff,
					   vma->vm_end - vma->vm_start);
}
EXPORT_SYMBOL(remap_vmalloc_range);
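remap_vmalloc_range() fits the same .mmap slot. The backing buffer should be allocated with vmalloc_user() so the area carries the VM_USERMAP flag that remap_vmalloc_range_partial() checks for. A sketch with a hypothetical vbuf:

```c
#include <linux/fs.h>
#include <linux/vmalloc.h>

static void *vbuf;	/* hypothetical: vbuf = vmalloc_user(BUF_SIZE); */

static int mydrv_mmap(struct file *filp, struct vm_area_struct *vma)
{
	/* pgoff 0: map from the start of the vmalloc area. The vma's
	 * full range must fit inside the vmalloc'ed buffer, or the
	 * call fails. */
	return remap_vmalloc_range(vma, vbuf, 0);
}
```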