Memory Fragmentation


Definition of Memory Fragmentation



http://blog.csdn.net/dean_yanqing/article/details/6663775

While reading the OSE documentation (OSE Architecture User's Guide), I came across a summary of memory fragmentation that is very well put, so I have excerpted it below:

The original text follows (a Chinese translation is linked after it; someone has already translated it into Chinese, though I don't know where the original first appeared ^_^).

   Why Is Memory Fragmentation Important

In embedded systems development, an understanding of memory fragmentation is crucial. Ultimately, memory fragmentation leads to out-of-memory conditions, even when there might be plenty of free memory left in the system.

(Blogger's note: this claim feels a bit loose to me. When physical memory runs short, the system can fall back on virtual memory, using the disk as a stand-in, so it should be able to keep running for quite a while; still, memory may eventually be exhausted anyway.)

  What Is Memory Fragmentation

Fragmented memory is the term used to describe all of a system's unusable free memory. These resources remain unused, because the memory allocator that you are using is — for one reason or another — unable to make this memory available to you.

  Compile Time vs. Run Time

Memory is allocated in many different contexts. The programmer can (by way of the compiler and linker) allocate memory for data in structures, unions, arrays, and scalars as local, static, or global variables. The programmer can also allocate memory dynamically in run time using calls such as malloc(). When the compiler and linker perform the memory allocation function, memory fragmentation does not arise, because the compiler understands the lifetime of the data. The data lifetime offers the nice advantage of being stackable (last in, first out). This makes it possible for the memory allocator to work in a very efficient and non-fragmenting manner. Generally, memory allocations issued during run time are not stackable. Memory allocations are independent in time, which makes the problem extremely difficult to resolve.
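
A minimal sketch of the two allocation contexts described above (the names and sizes are illustrative, not taken from the OSE text):

    #include <stdlib.h>

    static int table[64];        /* laid out by the compiler/linker: lifetime is the whole program */

    void process(void)
    {
        int scratch[16];         /* stack allocation: strictly last-in, first-out lifetime         */
        int *buf = malloc(100);  /* run-time allocation: freed whenever the program decides,       */
        if (buf == NULL)         /* independently of the call stack, which is what opens the       */
            return;              /* door to fragmentation                                          */

        /* ... use table, scratch and buf ... */
        (void)table; (void)scratch;

        free(buf);
    }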

Internal/External Fragmentation and Overhead

Memory allocators waste memory in three basic ways:

• Overhead

• Internal fragmentation

• External fragmentation

The memory allocator needs to store some data describing the state of its allocations. That space is generally called overhead. This is information about the location, size, and ownership of any free blocks and about other internal states. A run-time allocator typically has no better place to store this overhead information than in the memory it manages.
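
One common arrangement, shown here only as an illustrative sketch (the field names are made up, not taken from any particular allocator), is a small header kept immediately in front of each block the allocator hands out:

    #include <stddef.h>

    /* Bookkeeping an allocator might keep inside the memory it manages.
     * Every block, allocated or free, pays this overhead in addition to
     * its payload.                                                       */
    struct block_header {
        size_t size;                /* usable size of the block           */
        struct block_header *next;  /* next block in the free list        */
        unsigned char in_use;       /* ownership / allocation state       */
    };

    /* User data starts right after the header, so each allocation costs
     * sizeof(struct block_header) bytes of overhead on top of the
     * requested size.                                                    */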

A memory allocator needs to adhere to some basic memory allocation rules. For example, all memory allocations must start at an address that is divisible by 4, 8, or 16, depending on the processor architecture. There may also be other reasons for the memory allocator to only assign blocks of certain predefined sizes to its clients (cache-line size perhaps). When a client requests a block of 43 bytes, it may well get 44 or 48, or an even larger number. This extra space that results from rounding the requested size upwards is called internal fragmentation.
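
The rounding in the 43-byte example can be written down directly; a minimal sketch, assuming an allocation granularity of 8 bytes:

    #include <stdio.h>

    /* Round a request up to the next multiple of the allocator's
     * granularity (8 bytes is an assumption for illustration).    */
    static size_t round_up(size_t request, size_t granularity)
    {
        return (request + granularity - 1) / granularity * granularity;
    }

    int main(void)
    {
        size_t request = 43;
        size_t granted = round_up(request, 8);                    /* 48 */
        printf("requested %zu, granted %zu, internal fragmentation %zu bytes\n",
               request, granted, granted - request);              /* 5 bytes wasted */
        return 0;
    }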

External fragmentation is created when the memory allocator allocates blocks with unused gaps in between them. This can occur, for example, when an application allocates three blocks in succession and then frees the one in the middle. The memory allocator might reuse the middle block for future allocations, but it is no longer possible to allocate a block as large as all free memory.
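
The three-blocks scenario can be sketched with ordinary malloc()/free() calls (block sizes are arbitrary, and real allocator behaviour varies, so this only illustrates the pattern):

    #include <stdlib.h>

    int main(void)
    {
        /* Three successive allocations ...                                */
        char *a = malloc(1024);
        char *b = malloc(1024);
        char *c = malloc(1024);
        if (!a || !b || !c)
            return 1;

        /* ... then the middle one is freed, conceptually leaving a 1 KB
         * hole between a and c. That hole can satisfy requests of up to
         * 1 KB, but it cannot be merged with other free regions, so no
         * single request as large as "all free memory" can be honoured
         * even though the total amount of free memory is unchanged.       */
        free(b);

        free(a);
        free(c);
        return 0;
    }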

Provided the memory allocator doesn't change its implementation or rounding policy in run time, the
overhead and internal fragmentation remains constant throughout an application's lifetime. While overhead
and internal fragmentation may be undesired (because they waste memory), external fragmentation is the
real enemy of embedded systems. External fragmentation is the allocation problem that kills systems.
Whenever the word fragmentation is used throughout the rest of this discussion, think external fragmentation.

Definition

Now we are ready for a definition of (external) memory fragmentation. Other definitions are possible, but this is the one most commonly used. It applies to external fragmentation, but can be modified to include internal fragmentation (by including internal fragmentation in the denominator).

Figure 5.1 Definition of Memory Fragmentation
(The figure itself is not reproduced here; judging from the examples that follow, the definition it illustrates is: fragmentation = 1 - (size of largest free block) / (total free memory).)

Fragmentation is a fraction between 0 and 1. A system in which fragmentation = 1 (100%) is completely out of memory. With all free memory in a single block (the largest), fragmentation is 0%. With one quarter of all free memory in the largest block, fragmentation is 75%. For example, fragmentation is 99% in a system with 5 MB of free memory, when the largest block available for allocation is 50 KB.
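
A small sketch of that arithmetic (the formula is reconstructed from the examples above, not quoted from the figure):

    #include <stdio.h>

    /* fragmentation = 1 - largest_free_block / total_free_memory */
    static double fragmentation(double largest_free_block, double total_free_memory)
    {
        return 1.0 - largest_free_block / total_free_memory;
    }

    int main(void)
    {
        /* 5 MB of free memory whose largest free block is 50 KB. */
        double f = fragmentation(50.0 * 1024, 5.0 * 1024 * 1024);
        printf("fragmentation = %.0f%%\n", f * 100.0);   /* prints: fragmentation = 99% */
        return 0;
    }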


A friend has already translated the article into Chinese; if you prefer reading it in Chinese, see:

http://blog.201314.info/2010/04/25/%E8%BD%AC%EF%BC%9A%E5%86%85%E5%AD%98%E7%A2%8E%E7%89%87%E4%BA%A7%E7%94%9F%E5%8E%9F%E5%9B%A0%E5%8F%8A%E5%A4%84%E7%90%86%E6%8A%80%E6%9C%AF.html

