Understanding slot mapping

Today I tried to understand what slot mapping does, referring to GitHub - MDK8888/vllmini: A minimal implementation of vllm.

To understand it, I put together a simple slot mapping by following the source code:

import torch

# Example parameters
seq_len = 16     # Length of the sequence, in tokens
block_size = 8   # Number of slots per block
allocated = [0, 1, 2]  # Indices of the allocated blocks

# The original code reads self.block_size, so mimic that with a stub object
self = type('', (), {})()
self.block_size = block_size

# Create the slot mappings, one per allocated block
slot_mappings = [torch.arange(seq_len, dtype=torch.long) + block * self.block_size for block in allocated]

# Print the result
for i, mapping in enumerate(slot_mappings):
    print(f"Layer {allocated[i]}: {mapping}")

Running it gives the following output:

Here allocated corresponds to the KV caches of three layers; each layer has its own block table.

Layer 0: tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15])
Layer 1: tensor([ 8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23])
Layer 2: tensor([16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31])

This shows how slots are assigned for a KV cache with a block size of 8 and a sequence length of 16. The sequence occupies two blocks; taking Layer 0 as an example, the block indices its slots fall into are [0]*8 followed by [1]*8, and the offsets within each block run [0, 1, 2, 3, 4, 5, 6, 7].
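To make this concrete, the decomposition can be checked directly: slot // block_size gives the physical block index and slot % block_size gives the offset inside that block. This snippet only reuses the variables defined above as an illustration; it is not code from vllmini.

# Decompose each slot into (block index, offset within block)
for layer, mapping in zip(allocated, slot_mappings):
    block_indices = mapping // block_size
    offsets = mapping % block_size
    print(f"Layer {layer} block indices: {block_indices.tolist()}")
    print(f"Layer {layer} offsets:       {offsets.tolist()}")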

In addition, allocated here usually corresponds to the number of transformer layers in the network, and each layer has its own block table.
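As a rough sketch of what the slot mapping is ultimately used for (my own illustration under assumed shapes, not code from vllmini): a per-layer KV cache can be viewed as num_blocks * block_size physical slots, and the slot mapping tells the cache write which slot each token's key/value vector lands in. vLLM proper performs this write with a dedicated kernel, but the indexing idea is roughly the same.

import torch

num_blocks, block_size = 4, 8
num_heads, head_dim = 2, 4
seq_len = 16

# Hypothetical per-layer key cache: [num_blocks, block_size, num_heads, head_dim]
key_cache = torch.zeros(num_blocks, block_size, num_heads, head_dim)
keys = torch.randn(seq_len, num_heads, head_dim)  # new keys for one sequence

# Slot mapping for a sequence whose first allocated block is block 0
slot_mapping = torch.arange(seq_len, dtype=torch.long)

# View the cache as a flat list of slots and scatter the keys into it
key_cache.view(-1, num_heads, head_dim)[slot_mapping] = keys

# Token 8 has slot 8, i.e. block 1, offset 0
print(torch.equal(key_cache[1, 0], keys[8]))  # True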
