Today I tried to understand what slot mapping does, using GitHub - MDK8888/vllmini: A minimal implementation of vllm as a reference.
To get a feel for it, I constructed a simple slot mapping based on the source code:
import torch

# Example parameters
seq_len = 16           # Length of each sequence
block_size = 8         # Number of slots per block
allocated = [0, 1, 2]  # Index of the block allocated to each layer

# Create the slot mappings: for each allocated block, the mapping is
# seq_len consecutive slot indices starting at that block's base
# offset (block * block_size)
slot_mappings = [torch.arange(seq_len, dtype=torch.long) + block * block_size
                 for block in allocated]

# Print the result
for i, mapping in enumerate(slot_mappings):
    print(f"Layer {allocated[i]}: {mapping}")
Running it produces the following output:
Here allocated corresponds to the KV cache of three layers; each layer has its own block table.
Layer 0: tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15])
Layer 1: tensor([ 8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23])
Layer 2: tensor([16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31])
This shows how slot mappings are assigned for a KV cache with a block size of 8 and a sequence length of 16. The sequence occupies two blocks, so relative to each layer's base block, the block index for each token in the slot mapping is [0]*8 followed by [1]*8, and the offset within each block is [0, 1, 2, 3, 4, 5, 6, 7]. In other words, a slot index decomposes as slot = block_index * block_size + offset, as the sketch below illustrates.
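As a quick sanity check, here is a minimal sketch (reusing slot_mappings, allocated, and block_size from the code above) that recovers the block index and in-block offset from each slot index with integer division and modulo:

# Decompose each slot index back into (block index, in-block offset);
# this is the inverse of slot = block_index * block_size + offset
for i, mapping in enumerate(slot_mappings):
    block_indices = mapping // block_size  # which physical block each token lands in
    offsets = mapping % block_size         # position within that block
    print(f"Layer {allocated[i]} blocks:  {block_indices}")
    print(f"Layer {allocated[i]} offsets: {offsets}")

For layer 0 this prints blocks [0]*8 followed by [1]*8 and offsets [0..7, 0..7], matching the description above.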
Also, allocated here typically corresponds to the transformer's layers: each layer gets its own block table, and hence its own slot mapping. A sketch of how such a mapping is used follows.
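To see what the slot mapping is ultimately for, here is a minimal sketch (not vllmini's actual code; key_cache, keys, num_blocks, and head_dim are made-up names for illustration) of writing one layer's keys into a paged KV cache. The cache is viewed as a flat array of slots, and the slot mapping tells each token which slot to write into:

# Hypothetical paged cache for one layer: num_blocks blocks of block_size slots each,
# flattened so that a slot index addresses a row directly
num_blocks, head_dim = 4, 64
key_cache = torch.zeros(num_blocks * block_size, head_dim)

# Fake keys for one sequence of seq_len tokens
keys = torch.randn(seq_len, head_dim)

# Scatter token i's key into the slot given by slot_mappings[0][i]
key_cache[slot_mappings[0]] = keys

This is the point of the indirection: the sequence's tokens can live in non-contiguous physical blocks, and the attention kernel only needs the slot mapping (or the block table) to find them.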