Ascend C

Principles

Example: implementing a vector operator with Ascend C

Part 1: Operator Analysis

Step 1: Clarify the mathematical expression and computation logic

The mathematical expression of the Add operator is:

z = x + y

The computation logic is:

The vector compute interfaces provided by Ascend C all operate on LocalTensor. The input data must first be moved into on-chip storage; a compute interface then adds the two inputs to produce the result, which is finally moved back out to external storage.
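A compressed sketch of that flow, using the names that the full implementation later in this post defines (inQueueX, xGm, yGm, zGm, TILE_LENGTH); the queue hand-offs between the stages are omitted here:

LocalTensor<half> xLocal = inQueueX.AllocTensor<half>(); // reserve on-chip storage
DataCopy(xLocal, xGm, TILE_LENGTH);       // move an input tile in: Global Memory -> local
Add(zLocal, xLocal, yLocal, TILE_LENGTH); // vector add on LocalTensor operands
DataCopy(zGm, zLocal, TILE_LENGTH);       // move the result out: local -> Global Memory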

Step 2: Clarify the inputs and outputs

● The Add operator has two inputs, x and y, and one output, z.

● The supported input data type is half (float16); the output data type is the same as the input data type.

● The supported input shape is (8, 2048); the output shape is the same as the input shape.

● The supported input format is ND.
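Putting these constraints in numbers: with shape (8, 2048) and the 2-byte half type, each of x, y, and z occupies 8 * 2048 * 2 = 32768 bytes. A minimal host-side sizing sketch, assuming the device buffers are allocated through the ACL runtime's aclrtMalloc (the variable name xDevice is illustrative):

size_t byteSize = 8 * 2048 * sizeof(uint16_t); // half is 2 bytes, so 32 KB per tensor
uint8_t* xDevice = nullptr;
aclrtMalloc((void**)&xDevice, byteSize, ACL_MEM_MALLOC_HUGE_FIRST); // same for y and z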

Step 3: Determine the kernel function name and parameters

● Choose a name for the custom kernel function; here it is named add_custom.

● Based on the analysis of the operator's inputs and outputs, the kernel function takes 3 parameters: x, y, and z. x and y are the Global Memory addresses of the inputs, and z is the Global Memory address of the output.

Step 4: Determine the interfaces needed for the implementation

● The implementation moves data between external and internal storage, so consult the data movement interfaces in the Ascend C API reference.

● Consult the vector compute interfaces in the Ascend C API reference.

● The Tensor data structures used in the computation are managed with Queue objects (see the sketch after this list).
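All three interface families named above, in the form they are used in the rest of this post (shown for orientation, not as complete signatures):

DataCopy(dstLocal, srcGlobal, count);       // data movement: Global Memory <-> local memory
Add(dstLocal, src0Local, src1Local, count); // vector compute: elementwise add
queue.EnQue(tensor);                        // Queue management: hand a tensor to the next stage
LocalTensor<half> t = queue.DeQue<half>();  // Queue management: receive it in that stage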

Part 2: Kernel Function Definition

The kernel function is the bridge between the host side and the device side; it is the entry point of an Ascend C operator's device-side implementation.

When the kernel function is called, multiple cores all execute the same kernel code with the same parameters, in parallel.
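Per-core parallelism comes from the block index: every core runs the same code, but GetBlockIdx() returns a different value on each core, so each core addresses its own slice of the data. A one-line sketch of the idea (BLOCK_LENGTH is defined in Part 3):

int32_t offset = BLOCK_LENGTH * GetBlockIdx(); // core i starts at element i * BLOCK_LENGTH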

Step 1: Kernel function prototype
extern "C" __global__ __aicore__ void add_custom(GM_ADDR x, GM_ADDR y, GM_ADDR z)
{
    // The __global__ function type qualifier marks this as a kernel function that can be launched with <<<...>>>
    // The __aicore__ function type qualifier marks the kernel as executing on the device-side AI Core
    // All parameters are uniformly declared with the GM_ADDR macro
}

Note the call relationships between the various functions.

Step 2: Call the operator class's Init and Process functions

The operator class's Init function performs the memory-initialization work, and the Process function implements the core logic of the operator.

extern "C" __global__ __aicore__ void add_custom(GM_ADDR x, GM_ADDR y, GM_ADDR z)
{ // kernel function
    KernelAdd op;
    op.Init(x, y, z);    // executed on the device side
    op.Process();        // executed on the device side
}
Step 3: Wrap the kernel call in an add_custom_do function so that the host program can call it conveniently
#ifndef __CCE_KT_TEST__
// call of kernel function
// This wrapper is only used when building and running the operator on the NPU side.
// When building and running the operator on the CPU side, add_custom can be called directly.
void add_custom_do(uint32_t blockDim, void* l2ctrl, void* stream, uint8_t* x, uint8_t* y, uint8_t* z)
{
    add_custom<<<blockDim, l2ctrl, stream>>>(x, y, z);
    // The kernel is launched with the kernel launch syntax <<<...>>>, which specifies its execution configuration
    // The launch operator is only recognized when compiling for the NPU side; a CPU-side build cannot parse it.
}
#endif
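For orientation, a hedged sketch of how a host program might drive this wrapper on the NPU side; aclrtCreateStream and aclrtSynchronizeStream are the standard ACL runtime calls, and xDevice/yDevice/zDevice are assumed to be device buffers allocated as in the sizing sketch of Part 1:

aclrtStream stream = nullptr;
aclrtCreateStream(&stream);
add_custom_do(8, nullptr, stream, xDevice, yDevice, zDevice); // blockDim = 8 cores
aclrtSynchronizeStream(stream); // block until the kernel has finished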

Part 3: Operator Class Implementation

How is the operator class implemented based on the programming paradigm?

Answer: following the pipeline programming paradigm, the structure is as follows.

class KernelAdd {
public:
    __aicore__ inline KernelAdd() {}
    // Initialization function: performs the memory-initialization work
    __aicore__ inline void Init(GM_ADDR x, GM_ADDR y, GM_ADDR z){}
    // Core processing function: implements the operator logic by calling the private member functions CopyIn, Compute, and CopyOut to form the three-stage vector pipeline
    __aicore__ inline void Process(){}
private:
    // Copy-in function: handles the CopyIn stage; called by Process
    __aicore__ inline void CopyIn(int32_t progress){}
    // Compute function: handles the Compute stage; called by Process
    __aicore__ inline void Compute(int32_t progress){}
    // Copy-out function: handles the CopyOut stage; called by Process
    __aicore__ inline void CopyOut(int32_t progress){}
private:
    TPipe pipe; // Pipe memory management object
    TQue<QuePosition::VECIN, BUFFER_NUM> inQueueX, inQueueY; // input queues; QuePosition is VECIN
    TQue<QuePosition::VECOUT, BUFFER_NUM> outQueueZ; // output queue; QuePosition is VECOUT
    GlobalTensor<half> xGm, yGm, zGm; // objects managing the Global Memory addresses of the inputs and output; xGm and yGm are inputs, zGm is the output
};
Init function implementation

Task 1: Set the Global Memory addresses of the input and output Global Tensors.

Task 2: Use the Pipe memory management object to allocate memory for the input and output Queues.

Obtain the Global Memory offsets of the inputs and outputs that this core is responsible for, and set those offsets on the Global Tensors.
constexpr int32_t TOTAL_LENGTH = 8 * 2048; // total length of data
constexpr int32_t USE_CORE_NUM = 8; // number of cores used
constexpr int32_t BLOCK_LENGTH = TOTAL_LENGTH / USE_CORE_NUM; // length computed by each core
constexpr int32_t TILE_NUM = 8; // split data into 8 tiles for each core
constexpr int32_t BUFFER_NUM = 2; // tensor num for each queue
constexpr int32_t TILE_LENGTH = BLOCK_LENGTH / TILE_NUM / BUFFER_NUM; // each tile is separated into 2 parts, due to double buffering
__aicore__ inline void Init(GM_ADDR x, GM_ADDR y, GM_ADDR z)
{
    // get start index for current core, core parallel
    xGm.SetGlobalBuffer((__gm__ half*)x + BLOCK_LENGTH * GetBlockIdx(), BLOCK_LENGTH); // set input x's Global Memory offset for this core
    yGm.SetGlobalBuffer((__gm__ half*)y + BLOCK_LENGTH * GetBlockIdx(), BLOCK_LENGTH);
    zGm.SetGlobalBuffer((__gm__ half*)z + BLOCK_LENGTH * GetBlockIdx(), BLOCK_LENGTH);
    // pipe alloc memory to queue, the unit is bytes
    pipe.InitBuffer(inQueueX, BUFFER_NUM, TILE_LENGTH * sizeof(half));
    pipe.InitBuffer(inQueueY, BUFFER_NUM, TILE_LENGTH * sizeof(half));
    pipe.InitBuffer(outQueueZ, BUFFER_NUM, TILE_LENGTH * sizeof(half));
}
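Plugging in the numbers: TOTAL_LENGTH = 8 * 2048 = 16384 elements, so BLOCK_LENGTH = 16384 / 8 = 2048 elements per core. Each core's block is cut into TILE_NUM * BUFFER_NUM = 16 chunks, giving TILE_LENGTH = 2048 / 8 / 2 = 128 elements, so each buffer requested from InitBuffer holds 128 * sizeof(half) = 256 bytes.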
Process function implementation

Following the vector programming paradigm, the kernel implementation is split into 3 basic tasks: CopyIn, Compute, and CopyOut. The Process function calls these three functions as follows.

__aicore__ inline void Process(){
    // loop count needs to be doubled, due to double buffering
    constexpr int32_t loopCount = TILE_NUM * BUFFER_NUM;
    // tiling strategy, pipeline parallel
    for(int32_t i = 0; i < loopCount; i++){
        CopyIn(i);
        Compute(i);
        CopyOut(i);
    }
}
CopyIn function implementation
__aicore__ inline void CopyIn(int32_t progress)
{
    // alloc tensors from queue memory
    LocalTensor<half> xLocal = inQueueX.AllocTensor<half>();    
    LocalTensor<half> yLocal = inQueueY.AllocTensor<half>();
    // copy progress_th tile from global tensor to local tensor
    DataCopy(xLocal, xGm[progress * TILE_LENGTH], TILE_LENGTH);
    DataCopy(yLocal, yGm[progress * TILE_LENGTH], TILE_LENGTH);
    // enque input tensors to VECIN queue
    inQueueX.EnQue(xLocal);
    inQueueY.EnQue(yLocal);
}
Compute function implementation
__aicore__ inline void Compute(int32_t progress)
{
    // deque input tensors from VECIN queue
    LocalTensor<half> xLocal = inQueueX.DeQue<half>();
    LocalTensor<half> yLocal = inQueueY.DeQue<half>();
    LocalTensor<half> zLocal = outQueueZ.AllocTensor<half>();
    // call Add instr for computation
    Add(zLocal, xLocal, yLocal, TILE_LENGTH);
    // enque the output tensor to VECOUT queue
    outQueueZ.EnQue<half>(zLocal);
    // free input tensors for reuse
    inQueueX.FreeTensor(xLocal);
    inQueueY.FreeTensor(yLocal);
}
CopyOut function implementation
__aicore__ inline void CopyOut(int32_t progress)
{
    // deque output tensor from VECOUT queue
    LocalTensor<half> zLocal = outQueueZ.DeQue<half>();
    // copy progress_th tile from local tensor to global tensor
    DataCopy(zGm[progress * TILE_LENGTH], zLocal, TILE_LENGTH);
    // free output tensor for reuse
    outQueueZ.FreeTensor(zLocal);
}

Complete sample

/*
 * Copyright (c) Huawei Technologies Co., Ltd. 2022-2023. All rights reserved.
 *
 * Function : z = x + y
 * This sample is a very basic sample that implements vector add on the Ascend platform.
 * In this sample:
 * Length of x / y / z is 8*2048.
 * Number of vector cores used in this sample is 8.
 * Length for each core to compute is 2048.
 * Each core is split into 8 tiles of 2048/8=256 elements; with double buffering, one loop iteration adds 128 elements.
 *
 * This is just a demonstration tiling strategy; in fact we can compute at most
 * 128*255 elements in one loop for the b16 type.
*/
#include "kernel_operator.h"
using namespace AscendC;
constexpr int32_t TOTAL_LENGTH = 8 * 2048; // total length of data
constexpr int32_t USE_CORE_NUM = 8; // number of cores used
constexpr int32_t BLOCK_LENGTH = TOTAL_LENGTH / USE_CORE_NUM; // length computed by each core
constexpr int32_t TILE_NUM = 8; // split data into 8 tiles for each core
constexpr int32_t BUFFER_NUM = 2; // tensor num for each queue
constexpr int32_t TILE_LENGTH = BLOCK_LENGTH / TILE_NUM / BUFFER_NUM; // each tile is separated into 2 parts, due to double buffering
class KernelAdd {
	public:
		__aicore__ inline KernelAdd() {}
		__aicore__ inline void Init(GM_ADDR x, GM_ADDR y, GM_ADDR z) {
			// get start index for current core, core parallel
			xGm.SetGlobalBuffer((__gm__ half*)x + BLOCK_LENGTH * GetBlockIdx(), BLOCK_LENGTH);
			yGm.SetGlobalBuffer((__gm__ half*)y + BLOCK_LENGTH * GetBlockIdx(), BLOCK_LENGTH);
			zGm.SetGlobalBuffer((__gm__ half*)z + BLOCK_LENGTH * GetBlockIdx(), BLOCK_LENGTH);
			// pipe alloc memory to queue, the unit is Bytes
			pipe.InitBuffer(inQueueX, BUFFER_NUM, TILE_LENGTH * sizeof(half));
			pipe.InitBuffer(inQueueY, BUFFER_NUM, TILE_LENGTH * sizeof(half));
			pipe.InitBuffer(outQueueZ, BUFFER_NUM, TILE_LENGTH * sizeof(half));
		}
		__aicore__ inline void Process() {
			// loop count needs to be doubled, due to double buffering
			constexpr int32_t loopCount = TILE_NUM * BUFFER_NUM;
			// tiling strategy, pipeline parallel
			for (int32_t i = 0; i < loopCount; i++) {
				CopyIn(i);
				Compute(i);
				CopyOut(i);
			}
		}
	private:
		__aicore__ inline void CopyIn(int32_t progress) {
			// alloc tensor from queue memory
			LocalTensor<half> xLocal = inQueueX.AllocTensor<half>();
			LocalTensor<half> yLocal = inQueueY.AllocTensor<half>();
			// copy progress_th tile from global tensor to local tensor
			DataCopy(xLocal, xGm[progress * TILE_LENGTH], TILE_LENGTH);
			DataCopy(yLocal, yGm[progress * TILE_LENGTH], TILE_LENGTH);
			// enque input tensors to VECIN queue
			inQueueX.EnQue(xLocal);
			inQueueY.EnQue(yLocal);
		}
		__aicore__ inline void Compute(int32_t progress) {
			// deque input tensors from VECIN queue
			LocalTensor<half> xLocal = inQueueX.DeQue<half>();
			LocalTensor<half> yLocal = inQueueY.DeQue<half>();
			LocalTensor<half> zLocal = outQueueZ.AllocTensor<half>();
			// call Add instr for computation
			Add(zLocal, xLocal, yLocal, TILE_LENGTH);
			// enque the output tensor to VECOUT queue
			outQueueZ.EnQue<half>(zLocal);
			// free input tensors for reuse
			inQueueX.FreeTensor(xLocal);
			inQueueY.FreeTensor(yLocal);
		}
		__aicore__ inline void CopyOut(int32_t progress) {
			// deque output tensor from VECOUT queue
			LocalTensor<half> zLocal = outQueueZ.DeQue<half>();
			// copy progress_th tile from local tensor to global tensor
			DataCopy(zGm[progress * TILE_LENGTH], zLocal, TILE_LENGTH);
			// free output tensor for reuse
			outQueueZ.FreeTensor(zLocal);
		}
	private:
		TPipe pipe;
		// create queues for input, in this case depth is equal to buffer num
		TQue<QuePosition::VECIN, BUFFER_NUM> inQueueX, inQueueY;
		// create queue for output, in this case depth is equal to buffer num
		TQue<QuePosition::VECOUT, BUFFER_NUM> outQueueZ;
		GlobalTensor<half> xGm, yGm, zGm;
};
// implementation of kernel function
extern "C" __global__ __aicore__ void add_custom(GM_ADDR x, GM_ADDR y, GM_ADDR z) {
	KernelAdd op;
	op.Init(x, y, z);
	op.Process();
}
#ifndef __CCE_KT_TEST__
// call of kernel function
void add_custom_do(uint32_t blockDim, void* l2ctrl, void* stream, uint8_t* x, uint8_t* y, uint8_t* z) {
	add_custom<<<blockDim, l2ctrl, stream>>>(x, y, z);
}
#endif
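As noted in Part 2, a CPU-side build can call add_custom directly. In the official Ascend C samples this is done from a separate main.cpp through the tikicpulib helpers; the following driver is a sketch of that pattern (GmAlloc, GmFree, and the ICPU_RUN_KF macro come from tikicpulib, and the exact setup should be checked against your CANN version):

#ifdef __CCE_KT_TEST__
#include "tikicpulib.h"
int32_t main(int32_t argc, char* argv[])
{
    size_t byteSize = TOTAL_LENGTH * sizeof(uint16_t); // half is 2 bytes
    uint8_t* x = (uint8_t*)AscendC::GmAlloc(byteSize);
    uint8_t* y = (uint8_t*)AscendC::GmAlloc(byteSize);
    uint8_t* z = (uint8_t*)AscendC::GmAlloc(byteSize);
    // ... fill x and y with half-precision input data ...
    ICPU_RUN_KF(add_custom, USE_CORE_NUM, x, y, z); // run the kernel on the CPU for debugging
    // ... compare z against golden data ...
    AscendC::GmFree((void*)x);
    AscendC::GmFree((void*)y);
    AscendC::GmFree((void*)z);
    return 0;
}
#endif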
