A 3D action (behavior) recognition model takes a fixed number of consecutive frames covering one action as input for inference. Frames extracted from a video are usually converted to RGB, so when several consecutive frames are laid out in memory the channel order is interleaved: RGBRGBRGB... Some 3D recognition models, however, require the input in planar channel order, i.e. RRR...GGG...BBB... So how do we convert the former layout into the latter?
The example here assumes the model has been optimized and converted into a TensorRT engine and is invoked through TensorRT. The input buffer TensorRT reads the model input from stores data linearly, so the first thing to understand is the order in which a single RGB image is laid out in that linear buffer; without this, the rest will not make sense. If we view the raw three-channel image data as a three-dimensional H×W×C array data, the linear order in the input buffer is:
data[0][0][0], data[0][0][1], data[0][0][2], ..., data[0][W-1][0], data[0][W-1][1], data[0][W-1][2]
data[1][0][0], data[1][0][1], data[1][0][2], ..., data[1][W-1][0], data[1][W-1][1], data[1][W-1][2]
...
data[H-1][0][0], data[H-1][0][1], data[H-1][0][2], ..., data[H-1][W-1][0], data[H-1][W-1][1], data[H-1][W-1][2]
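In other words, the linear offset of pixel (h, w), channel c in this interleaved HWC layout is h*W*C + w*C + c. A quick CPU check of that index formula (plain C++; the helper name hwcIndex is my own):

```cpp
#include <cstddef>

// Linear offset of pixel (h, w), channel c inside one packed (interleaved)
// H x W x C image -- the HWC layout shown above.
inline unsigned int hwcIndex(unsigned int h, unsigned int w, unsigned int c,
                             unsigned int W, unsigned int C)
{
    return h * W * C + w * C + c;
}
```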
Converting multiple RGB images stored back to back in the input buffer into the RRR...GGG...BBB... layout is fairly abstract, and the index correspondence can be confusing at first, so let's start with a simpler case. Suppose there are only two RGB images, whose data correspond to the three-dimensional arrays data1 and data2. Stored linearly in the input buffer in RRGGBB order, the sequence is:
data1[0][0][0], data1[0][1][0], data1[0][2][0], ..., data1[0][W-1][0]
...
data1[H-1][0][0], data1[H-1][1][0], data1[H-1][2][0], ..., data1[H-1][W-1][0]
data2[0][0][0], data2[0][1][0], data2[0][2][0], ..., data2[0][W-1][0]
...
data2[H-1][0][0], data2[H-1][1][0], data2[H-1][2][0], ..., data2[H-1][W-1][0]
data1[0][0][1], data1[0][1][1], data1[0][2][1], ..., data1[0][W-1][1]
...
data1[H-1][0][1], data1[H-1][1][1], data1[H-1][2][1], ..., data1[H-1][W-1][1]
data2[0][0][1], data2[0][1][1], data2[0][2][1], ..., data2[0][W-1][1]
...
data2[H-1][0][1], data2[H-1][1][1], data2[H-1][2][1], ..., data2[H-1][W-1][1]
data1[0][0][2], data1[0][1][2], data1[0][2][2], ..., data1[0][W-1][2]
...
data1[H-1][0][2], data1[H-1][1][2], data1[H-1][2][2], ..., data1[H-1][W-1][2]
data2[0][0][2], data2[0][1][2], data2[0][2][2], ..., data2[0][W-1][2]
...
data2[H-1][0][2], data2[H-1][1][2], data2[H-1][2][2], ..., data2[H-1][W-1][2]
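So for frame s out of S frames, pixel (h, w), channel c lands at destination offset c*(S*H*W) + s*H*W + h*W + w. The whole repacking can be sketched on the CPU in a few lines (plain C++, toy dimensions; toPlanar is my own name):

```cpp
#include <vector>

// Repack S back-to-back interleaved HWC frames (RGBRGB...) into one planar
// buffer: all R planes of all frames, then all G planes, then all B planes.
std::vector<int> toPlanar(const std::vector<int>& frames,
                          int S, int H, int W, int C)
{
    std::vector<int> out(frames.size());
    for (int s = 0; s < S; ++s)
        for (int h = 0; h < H; ++h)
            for (int w = 0; w < W; ++w)
                for (int c = 0; c < C; ++c)
                    out[c * S * H * W + s * H * W + h * W + w] =
                        frames[s * H * W * C + h * W * C + w * C + c];
    return out;
}
```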
Now for the general case. Suppose the model requires input images of width W, height H, and outC channels, while the original image itself has inC channels; the required number of consecutive frames is SeqLength; the original image is stored in the buffer inPtr and the model input data in the buffer outPtr. Normalization uses per-channel scale factors scales (e.g. 1/255; stored per channel, inC entries) and per-channel means to subtract, means (stored per channel, inC entries). Each thread block uses outC threads in the channel direction, threadsPerBlockW threads in the width direction, and threadsPerBlockH threads in the height direction. For a pixel (w, h, c) of the original image, its linear position in inPtr is
unsigned int inIdx = h * inRowPitch + w * inC + c;
and its position in outPtr after conversion is
unsigned int outIdx = c * SHW + h * W + w;
where inRowPitch = W * inC and SHW = SeqLength * H * W. Note that this outIdx places the pixel of the first frame; frame s lands at c * SHW + s * H * W + h * W + w, so in practice the convert() function below is invoked once per frame, with outPtr offset by s * H * W floats for frame s.
The complete implementation as a CUDA kernel:
__device__ __constant__ unsigned int kSwapChannels[3] = {2, 1, 0};

// Maps an output channel index to the input channel to read from.
// swap == true reverses the channel order (RGB <-> BGR).
template <bool swap>
__device__ __forceinline__ unsigned int inChannel(unsigned int i);

template <>
__device__ __forceinline__ unsigned int inChannel<false>(unsigned int i)
{
    return i;
}

template <>
__device__ __forceinline__ unsigned int inChannel<true>(unsigned int i)
{
    return kSwapChannels[i];
}

// One thread per (channel, pixel): reads the interleaved HWC input and
// writes the planar CSHW output, applying per-channel scale and mean.
template <typename OutT, typename InT, bool swapChannel>
__global__ void
kernelSHWCToCSHW(
    OutT* out, unsigned int H, unsigned int W, unsigned int SHW, const InT* in,
    unsigned int inC, unsigned int inRowPitch, float* scales, float* means)
{
    unsigned int c = threadIdx.x;                           // blockDim.x == outC
    unsigned int w = blockIdx.x * blockDim.y + threadIdx.y;
    unsigned int h = blockIdx.y * blockDim.z + threadIdx.z;
    if (w >= W || h >= H) {
        return;
    }
    unsigned int inIdx = h * inRowPitch + w * inC + inChannel<swapChannel>(c);
    float inData = 0.0f;
    if (c < inC) {  // channels beyond inC (when outC > inC) are zero-padded
        inData = (float)in[inIdx];
    }
    unsigned int outIdx = c * SHW + h * W + w;
    // means[] has been pre-folded to -scale*mean, so this computes (in - mean) * scale.
    float val = inData * scales[c] + means[c];
    out[outIdx] = val;
}

// Converts one frame. scales and means are dereferenced on the device, so
// they must live in GPU-readable memory (e.g. allocated with cudaMallocManaged).
cudaError_t
convert(
    void* outPtr, unsigned int outC, unsigned int H, unsigned int W, unsigned int SeqLength,
    const void* inPtr, unsigned int inC, float* scales, float* means,
    bool swapChannel, cudaStream_t stream, int threadsPerBlockH, int threadsPerBlockW)
{
    unsigned int HW = H * W;
    unsigned int SHW = SeqLength * HW;
    unsigned int inRowPitch = W * inC;
    // Fold the mean into the scale so the kernel needs one fused multiply-add:
    // in * scale + (-scale * mean) == (in - mean) * scale. Beware: this
    // rewrites means in place, so it must only be applied once per means
    // buffer if convert() is called repeatedly (e.g. once per frame).
    for (unsigned int i = 0; i < inC; ++i) {
        means[i] = -scales[i] * means[i];
    }
    dim3 blocks(outC, threadsPerBlockW, threadsPerBlockH);
    dim3 grids(
        (W + threadsPerBlockW - 1) / threadsPerBlockW,
        (H + threadsPerBlockH - 1) / threadsPerBlockH, 1);
    if (swapChannel) {
        kernelSHWCToCSHW<float, unsigned char, true><<<grids, blocks, 0, stream>>>(
            (float*)outPtr, H, W, SHW, (const unsigned char*)inPtr, inC, inRowPitch,
            scales, means);
    } else {
        kernelSHWCToCSHW<float, unsigned char, false><<<grids, blocks, 0, stream>>>(
            (float*)outPtr, H, W, SHW, (const unsigned char*)inPtr, inC, inRowPitch,
            scales, means);
    }
    return cudaGetLastError();
}
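Before wiring the kernel into a pipeline, it helps to have a CPU reference of the same index mapping to unit-test against. The sketch below (plain C++, my own naming) repacks one frame the way kernelSHWCToCSHW does, including the optional BGR swap, but takes the raw means (not the pre-folded -scale*mean form):

```cpp
#include <vector>

// CPU reference for one frame: interleaved HWC (uint8) -> the frame's planar
// slot in a CSHW float buffer, mirroring the index math of kernelSHWCToCSHW.
// For frame s, frameOut must point at outPtr + s * H * W.
void convertFrameCpu(float* frameOut, unsigned int outC, unsigned int H,
                     unsigned int W, unsigned int SHW,
                     const unsigned char* in, unsigned int inC,
                     const float* scales, const float* means, bool swapChannel)
{
    for (unsigned int c = 0; c < outC; ++c) {
        for (unsigned int h = 0; h < H; ++h) {
            for (unsigned int w = 0; w < W; ++w) {
                float v = 0.0f;
                if (c < inC) {  // zero-pad channels beyond inC
                    unsigned int ic = swapChannel ? (inC - 1 - c) : c;  // RGB <-> BGR
                    v = (float)in[h * W * inC + w * inC + ic];
                }
                frameOut[c * SHW + h * W + w] = (v - means[c]) * scales[c];
            }
        }
    }
}
```

Running this alongside the kernel on the same frame and comparing the two output buffers is a cheap way to catch index mistakes.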
To also handle original images in BGR format, the swapChannel parameter controls whether the channels are swapped. This code was adapted from sample code in NVIDIA's DeepStream so that it can be called independently from our own programs, without depending on anything in DeepStream.
Finally, for the case where you would rather not write a kernel at all, here is sample code that linearly stores a sequence of RGB (or BGR) images into the TensorRT input buffer in RRR...GGG...BBB... format:
bool ARNet::processInput(const samplesCommon::BufferManager& buffers,
                         vector<cv::Mat>* p_cvImgs) {
    const vector<cv::Mat>& cvImgs = *p_cvImgs;
    // input shape: 1x3x16x224x224
    const int inputH = mInputDims.d[3];    // 224
    const int inputW = mInputDims.d[4];    // 224
    const int length = mInputDims.d[2];    // 16
    const int channels = mInputDims.d[1];  // 3
    ...
    float* hostDataBuffer =
        static_cast<float*>(buffers.getHostBuffer(mParams.inputTensorNames[0]));
    float* data = hostDataBuffer;
    for (int len = 0; len < length; len++) {
        cv::Mat cv_img_origin = cvImgs[len];
        cv::Mat cv_img = cv::Mat::zeros(inputH, inputW, CV_8UC3);
        ... // make a letter box with cv_img_origin and store it into cv_img
        cv::Mat floatImg;
        cv_img.convertTo(floatImg, CV_32FC3);
        // Wrap each destination channel plane of the host buffer in a cv::Mat
        // header so that cv::split() writes the channel data straight into the
        // input buffer -- no extra copy needed.
        vector<cv::Mat> ch3Img;
        for (int i = 0; i < channels; ++i) {
            // ptr + c*S*W*H + n*W*H
            data = hostDataBuffer + i * length * inputW * inputH + len * inputW * inputH;
            cv::Mat channel(inputH, inputW, CV_32FC1, data);
            ch3Img.push_back(channel);
        }
        cv::split(floatImg, ch3Img);
        normalize(ch3Img, m_mean_val, inputH, inputW, channels, false);  // true: swap channels
    }
    return true;
}
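The normalize() helper is not shown above; presumably it subtracts the per-channel means from each plane in place, optionally swapping the first and last planes first when the swap-channel flag is set. A plain-C++ sketch of that idea (hypothetical helper, my own naming, operating on one frame's planes laid out contiguously and without the OpenCV dependency for simplicity):

```cpp
#include <algorithm>
#include <vector>

// Subtract per-channel means from C channel planes stored back to back in
// `planes` (each plane holds H*W floats, C = means.size()). If swapChannel
// is true, the first and last planes are exchanged beforehand (BGR <-> RGB).
void normalizePlanes(std::vector<float>& planes, const std::vector<float>& means,
                     int H, int W, bool swapChannel)
{
    const int C = (int)means.size();
    const int planeSize = H * W;
    if (swapChannel && C == 3) {
        std::swap_ranges(planes.begin(), planes.begin() + planeSize,
                         planes.begin() + 2 * planeSize);
    }
    for (int c = 0; c < C; ++c) {
        float* p = planes.data() + c * planeSize;
        for (int i = 0; i < planeSize; ++i)
            p[i] -= means[c];
    }
}
```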