[Code Analysis] TensorRT sampleINT8 Explained

Contents

Preface
Code Analysis
  Main Entry
  Building the Network
  BatchStream
  Inference
  Resource Cleanup

Preface

TensorRT can quantize a network to INT8, which substantially speeds up inference. This post analyzes the code of the MNIST INT8 sample in detail and explains how to use TensorRT to apply INT8 quantization to a network.

For background on INT8 quantization, see the post TensorRT INT8校准与量化原理 (TensorRT INT8 Calibration and Quantization Principles).

 

Code Analysis

The sampleINT8 source code is on GitHub: https://github.com/NVIDIA/TensorRT/tree/release/6.0/samples/opensource/sampleINT8

The program's main flow consists of these stages: main entry and input-argument initialization -> network build -> network inference -> resource cleanup. The sections below analyze the code stage by stage.
 

Main Entry

//!
//! \brief Initializes members of the params struct using the command line args
//!
SampleINT8Params initializeSampleParams(const samplesCommon::Args& args, int batchSize)
{
    SampleINT8Params params;
    // Use directories provided by the user, in addition to default directories.
    params.dataDirs = args.dataDirs;
    params.dataDirs.emplace_back("data/mnist/");
    params.dataDirs.emplace_back("int8/mnist/");
    params.dataDirs.emplace_back("samples/mnist/");
    params.dataDirs.emplace_back("data/samples/mnist/");
    params.dataDirs.emplace_back("data/int8/mnist/");
    params.dataDirs.emplace_back("data/int8_samples/mnist/");

    params.batchSize = batchSize;
    params.dlaCore = args.useDLACore;
    params.nbCalBatches = 10;
    params.calBatchSize = 50;
    params.inputTensorNames.push_back("data");
    params.outputTensorNames.push_back("prob");
    params.prototxtFileName = "deploy.prototxt";
    params.weightsFileName = "mnist_lenet.caffemodel";
    params.networkName = "mnist";
    return params;
}

//!
//! \brief Prints the help information for running this sample
//!
void printHelpInfo()
{
    std::cout << "Usage: ./sample_int8 [-h or --help] [-d or --datadir=<path to data directory>] "
                 "[--useDLACore=<int>]"
              << std::endl;
    std::cout << "--help          Display help information" << std::endl;
    std::cout << "--datadir       Specify path to a data directory, overriding the default. This option can be used "
                 "multiple times to add multiple directories."
              << std::endl;
    std::cout << "--useDLACore=N  Specify a DLA engine for layers that support DLA. Value can range from 0 to n-1, "
                 "where n is the number of DLA engines on the platform."
              << std::endl;
    std::cout << "batch=N         Set batch size (default = 32)." << std::endl;
    std::cout << "start=N         Set the first batch to be scored (default = 100). All batches before this batch will "
                 "be used for calibration."
              << std::endl;
    std::cout << "score=N         Set the number of batches to be scored (default = 400)." << std::endl;
}

int main(int argc, char** argv)
{
    if (argc >= 2 && (!strncmp(argv[1], "help", 4) || !strncmp(argv[1], "--help", 6) || !strncmp(argv[1], "--h", 3)))
    {
        printHelpInfo();
        return EXIT_FAILURE;
    }

    // By default we score over 40K images starting at 3200, so we don't score those used to search calibration
    int batchSize = 32;
    int firstScoreBatch = 100;
    int nbScoreBatches = 400;

    // Parse extra arguments
    for (int i = 1; i < argc; ++i)
    {
        if (!strncmp(argv[i], "batch=", 6))
        {
            batchSize = atoi(argv[i] + 6);
        }
        else if (!strncmp(argv[i], "start=", 6))
        {
            firstScoreBatch = atoi(argv[i] + 6);
        }
        else if (!strncmp(argv[i], "score=", 6))
        {
            nbScoreBatches = atoi(argv[i] + 6);
        }
    }

    if (batchSize > 128)
    {
        gLogError << "Please provide batch size <= 128" << std::endl;
        return EXIT_FAILURE;
    }

    if ((firstScoreBatch + nbScoreBatches) * batchSize > 500000)
    {
        gLogError << "Only 50000 images available" << std::endl;
        return EXIT_FAILURE;
    }

    samplesCommon::Args args;
    samplesCommon::parseArgs(args, argc, argv);

    SampleINT8 sample(initializeSampleParams(args, batchSize));

......
  • Check the program's input arguments; if they do not meet the requirements, print the help message.
  • Set the default parameter values via initializeSampleParams (an example invocation follows).
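Based on the argument parsing above, and on the invocation shown in the run log later in this post, an illustrative command line that overrides these defaults might look like the following (the values here are hypothetical):

./sample_int8 mnist batch=64 start=200 score=100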
int main(int argc, char** argv)
{
......

    std::vector<std::string> dataTypeNames = {"FP32", "FP16", "INT8"};
    std::vector<DataType> dataTypes = {DataType::kFLOAT, DataType::kHALF, DataType::kINT8};
    std::vector<std::pair<float, float>> scores(3, std::make_pair(0.0f, 0.0f));
    for (size_t i = 0; i < dataTypes.size(); i++)
    {
        gLogInfo << dataTypeNames[i] << " run:" << nbScoreBatches << " batches of size " << batchSize << " starting at "
                 << firstScoreBatch << std::endl;

        if (!sample.build(dataTypes[i]))
        {
            if (!sample.isSupported(dataTypes[i]))
            {
                gLogWarning << "Skipping " << dataTypeNames[i] << " since the platform does not support this data type."
                            << std::endl;
                continue;
            }
            return gLogger.reportFail(sampleTest);
        }
        if (!sample.infer(scores[i], firstScoreBatch, nbScoreBatches))
        {
            return gLogger.reportFail(sampleTest);
        }
    }

......
  • Build the network and run inference for each of the three data types FP32, FP16, and INT8.

 

Building the Network

bool SampleINT8::build(DataType dataType)
{

    auto builder = SampleUniquePtr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(gLogger.getTRTLogger()));
    if (!builder)
    {
        return false;
    }

    auto network = SampleUniquePtr<nvinfer1::INetworkDefinition>(builder->createNetwork());
    if (!network)
    {
        return false;
    }

    auto config = SampleUniquePtr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    if (!config)
    {
        return false;
    }

    auto parser = SampleUniquePtr<nvcaffeparser1::ICaffeParser>(nvcaffeparser1::createCaffeParser());
    if (!parser)
    {
        return false;
    }

    if ((dataType == DataType::kINT8 && !builder->platformHasFastInt8())
        || (dataType == DataType::kHALF && !builder->platformHasFastFp16()))
    {
        return false;
    }

    auto constructed = constructNetwork(builder, network, config, parser, dataType);

......
  • The standard TensorRT workflow: create an IBuilder -> create an INetworkDefinition -> create an IBuilderConfig -> create an ICaffeParser for the Caffe model, then check whether the hardware platform supports native FP16 or INT8.
  • Parse the Caffe model and build the network via constructNetwork.
bool SampleINT8::constructNetwork(SampleUniquePtr<nvinfer1::IBuilder>& builder,
    SampleUniquePtr<nvinfer1::INetworkDefinition>& network, SampleUniquePtr<nvinfer1::IBuilderConfig>& config,
    SampleUniquePtr<nvcaffeparser1::ICaffeParser>& parser, DataType dataType)
{
    mEngine = nullptr;
    const nvcaffeparser1::IBlobNameToTensor* blobNameToTensor
        = parser->parse(locateFile(mParams.prototxtFileName, mParams.dataDirs).c_str(),
            locateFile(mParams.weightsFileName, mParams.dataDirs).c_str(), *network,
            dataType == DataType::kINT8 ? DataType::kFLOAT : dataType);

    for (auto& s : mParams.outputTensorNames)
    {
        network->markOutput(*blobNameToTensor->find(s.c_str()));
    }

    // Calibrator life time needs to last until after the engine is built.
    std::unique_ptr<IInt8Calibrator> calibrator;

    config->setAvgTimingIterations(1);
    config->setMinTimingIterations(1);
    config->setMaxWorkspaceSize(1_GiB);
    config->setFlag(BuilderFlag::kDEBUG);
    if (dataType == DataType::kHALF)
    {
        config->setFlag(BuilderFlag::kFP16);
    }
    if (dataType == DataType::kINT8)
    {
        config->setFlag(BuilderFlag::kINT8);
    }
    builder->setMaxBatchSize(mParams.batchSize);

    if (dataType == DataType::kINT8)
    {
        MNISTBatchStream calibrationStream(mParams.calBatchSize, mParams.nbCalBatches, "train-images-idx3-ubyte",
            "train-labels-idx1-ubyte", mParams.dataDirs);
        calibrator.reset(new Int8EntropyCalibrator2<MNISTBatchStream>(
            calibrationStream, 0, mParams.networkName.c_str(), mParams.inputTensorNames[0].c_str()));
        config->setInt8Calibrator(calibrator.get());
    }
......
  • parser->parse parses the Caffe prototxt and the weights file and populates the network object; note that when dataType is INT8 the weights are still imported as FP32.
  • network->markOutput(*blobNameToTensor->find(s.c_str())) marks the network's output tensors.
  • config sets the timing iteration counts and the maximum workspace size for the network, and sets the builder flags according to dataType.
  • builder->setMaxBatchSize sets the maximum batch size of the network input.
  • If dataType is INT8, an MNISTBatchStream object is constructed to supply the calibration data needed for calibration.
  • calibrator.reset(new Int8EntropyCalibrator2<MNISTBatchStream>(calibrationStream, ...)) builds the calibration interface through which TensorRT fetches calibration data during quantization; BatchStream is analyzed in detail below.
  • config->setInt8Calibrator(calibrator.get()) attaches the calibration interface to the network's config.

BatchStream

class IBatchStream
{
public:
    virtual void reset(int firstBatch) = 0;
    virtual bool next() = 0;
    virtual void skip(int skipCount) = 0;
    virtual float* getBatch() = 0;
    virtual float* getLabels() = 0;
    virtual int getBatchesRead() const = 0;
    virtual int getBatchSize() const = 0;
    virtual nvinfer1::Dims getDims() const = 0;
};

class MNISTBatchStream : public IBatchStream
{
public:
    MNISTBatchStream(int batchSize, int maxBatches, const std::string& dataFile, const std::string& labelsFile,
        const std::vector<std::string>& directories)
        : mBatchSize{batchSize}
        , mMaxBatches{maxBatches}
        , mDims{3, 1, 28, 28} //!< We already know the dimensions of MNIST images.
    {
        readDataFile(locateFile(dataFile, directories));
        readLabelsFile(locateFile(labelsFile, directories));
    }

    void reset(int firstBatch) override
    {
        mBatchCount = firstBatch;
    }

    bool next() override
    {
        if (mBatchCount >= mMaxBatches)
        {
            return false;
        }
        ++mBatchCount;
        return true;
    }

    void skip(int skipCount) override
    {
        mBatchCount += skipCount;
    }

    float* getBatch() override
    {
        return mData.data() + (mBatchCount * mBatchSize * samplesCommon::volume(mDims));
    }

    float* getLabels() override
    {
        return mLabels.data() + (mBatchCount * mBatchSize);
    }

    int getBatchesRead() const override
    {
        return mBatchCount;
    }

    int getBatchSize() const override
    {
        return mBatchSize;
    }

    nvinfer1::Dims getDims() const override
    {
        return mDims;
    }

private:
    void readDataFile(const std::string& dataFilePath)
    {
        std::ifstream file{dataFilePath.c_str(), std::ios::binary};

        int magicNumber, numImages, imageH, imageW;
        file.read(reinterpret_cast<char*>(&magicNumber), sizeof(magicNumber));
        // All values in the MNIST files are big endian.
        magicNumber = samplesCommon::swapEndianness(magicNumber);
        assert(magicNumber == 2051 && "Magic Number does not match the expected value for an MNIST image set");

        // Read number of images and dimensions
        file.read(reinterpret_cast<char*>(&numImages), sizeof(numImages));
        file.read(reinterpret_cast<char*>(&imageH), sizeof(imageH));
        file.read(reinterpret_cast<char*>(&imageW), sizeof(imageW));

        numImages = samplesCommon::swapEndianness(numImages);
        imageH = samplesCommon::swapEndianness(imageH);
        imageW = samplesCommon::swapEndianness(imageW);

        // The MNIST data is made up of unsigned bytes, so we need to cast to float and normalize.
        int numElements = numImages * imageH * imageW;
        std::vector<uint8_t> rawData(numElements);
        file.read(reinterpret_cast<char*>(rawData.data()), numElements * sizeof(uint8_t));
        mData.resize(numElements);
        std::transform(
            rawData.begin(), rawData.end(), mData.begin(), [](uint8_t val) { return static_cast<float>(val) / 255.f; });
    }

    void readLabelsFile(const std::string& labelsFilePath)
    {
        std::ifstream file{labelsFilePath.c_str(), std::ios::binary};
        int magicNumber, numImages;
        file.read(reinterpret_cast<char*>(&magicNumber), sizeof(magicNumber));
        // All values in the MNIST files are big endian.
        magicNumber = samplesCommon::swapEndianness(magicNumber);
        assert(magicNumber == 2049 && "Magic Number does not match the expected value for an MNIST labels file");

        file.read(reinterpret_cast<char*>(&numImages), sizeof(numImages));
        numImages = samplesCommon::swapEndianness(numImages);

        std::vector<uint8_t> rawLabels(numImages);
        file.read(reinterpret_cast<char*>(rawLabels.data()), numImages * sizeof(uint8_t));
        mLabels.resize(numImages);
        std::transform(
            rawLabels.begin(), rawLabels.end(), mLabels.begin(), [](uint8_t val) { return static_cast<float>(val); });
    }

    int mBatchSize{0};
    int mBatchCount{0}; //!< The batch that will be read on the next invocation of next()
    int mMaxBatches{0};
    Dims mDims{};
    std::vector<float> mData{};
    std::vector<float> mLabels{};
};
  • The constructor receives the batch size and the number of batches of the calibration data set.
  • readDataFile reads the calibration data file: the number of images it contains and each image's height and width, from which the total amount of data to read is computed as numElements = numImages * imageH * imageW; the pixel data is then read into mData (normalized to [0, 1]).
  • readLabelsFile reads the calibration label file in the same way, except that the values are stored in mLabels.
  • next() advances mBatchCount to the next batch and returns whether another batch is available.
  • getBatch() returns the data pointer for the current batch, i.e. mData offset by (mBatchCount * mBatchSize * samplesCommon::volume(mDims)).
  • getLabels() returns the label pointer for the current batch, i.e. mLabels.data() offset by (mBatchCount * mBatchSize). A usage sketch follows this list.
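The following is a minimal sketch (not part of the sample) of how a consumer such as the calibrator drives MNISTBatchStream, assuming the class definition shown above is in scope: rewind the stream, then call next() and getBatch()/getLabels() until it is exhausted.

#include <iostream>

// Assumes the MNISTBatchStream class shown above.
void drainStream(MNISTBatchStream& stream)
{
    stream.reset(0); // rewind; mirrors the calibrator constructor's reset(firstBatch)
    while (stream.next()) // returns false once mMaxBatches batches have been consumed
    {
        const float* data = stream.getBatch();    // batchSize * 1 * 28 * 28 floats
        const float* labels = stream.getLabels(); // batchSize labels
        std::cout << "batch " << stream.getBatchesRead()
                  << ": first label = " << labels[0]
                  << ", first pixel = " << data[0] << std::endl;
    }
}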
template <typename TBatchStream>
class EntropyCalibratorImpl
{
public:
    EntropyCalibratorImpl(
        TBatchStream stream, int firstBatch, std::string networkName, const char* inputBlobName, bool readCache = true)
        : mStream{stream}
        , mCalibrationTableName("CalibrationTable" + networkName)
        , mInputBlobName(inputBlobName)
        , mReadCache(readCache)
    {
        nvinfer1::Dims dims = mStream.getDims();
        mInputCount = samplesCommon::volume(dims) * mStream.getBatchSize();
        CHECK(cudaMalloc(&mDeviceInput, mInputCount * sizeof(float)));
        mStream.reset(firstBatch);
    }

    virtual ~EntropyCalibratorImpl()
    {
        CHECK(cudaFree(mDeviceInput));
    }

    int getBatchSize() const
    {
        return mStream.getBatchSize();
    }

    bool getBatch(void* bindings[], const char* names[], int nbBindings)
    {
        if (!mStream.next())
        {
            return false;
        }
        CHECK(cudaMemcpy(mDeviceInput, mStream.getBatch(), mInputCount * sizeof(float), cudaMemcpyHostToDevice));
        assert(!strcmp(names[0], mInputBlobName));
        bindings[0] = mDeviceInput;
        return true;
    }

    const void* readCalibrationCache(size_t& length)
    {
        mCalibrationCache.clear();
        std::ifstream input(mCalibrationTableName, std::ios::binary);
        input >> std::noskipws;
        if (mReadCache && input.good())
        {
            std::copy(std::istream_iterator<char>(input), std::istream_iterator<char>(),
                std::back_inserter(mCalibrationCache));
        }
        length = mCalibrationCache.size();
        return length ? mCalibrationCache.data() : nullptr;
    }

    void writeCalibrationCache(const void* cache, size_t length)
    {
        std::ofstream output(mCalibrationTableName, std::ios::binary);
        output.write(reinterpret_cast<const char*>(cache), length);
    }

private:
    TBatchStream mStream;
    size_t mInputCount;
    std::string mCalibrationTableName;
    const char* mInputBlobName;
    bool mReadCache{true};
    void* mDeviceInput{nullptr};
    std::vector<char> mCalibrationCache;
};

//! \class Int8EntropyCalibrator2
//!
//! \brief Implements Entropy calibrator 2.
//!  CalibrationAlgoType is kENTROPY_CALIBRATION_2.
//!
template <typename TBatchStream>
class Int8EntropyCalibrator2 : public IInt8EntropyCalibrator2
{
public:
    Int8EntropyCalibrator2(
        TBatchStream stream, int firstBatch, const char* networkName, const char* inputBlobName, bool readCache = true)
        : mImpl(stream, firstBatch, networkName, inputBlobName, readCache)
    {
    }

    int getBatchSize() const override
    {
        return mImpl.getBatchSize();
    }

    bool getBatch(void* bindings[], const char* names[], int nbBindings) override
    {
        return mImpl.getBatch(bindings, names, nbBindings);
    }

    const void* readCalibrationCache(size_t& length) override
    {
        return mImpl.readCalibrationCache(length);
    }

    void writeCalibrationCache(const void* cache, size_t length) override
    {
        mImpl.writeCalibrationCache(cache, length);
    }

private:
    EntropyCalibratorImpl<TBatchStream> mImpl;
};

  • TensorRT requires the application to implement the calibration interface IInt8Calibrator; IInt8EntropyCalibrator2 is one of its derived interfaces. The sample therefore defines the class Int8EntropyCalibrator2, which implements IInt8EntropyCalibrator2 by delegating to EntropyCalibratorImpl, and EntropyCalibratorImpl supplies the methods IInt8Calibrator requires, namely:
  • In the constructor, mInputCount = samplesCommon::volume(dims) * mStream.getBatchSize(); computes the number of calibration values in one input batch, and cudaMalloc(&mDeviceInput, mInputCount * sizeof(float)) allocates device memory to hold that input data (a worked size calculation follows this list).
  • In getBatch, cudaMemcpy(mDeviceInput, mStream.getBatch(), mInputCount * sizeof(float), cudaMemcpyHostToDevice) copies one batch of data supplied by the BatchStream from the host to the device.
  • readCalibrationCache/writeCalibrationCache load and store the per-layer calibration thresholds in the CalibrationTable file so that later builds can reuse them; during buildEngineWithConfig TensorRT first calls readCalibrationCache, runs getBatch repeatedly to calibrate only if no usable cache is found, and then calls writeCalibrationCache.
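As a concrete check of the numbers involved: with the MNIST dims {1, 28, 28} and the sample's calBatchSize of 50, mInputCount = 1 * 28 * 28 * 50 = 39200, so the cudaMalloc in the constructor reserves 39200 * sizeof(float) = 156800 bytes (about 153 KiB) of device memory for each calibration batch.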
bool SampleINT8::constructNetwork(SampleUniquePtr<nvinfer1::IBuilder>& builder,
    SampleUniquePtr<nvinfer1::INetworkDefinition>& network, SampleUniquePtr<nvinfer1::IBuilderConfig>& config,
    SampleUniquePtr<nvcaffeparser1::ICaffeParser>& parser, DataType dataType)
{
......

    if (mParams.dlaCore >= 0)
    {
        samplesCommon::enableDLA(builder.get(), config.get(), mParams.dlaCore);
        if (mParams.batchSize > builder->getMaxDLABatchSize())
        {
            gLogError << "Requested batch size " << mParams.batchSize << " is greater than the max DLA batch size of "
                      << builder->getMaxDLABatchSize() << ". Reducing batch size accordingly." << std::endl;
            return false;
        }
    }

    mEngine = std::shared_ptr<nvinfer1::ICudaEngine>(
        builder->buildEngineWithConfig(*network, *config), samplesCommon::InferDeleter());
    if (!mEngine)
    {
        return false;
    }

    return true;
}
  • Enable NVIDIA DLA hardware acceleration if the program's input arguments request it.
  • Create the ICudaEngine used by the subsequent inference stage (a sketch of persisting this engine follows).
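The sample rebuilds the engine for every data type on every run. As a side note, a minimal sketch of persisting the built engine so that later runs can skip parsing and (for INT8) recalibration might look like the following; this is not part of the sample and only uses the standard ICudaEngine::serialize() API available in TensorRT 6:

#include <fstream>
#include <string>
#include "NvInfer.h"

// Hedged sketch: write the serialized engine blob to a file.
bool saveEngine(nvinfer1::ICudaEngine& engine, const std::string& path)
{
    nvinfer1::IHostMemory* blob = engine.serialize(); // caller owns the returned blob
    if (!blob)
    {
        return false;
    }
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(blob->data()), blob->size());
    blob->destroy(); // TensorRT 6/7 API; newer releases use delete instead
    return out.good();
}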

 

Inference

bool SampleINT8::infer(std::pair<float, float>& score, int firstScoreBatch, int nbScoreBatches)
{
    float ms{0.0f};

    // Create RAII buffer manager object
    samplesCommon::BufferManager buffers(mEngine, mParams.batchSize);

    auto context = SampleUniquePtr<nvinfer1::IExecutionContext>(mEngine->createExecutionContext());
    if (!context)
    {
        return false;
    }

    MNISTBatchStream batchStream(
        mParams.batchSize, nbScoreBatches, "train-images-idx3-ubyte", "train-labels-idx1-ubyte", mParams.dataDirs);
    batchStream.skip(firstScoreBatch);

    Dims outputDims = context->getEngine().getBindingDimensions(
        context->getEngine().getBindingIndex(mParams.outputTensorNames[0].c_str()));
    int outputSize = samplesCommon::volume(outputDims);
    int top1{0}, top5{0};
    float totalTime{0.0f};
......
  • Construct a BufferManager, which allocates the host and device buffers used to feed input data to the network and retrieve its output during inference; for a detailed analysis of the BufferManager data structure see my post TensorRT sampleMNIST 详解.
  • Create the IExecutionContext object needed for inference.
  • Create a BatchStream over the scoring data set to supply its data and labels.
  • Query the dimensions of the network's output tensor (outputDims) and compute outputSize from them; for the MNIST LeNet used here the "prob" tensor holds 10 class probabilities, so outputSize is 10.
bool SampleINT8::processInput(const samplesCommon::BufferManager& buffers, const float* data)
{
    // Fill data buffer
    float* hostDataBuffer = static_cast<float*>(buffers.getHostBuffer(mParams.inputTensorNames[0]));
    std::memcpy(hostDataBuffer, data, mParams.batchSize * samplesCommon::volume(mInputDims) * sizeof(float));
    return true;
}

......

bool SampleINT8::infer(std::pair<float, float>& score, int firstScoreBatch, int nbScoreBatches)
{
......

while (batchStream.next())
    {
        // Read the input data into the managed buffers
        assert(mParams.inputTensorNames.size() == 1);
        if (!processInput(buffers, batchStream.getBatch()))
        {
            return false;
        }

        // Memcpy from host input buffers to device input buffers
        buffers.copyInputToDevice();

        cudaStream_t stream;
        CHECK(cudaStreamCreate(&stream));

        // Use CUDA events to measure inference time
        cudaEvent_t start, end;
        CHECK(cudaEventCreateWithFlags(&start, cudaEventBlockingSync));
        CHECK(cudaEventCreateWithFlags(&end, cudaEventBlockingSync));
        cudaEventRecord(start, stream);

        bool status = context->enqueue(mParams.batchSize, buffers.getDeviceBindings().data(), stream, nullptr);
        if (!status)
        {
            return false;
        }

        cudaEventRecord(end, stream);
        cudaEventSynchronize(end);
        cudaEventElapsedTime(&ms, start, end);
        cudaEventDestroy(start);
        cudaEventDestroy(end);

        totalTime += ms;

        // Memcpy from device output buffers to host output buffers
        buffers.copyOutputToHost();

        CHECK(cudaStreamDestroy(stream));

        top1 += calculateScore(buffers, batchStream.getLabels(), mParams.batchSize, outputSize, 1);
        top5 += calculateScore(buffers, batchStream.getLabels(), mParams.batchSize, outputSize, 5);

        if (batchStream.getBatchesRead() % 100 == 0)
        {
            gLogInfo << "Processing next set of max 100 batches" << std::endl;
        }
    }

    int imagesRead = batchStream.getBatchesRead() * mParams.batchSize;
    score.first = float(top1) / float(imagesRead);
    score.second = float(top5) / float(imagesRead);

    gLogInfo << "Top1: " << score.first << ", Top5: " << score.second << std::endl;
    gLogInfo << "Processing " << imagesRead << " images averaged " << totalTime / imagesRead << " ms/image and "
             << totalTime / batchStream.getBatchesRead() << " ms/batch." << std::endl;

    return true;
  • The while loop keeps reading input data from the scoring data set, batchSize images at a time.
  • processInput copies the input data into the BufferManager's host buffer.
  • buffers.copyInputToDevice() transfers the host buffer to the GPU device buffer, feeding the input into the network.
  • cudaStreamCreate creates the CUDA stream on which the work executes.
  • cudaEventCreateWithFlags creates the start and end events used for timing.
  • context->enqueue launches the network inference on this batch of the scoring data set.
  • cudaEventSynchronize waits for the asynchronous CUDA inference work to finish.
  • cudaEventElapsedTime computes the time taken by this inference run.
  • buffers.copyOutputToHost() transfers the inference output from the GPU device back to the host.
  • calculateScore computes the accuracy of this batch; it works as follows.
int SampleINT8::calculateScore(
    const samplesCommon::BufferManager& buffers, float* labels, int batchSize, int outputSize, int threshold)
{
    float* probs = static_cast<float*>(buffers.getHostBuffer(mParams.outputTensorNames[0]));

    int success = 0;
    for (int i = 0; i < batchSize; i++)
    {
        float *prob = probs + outputSize * i, correct = prob[(int) labels[i]];

        int better = 0;
        for (int j = 0; j < outputSize; j++)
        {
            if (prob[j] >= correct)
            {
                better++;
            }
        }
        if (better <= threshold)
        {
            success++;
        }
    }
    return success;
}
  • probs points to batchSize consecutive blocks of outputSize class probabilities, one block per image, while labels[i] holds the ground-truth class index of image i, so prob[(int) labels[i]] is the probability the network assigned to the correct class (the original post illustrates this layout with a figure, omitted here).
  • A note on how better and success are computed in the for loop:
  • calculateScore essentially counts how many results in one batch of inference outputs are correct under the Top-1/Top-5 rule: for each image, better counts how many class probabilities are greater than or equal to the probability of the true class, and the image counts as a success when better <= threshold (1 for Top-1, 5 for Top-5).
  • calculateScore therefore returns, for one batch of input images, the number of outputs whose true class ranks within the Top-1/Top-5 probabilities (a toy example follows).
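A toy, self-contained illustration of this counting rule (not taken from the sample), assuming outputSize = 10 and a made-up probability vector:

#include <iostream>

int main()
{
    const int outputSize = 10;
    // Hypothetical softmax output for one image whose true label is 3.
    float prob[outputSize] = {0.01f, 0.02f, 0.30f, 0.25f, 0.05f, 0.20f, 0.05f, 0.04f, 0.05f, 0.03f};
    const int label = 3;

    float correct = prob[label]; // 0.25
    int better = 0;
    for (int j = 0; j < outputSize; j++)
    {
        if (prob[j] >= correct)
        {
            better++; // counts prob[2] = 0.30 and prob[3] itself, so better ends up as 2
        }
    }
    std::cout << "Top-1 hit: " << (better <= 1) << std::endl; // prints 0: class 2 outranks the true label
    std::cout << "Top-5 hit: " << (better <= 5) << std::endl; // prints 1: the true label is within the top 5
    return 0;
}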

 

bool SampleINT8::infer(std::pair<float, float>& score, int firstScoreBatch, int nbScoreBatches)
{
......

    int imagesRead = batchStream.getBatchesRead() * mParams.batchSize;
    score.first = float(top1) / float(imagesRead);
    score.second = float(top5) / float(imagesRead);

    gLogInfo << "Top1: " << score.first << ", Top5: " << score.second << std::endl;
    gLogInfo << "Processing " << imagesRead << " images averaged " << totalTime / imagesRead << " ms/image and "
             << totalTime / batchStream.getBatchesRead() << " ms/batch." << std::endl;

    return true;
}
  • In total, imagesRead images were fed through the network for inference; the fractions of correct Top-1/Top-5 results among all input images are computed and written to the log, as in the program run log below. For example, the FP32 run scores 400 batches of 100 images (40000 images), so Top1 = 0.9904 corresponds to 39616 correct Top-1 predictions.
&&&& RUNNING TensorRT.sample_int8 # ./sample_int8 mnist
[I] FP32 run:400 batches of size 100 starting at 100
[I] Processing next set of max 100 batches
[I] Processing next set of max 100 batches
[I] Processing next set of max 100 batches
[I] Processing next set of max 100 batches
[I] Top1: 0.9904, Top5: 1
[I] Processing 40000 images averaged 0.00170236 ms/image and 0.170236 ms/batch.
[I] FP16 run:400 batches of size 100 starting at 100
[I] Processing next set of max 100 batches
[I] Processing next set of max 100 batches
[I] Processing next set of max 100 batches
[I] Processing next set of max 100 batches
[I] Top1: 0.9904, Top5: 1
[I] Processing 40000 images averaged 0.00128872 ms/image and 0.128872 ms/batch.

[I] INT8 run:400 batches of size 100 starting at 100
[I] Processing next set of max 100 batches
[I] Processing next set of max 100 batches
[I] Processing next set of max 100 batches
[I] Processing next set of max 100 batches
[I] Top1: 0.9908, Top5: 1
[I] Processing 40000 images averaged 0.000946117 ms/image and 0.0946117 ms/batch.
&&&& PASSED TensorRT.sample_int8 # ./sample_int8 mnist

 

Resource Cleanup

int main(int argc, char** argv)
{
......

    auto isApproximatelyEqual = [](float a, float b, double tolerance) { return (std::abs(a - b) <= tolerance); };
    double fp16tolerance{0.5}, int8tolerance{1.0};

    if (scores[1].first != 0.0f && !isApproximatelyEqual(scores[0].first, scores[1].first, fp16tolerance))
    {
        gLogError << "FP32(" << scores[0].first << ") and FP16(" << scores[1].first
                  << ") Top1 accuracy differ by more than " << fp16tolerance << "." << std::endl;
        return gLogger.reportFail(sampleTest);
    }
    if (scores[2].first != 0.0f && !isApproximatelyEqual(scores[0].first, scores[2].first, int8tolerance))
    {
        gLogError << "FP32(" << scores[0].first << ") and Int8(" << scores[2].first
                  << ") Top1 accuracy differ by more than " << int8tolerance << "." << std::endl;
        return gLogger.reportFail(sampleTest);
    }
    if (scores[1].second != 0.0f && !isApproximatelyEqual(scores[0].second, scores[1].second, fp16tolerance))
    {
        gLogError << "FP32(" << scores[0].second << ") and FP16(" << scores[1].second
                  << ") Top5 accuracy differ by more than " << fp16tolerance << "." << std::endl;
        return gLogger.reportFail(sampleTest);
    }
    if (scores[2].second != 0.0f && !isApproximatelyEqual(scores[0].second, scores[2].second, int8tolerance))
    {
        gLogError << "FP32(" << scores[0].second << ") and INT8(" << scores[2].second
                  << ") Top5 accuracy differ by more than " << int8tolerance << "." << std::endl;
        return gLogger.reportFail(sampleTest);
    }

    if (!sample.teardown())
    {
        return gLogger.reportFail(sampleTest);
    }

    return gLogger.reportPass(sampleTest);
}
  • Check whether the Top-1 and Top-5 accuracy differences between the data types stay within the preset tolerances (fp16tolerance, int8tolerance).
  • Release the program's resources via teardown (a hedged sketch of teardown follows).
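teardown itself is not listed in this post. For a Caffe-parser based sample, a minimal sketch of it, under the assumption that it only needs to release parser-wide state, could be:

bool SampleINT8::teardown()
{
    // Release the protobuf state held by the Caffe parser library.
    nvcaffeparser1::shutdownProtobufLibrary();
    return true;
}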