Output when encoding to H.264 data with NvEncoder

NVENCSTATUS NVENCAPI NvEncEncodePicture(void * encoder,
  NV_ENC_PIC_PARAMS * encodePicParams
 )

Submit an input picture for encoding.

This function is used to submit an input picture buffer for encoding. The encoding parameters are passed using *encodePicParams which is a pointer to the _NV_ENC_PIC_PARAMS structure.

If the client has set NV_ENC_INITIALIZE_PARAMS::enablePTD to 0, then it must send a valid value for the following fields.
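A minimal sketch of a single submission might look like the following (the session handle `hEncoder`, the buffers `hInput` and `hBitstream`, the event `hEvent`, and the populated function list `nvenc` are assumed to have been set up earlier; the field names are from nvEncodeAPI.h):

```cpp
// Sketch only: assumes an opened session, registered input/output buffers,
// and a populated NV_ENCODE_API_FUNCTION_LIST named nvenc.
NV_ENC_PIC_PARAMS picParams = { 0 };
picParams.version         = NV_ENC_PIC_PARAMS_VER;
picParams.inputBuffer     = hInput;
picParams.outputBitstream = hBitstream;
picParams.bufferFmt       = NV_ENC_BUFFER_FORMAT_NV12;
picParams.inputWidth      = 640;
picParams.inputHeight     = 480;
picParams.pictureStruct   = NV_ENC_PIC_STRUCT_FRAME;
picParams.completionEvent = hEvent;   // required when enableEncodeAsync == 1

NVENCSTATUS status = nvenc.nvEncEncodePicture(hEncoder, &picParams);
```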

Asynchronous Encoding
If the client has enabled the asynchronous mode of encoding by setting NV_ENC_INITIALIZE_PARAMS::enableEncodeAsync to 1 in the NvEncInitializeEncoder() API, then the client must send a valid NV_ENC_PIC_PARAMS::completionEvent. In the asynchronous mode of operation, the client can queue the NvEncEncodePicture() API commands from the main thread and then queue the output buffers to be processed in a secondary worker thread. Before locking the output buffers in the secondary thread, the client must wait on the NV_ENC_PIC_PARAMS::completionEvent it queued in the NvEncEncodePicture() API call. The client must always process the completion events and the output buffers in the same order in which they were submitted for encoding. The NvEncodeAPI interface is responsible for any re-ordering required for B frames and will always ensure that the encoded bitstream data is written in the same order in which the output buffers were submitted.
  The example below shows how asynchronous encoding works in the case of 1 B frame
  ------------------------------------------------------------------------
  Suppose the client allocated 4 input buffers (I1, I2, ...), 4 output buffers (O1, O2, ...)
  and 4 completion events (E1, E2, ...). The NvEncodeAPI interface will need to
  keep a copy of the input buffers for re-ordering, and it allocates the following
  internal buffers (NvI1, NvI2, ...). These internal buffers are managed by NvEncodeAPI,
  and the client is not responsible for allocating or freeing the memory of
  the internal buffers.

  a) The client main thread will queue the following encode frame calls.
  Note that the picture type is unknown to the client; the decision is taken by the
  NvEncodeAPI interface. The client should pass the ::_NV_ENC_PIC_PARAMS parameter
  consisting of the allocated input buffer, output buffer and output event in successive
  ::NvEncEncodePicture() API calls, along with the other required encode picture params.
  For example:
  1st EncodePicture parameters - (I1, O1, E1)
  2nd EncodePicture parameters - (I2, O2, E2)
  3rd EncodePicture parameters - (I3, O3, E3)

  b) The NvEncodeAPI SW will receive the following encode commands from the client.
  The left side shows the input from the client in the form (Input Buffer, Output Buffer,
  Output Event). The right-hand side shows a possible picture type decision taken by
  the NvEncodeAPI interface.
  (I1, O1, E1)    ---P1 Frame
  (I2, O2, E2)    ---B2 Frame
  (I3, O3, E3)    ---P3 Frame

  c) The NvEncodeAPI interface will make a copy of the input buffers to its internal
   buffers for re-ordering. These copies are done as part of the NvEncEncodePicture
   function call from the client, and the NvEncodeAPI interface is responsible for
   synchronization of the copy operation with the actual encoding operation.
   I1 --> NvI1  
   I2 --> NvI2 
   I3 --> NvI3

  d) After returning from the ::NvEncEncodePicture() call, the client must queue the output
   bitstream processing work to the secondary thread. The output bitstream processing
   for asynchronous mode consists of first waiting on a completion event (E1, E2, ...)
   and then locking the output bitstream buffer (O1, O2, ...) to read the encoded
   data. The work queued to the secondary thread by the client is in the following order:
   (I1, O1, E1)
   (I2, O2, E2)
   (I3, O3, E3)
   Note they are in the same order in which the client called the ::NvEncEncodePicture() API
   in step a).

  e) The NvEncodeAPI interface will do the re-ordering such that the encoder HW will receive
  the following encode commands:
  (NvI1, O1, E1)   ---P1 Frame
  (NvI3, O2, E2)   ---P3 Frame
  (NvI2, O3, E3)   ---B2 frame

  f) After the encoding operations are completed, the events will be signalled
  by the NvEncodeAPI interface in the following order:
  (O1, E1) --- P1 Frame, output bitstream copied to O1 and event E1 signalled.
  (O2, E2) --- P3 Frame, output bitstream copied to O2 and event E2 signalled.
  (O3, E3) --- B2 Frame, output bitstream copied to O3 and event E3 signalled.

  g) The client must lock the bitstream data using the ::NvEncLockBitstream() API in
   the order O1, O2, O3 to read the encoded data, after waiting for the events
   to be signalled in the same order, i.e. E1, E2 and E3. The output processing is
   done in the secondary thread in the following order:
   Waits on E1, copies encoded bitstream from O1
   Waits on E2, copies encoded bitstream from O2
   Waits on E3, copies encoded bitstream from O3

  - Note the client will receive the event signalling and the output buffers in the
   same order in which they were submitted for encoding.
  - Note the LockBitstream result will contain a picture type field which notifies the
   output picture type to the client.
  - Note the input buffer, output buffer and output completion event are free to be
   reused once the NvEncodeAPI interface has signalled the event and the client has
   copied the data from the output buffer.
Synchronous Encoding
The client can enable the synchronous mode of encoding by setting NV_ENC_INITIALIZE_PARAMS::enableEncodeAsync to 0 in the NvEncInitializeEncoder() API. The NvEncodeAPI interface may return the NV_ENC_ERR_NEED_MORE_INPUT error code for some NvEncEncodePicture() API calls when NV_ENC_INITIALIZE_PARAMS::enablePTD is set to 1, but the client must not treat it as a fatal error. The NvEncodeAPI interface might not be able to submit an input picture buffer for encoding immediately due to re-ordering for B frames. The NvEncodeAPI interface cannot submit an input picture which has been chosen to be encoded as a B frame, as it waits for a backward reference from a temporally subsequent frame. Such an input picture is buffered internally and waits for more input pictures to arrive. The client must not call the NvEncLockBitstream() API on output buffers whose NvEncEncodePicture() API call returned NV_ENC_ERR_NEED_MORE_INPUT. The client must wait for the NvEncodeAPI interface to return NV_ENC_SUCCESS before locking the output bitstreams to read the encoded bitstream data. The following example explains the scenario for synchronous encoding with 1 B frame.
 The example below shows how synchronous encoding works in the case of 1 B frame
 -----------------------------------------------------------------------------
 Suppose the client allocated 4 input buffers (I1, I2, ...), 4 output buffers (O1, O2, ...)
 and 4 completion events (E1, E2, ...). The NvEncodeAPI interface will need to
 keep a copy of the input buffers for re-ordering, and it allocates the following
 internal buffers (NvI1, NvI2, ...). These internal buffers are managed by NvEncodeAPI,
 and the client is not responsible for allocating or freeing the memory of
 the internal buffers.

 The client calls the ::NvEncEncodePicture() API with input buffer I1 and output buffer O1.
 The NvEncodeAPI decides to encode I1 as a P frame, submits it to the encoder
 HW and returns ::NV_ENC_SUCCESS.
 The client can now read the encoded data by locking the output O1 with the
 ::NvEncLockBitstream() API.

 The client calls the ::NvEncEncodePicture() API with input buffer I2 and output buffer O2.
 The NvEncodeAPI decides to encode I2 as a B frame, buffers I2 by copying it
 to an internal buffer, and returns ::NV_ENC_ERR_NEED_MORE_INPUT.
 The error is not fatal; it notifies the client that it cannot read the encoded
 data by locking the output O2 with the ::NvEncLockBitstream() API without submitting
 more work to the NvEncodeAPI interface.
  
 The client calls ::NvEncEncodePicture() with input buffer I3 and output buffer O3.
 The NvEncodeAPI decides to encode I3 as a P frame and first submits I3 for
 encoding, which will be used as the backward reference frame for I2.
 The NvEncodeAPI then submits I2 for encoding and returns ::NV_ENC_SUCCESS. Both
 submissions are part of the same ::NvEncEncodePicture() function call.
 The client can now read the encoded data for both frames by locking the output
 O2 followed by O3, by calling the ::NvEncLockBitstream() API.

 The client must always lock the outputs in the same order in which they were
 submitted, to receive the encoded bitstream in the correct encoding order.
Parameters:
[in] encoder: Pointer to the NvEncodeAPI interface.
[in,out] encodePicParams: Pointer to the _NV_ENC_PIC_PARAMS structure.

The following example code uses CUDA to encode frames captured from a camera into an H.264 stream, using the NVIDIA Video Codec SDK and the OpenCV library. It is based on the `NvEncoderCuda` helper class shipped in the SDK's Samples directory; preset GUIDs and the exact `CreateDefaultEncoderParams` signature vary between SDK versions, so treat it as a sketch:

```cpp
#include <cstdio>
#include <iostream>
#include <vector>
#include <cuda.h>
#include <opencv2/opencv.hpp>
#include "NvEncoder/NvEncoderCuda.h"

int main()
{
    // Read frames from the default camera
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) {
        std::cerr << "Failed to open camera!" << std::endl;
        return -1;
    }
    int nWidth = 640, nHeight = 480;
    cap.set(cv::CAP_PROP_FRAME_WIDTH, nWidth);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, nHeight);

    // Create a CUDA context for the encoder session
    cuInit(0);
    CUdevice cuDevice = 0;
    cuDeviceGet(&cuDevice, 0);
    CUcontext cuContext = nullptr;
    cuCtxCreate(&cuContext, 0, cuDevice);

    // The SDK sample class takes the input format up front; 4-channel RGBA
    // input avoids a manual BGR->NV12 conversion here
    NvEncoderCuda enc(cuContext, nWidth, nHeight, NV_ENC_BUFFER_FORMAT_ABGR);

    NV_ENC_INITIALIZE_PARAMS initializeParams = { NV_ENC_INITIALIZE_PARAMS_VER };
    NV_ENC_CONFIG encodeConfig = { NV_ENC_CONFIG_VER };
    initializeParams.encodeConfig = &encodeConfig;
    // SDK >= 10 signature; older SDKs omit the tuning-info argument
    enc.CreateDefaultEncoderParams(&initializeParams, NV_ENC_CODEC_H264_GUID,
                                   NV_ENC_PRESET_P4_GUID,
                                   NV_ENC_TUNING_INFO_HIGH_QUALITY);
    initializeParams.frameRateNum = 30;              // 30 fps
    initializeParams.frameRateDen = 1;
    encodeConfig.gopLength = 30;                     // GOP size
    encodeConfig.rcParams.averageBitRate = 1000000;  // 1 Mbps
    enc.CreateEncoder(&initializeParams);

    cv::Mat frame, frameRGBA;
    std::vector<std::vector<uint8_t>> vPacket;
    while (true) {
        cap >> frame;                    // read one frame from the camera
        if (frame.empty())
            break;
        // Match the 4-channel buffer format chosen above (channel order may
        // need adjusting for your camera)
        cv::cvtColor(frame, frameRGBA, cv::COLOR_BGR2RGBA);

        // Copy the host frame into the encoder's next device input buffer
        const NvEncInputFrame* encoderInputFrame = enc.GetNextInputFrame();
        NvEncoderCuda::CopyToDeviceFrame(cuContext, frameRGBA.data, 0,
                                         (CUdeviceptr)encoderInputFrame->inputPtr,
                                         (int)encoderInputFrame->pitch,
                                         enc.GetEncodeWidth(), enc.GetEncodeHeight(),
                                         CU_MEMORYTYPE_HOST,
                                         encoderInputFrame->bufferFormat,
                                         encoderInputFrame->chromaOffsets,
                                         encoderInputFrame->numChromaPlanes);

        // Encode; vPacket receives zero or more finished H.264 packets
        // (it can be empty while frames are buffered for B-frame re-ordering)
        enc.EncodeFrame(vPacket);
        for (const auto& packet : vPacket)
            fwrite(packet.data(), 1, packet.size(), stdout);
    }

    // Flush frames still buffered inside the encoder
    enc.EndEncode(vPacket);
    for (const auto& packet : vPacket)
        fwrite(packet.data(), 1, packet.size(), stdout);

    enc.DestroyEncoder();
    cuCtxDestroy(cuContext);
    return 0;
}
```

When compiling, link against the NVIDIA Video Codec SDK (including its sample helper sources), OpenCV, and the CUDA driver library. When run, the program reads frames from the camera and writes the encoded H.264 elementary stream to standard output; you can redirect the output, for example:

```
./encode | ffmpeg -f h264 -i - output.mp4
```

This reads the H.264 stream from standard input and converts it into an MP4 file.