Notes on the live555 + RTSP + FFmpeg client decoding workflow

On Ubuntu (Linux), decoding with FFmpeg.

This post covers only the client side (the server side and multithreading are not discussed).
Looking through live555, the only bundled example that comes close is openRTSP.

The first article's title is:

Live555 + FFMPEG + ddraw: receiving, decoding, and displaying an H.264 stream

The link is:

smilestone's article

Article body:
1) H.264 stream reception uses live555. live555 delivers the SPS, PPS, I-frames, and P-frames as separate units, so after receiving each buffer you must assemble complete frames. live555 itself reassembles I- and P-frames from RTP packets, but before handing anything to FFmpeg you must prepend the 00 00 00 01 start code to every frame, and an I-frame must be passed to FFmpeg together with the SPS and PPS or it cannot be decoded. So the live555 buffers need a frame-assembly step;

live555's core job is handing you the buffer. See the openRTSP and testRTSPClient examples: openRTSP has FileSink and H264VideoFileSink, and testRTSPClient has DummySink. You can modify that sink, assemble frames inside it, and then call FFmpeg to decode;

class DummySink: public MediaSink {
public:
  static DummySink* createNew(UsageEnvironment& env,
      MediaSubsession& subsession, // identifies the kind of data that's being received
      char const* streamId = NULL); // identifies the stream itself (optional)

private:
  DummySink(UsageEnvironment& env, MediaSubsession& subsession, char const* streamId);
    // called only by "createNew()"
  virtual ~DummySink();

  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned numTruncatedBytes,
                                struct timeval presentationTime,
                                unsigned durationInMicroseconds);

  void afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                         struct timeval presentationTime, unsigned durationInMicroseconds);

private:
  // redefined virtual functions:
  virtual Boolean continuePlaying();

private:
  u_int8_t* fReceiveBuffer;
  MediaSubsession& fSubsession;
  char* fStreamId;
};

// The above is the DummySink declaration from the live555 source (testRTSPClient.cpp).
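
Before handing data to FFmpeg, the frame-assembly step described in 1) could look roughly like the sketch below. This is a sketch under assumptions, not the article's code: it assumes the SPS/PPS bytes are already at hand (e.g. decoded from the SDP, as part 4 below shows) and that live555 delivers NAL units without start codes.

#include <cstdint>
#include <vector>

static const uint8_t kStartCode[4] = {0x00, 0x00, 0x00, 0x01};

// Prepend the Annex-B start code to each NAL unit; for an IDR frame, also
// put SPS and PPS (each with its own start code) in front, so FFmpeg can
// decode the frame in isolation.
std::vector<uint8_t> assembleFrame(const uint8_t* nal, unsigned nalSize,
                                   const std::vector<uint8_t>& sps,
                                   const std::vector<uint8_t>& pps) {
  std::vector<uint8_t> out;
  if (nalSize > 0 && (nal[0] & 0x1f) == 5) { // 5 = IDR slice
    out.insert(out.end(), kStartCode, kStartCode + 4);
    out.insert(out.end(), sps.begin(), sps.end());
    out.insert(out.end(), kStartCode, kStartCode + 4);
    out.insert(out.end(), pps.begin(), pps.end());
  }
  out.insert(out.end(), kStartCode, kStartCode + 4);
  out.insert(out.end(), nal, nal + nalSize);
  return out;
}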

2) Decoding with FFmpeg. Nothing special to say here; my earlier posts have examples. We can rewrite DummySink::afterGettingFrame as follows:


void DummySink::afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                                  struct timeval presentationTime, unsigned /*durationInMicroseconds*/) {

  unsigned char const start_code[4] = {0x00, 0x00, 0x00, 0x01};
  // ... (frame assembly elided) ...
  // pH264ReceiveBuff is the buffer holding the assembled frame; hand it to FFmpeg here

  int imageWidth = 0;
  int imageHeight = 0;
  if (H264Status == H264STATUS_IFRAME || H264Status == H264STATUS_PFRAME)
  {
    // wrapped H.264 decode function
    bool bRet = H264DecodeClass.H264DecodeProcess((unsigned char*)pH264ReceiveBuff, frameSize,
                                                  (unsigned char*)DecodeBuff, imageWidth, imageHeight);

    if (bRet && imageWidth > 0 && imageHeight > 0)
    {
      TRACE("received a frame, frameSize=%d\n", frameSize);
      // call DirectDraw here to display the image
    }
  }

  // Then continue, to request the next frame of data:
  continuePlaying();
}
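
H264DecodeClass above is the author's own wrapper. Here is a minimal sketch of what such a wrapper might look like with FFmpeg's send/receive API (FFmpeg 3.1 or newer; the class name and layout are assumptions, and error handling is minimal):

extern "C" {
#include <libavcodec/avcodec.h>
}

class H264Decoder {
public:
  bool open() {
    const AVCodec* codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (codec == NULL) return false;
    ctx_ = avcodec_alloc_context3(codec);
    frame_ = av_frame_alloc();
    pkt_ = av_packet_alloc();
    return ctx_ && frame_ && pkt_ && avcodec_open2(ctx_, codec, NULL) == 0;
  }

  // "buf" must already start with the 00 00 00 01 start code (Annex B).
  // Returns false when the decoder needs more input before producing a frame.
  bool decode(unsigned char* buf, int size, int& width, int& height) {
    pkt_->data = buf;
    pkt_->size = size;
    if (avcodec_send_packet(ctx_, pkt_) < 0) return false;
    if (avcodec_receive_frame(ctx_, frame_) < 0) return false;
    width = frame_->width;
    height = frame_->height;
    // frame_->data[0..2] now hold the YUV420P planes (Y, U, V).
    return true;
  }

private:
  AVCodecContext* ctx_ = NULL;
  AVFrame* frame_ = NULL;
  AVPacket* pkt_ = NULL;
};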

3) Displaying yuv420p directly with DirectDraw: create two surfaces, a primary surface and an off-screen surface; copy the yuv420p data to the off-screen surface, then Blt it to the primary surface to draw. This is also covered in my blog; see my earlier article on DirectDraw YUV video display.

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

2.

Post link: live555 + ffmpeg decoding an H.264 RTSP stream

Post body:
Since I needed a web client that decodes an H.264 RTSP stream, my first thought was live555 + ffmpeg: live555 receives the RTSP stream, and ffmpeg decodes the H.264 for display. Looking through live555, the only bundled example that comes close is openRTSP, but that one merely saves the RTSP stream to a file. I first wrote an ffmpeg program that decodes an H.264 file and debugged it successfully; now I only need to modify the live555 example and join the two programs. The key is to find the spot where openRTSP writes to its file: grab the data there and decode and display it instead.


The project keeps me busy, so I can only note this down briefly.
main() is in playCommon.cpp. The flow of main() is simple and not much different from the server side: create the task scheduler, create the usage environment, process the user's arguments (the RTSP URL), create an RTSPClient instance, send the first RTSP request (either OPTIONS or DESCRIBE), and enter the event loop.
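
A compressed sketch of that sequence (modeled on testRTSPClient.cpp rather than copied from playCommon.cpp; the continueAfterDESCRIBE response handler is assumed to be defined elsewhere):

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main(int argc, char** argv) {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  // argv[1] is assumed to be the RTSP URL.
  RTSPClient* rtspClient = RTSPClient::createNew(*env, argv[1], 1 /*verbosity*/, "rtspClientApp");

  // The first request starts the RTSP state machine; the response is handled
  // asynchronously in its callback (continueAfterDESCRIBE, defined elsewhere).
  rtspClient->sendDescribeCommand(continueAfterDESCRIBE);

  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}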

Let's focus on the creation of the RTPSource, which happens in MediaSubsession::createSourceObjects():

Boolean MediaSubsession::createSourceObjects(int useSpecialRTPoffset) {  
  do {  
    // First, check "fProtocolName"  
    if (strcmp(fProtocolName, "UDP") == 0) {  
      // A UDP-packetized stream (*not* a RTP stream)  
      fReadSource = BasicUDPSource::createNew(env(), fRTPSocket);  
      fRTPSource = NULL; // Note!  
        
      if (strcmp(fCodecName, "MP2T") == 0) { // MPEG-2 Transport Stream  
    fReadSource = MPEG2TransportStreamFramer::createNew(env(), fReadSource);  
    // this sets "durationInMicroseconds" correctly, based on the PCR values  
      }  
    } else {  
      // Check "fCodecName" against the set of codecs that we support,  
      // and create our RTP source accordingly  
      // (Later make this code more efficient, as this set grows #####)  
      // (Also, add more fmts that can be implemented by SimpleRTPSource#####)  
      Boolean createSimpleRTPSource = False; // by default; can be changed below  
      Boolean doNormalMBitRule = False; // default behavior if "createSimpleRTPSource" is True  
      if (strcmp(fCodecName, "QCELP") == 0) { // QCELP audio  
    fReadSource =  
      QCELPAudioRTPSource::createNew(env(), fRTPSocket, fRTPSource,  
                     fRTPPayloadFormat,  
                     fRTPTimestampFrequency);  
    // Note that fReadSource will differ from fRTPSource in this case  
      } else if (strcmp(fCodecName, "AMR") == 0) { // AMR audio (narrowband)  
    fReadSource =  
      AMRAudioRTPSource::createNew(env(), fRTPSocket, fRTPSource,  
                       fRTPPayloadFormat, 0 /*isWideband*/,  
                       fNumChannels, fOctetalign, fInterleaving,  
                       fRobustsorting, fCRC);  
    // Note that fReadSource will differ from fRTPSource in this case  
      } else if (strcmp(fCodecName, "AMR-WB") == 0) { // AMR audio (wideband)  
    fReadSource =  
      AMRAudioRTPSource::createNew(env(), fRTPSocket, fRTPSource,  
                       fRTPPayloadFormat, 1 /*isWideband*/,  
                       fNumChannels, fOctetalign, fInterleaving,  
                       fRobustsorting, fCRC);  
    // Note that fReadSource will differ from fRTPSource in this case  
      } else if (strcmp(fCodecName, "MPA") == 0) { // MPEG-1 or 2 audio  
    fReadSource = fRTPSource  
      = MPEG1or2AudioRTPSource::createNew(env(), fRTPSocket,  
                          fRTPPayloadFormat,  
                          fRTPTimestampFrequency);  
      } else if (strcmp(fCodecName, "MPA-ROBUST") == 0) { // robust MP3 audio  
    fReadSource = fRTPSource  
      = MP3ADURTPSource::createNew(env(), fRTPSocket, fRTPPayloadFormat,  
                       fRTPTimestampFrequency);  
    if (fRTPSource == NULL) break;  
      
    if (!fReceiveRawMP3ADUs) {  
      // Add a filter that deinterleaves the ADUs after depacketizing them:  
      MP3ADUdeinterleaver* deinterleaver  
        = MP3ADUdeinterleaver::createNew(env(), fRTPSource);  
      if (deinterleaver == NULL) break;  
      
      // Add another filter that converts these ADUs to MP3 frames:  
      fReadSource = MP3FromADUSource::createNew(env(), deinterleaver);  
    }  
      } else if (strcmp(fCodecName, "X-MP3-DRAFT-00") == 0) {  
    // a non-standard variant of "MPA-ROBUST" used by RealNetworks  
    // (one 'ADU'ized MP3 frame per packet; no headers)  
    fRTPSource  
      = SimpleRTPSource::createNew(env(), fRTPSocket, fRTPPayloadFormat,  
                       fRTPTimestampFrequency,  
                       "audio/MPA-ROBUST" /*hack*/);  
    if (fRTPSource == NULL) break;  
      
    // Add a filter that converts these ADUs to MP3 frames:  
    fReadSource = MP3FromADUSource::createNew(env(), fRTPSource,  
                          False /*no ADU header*/);  
      } else if (strcmp(fCodecName, "MP4A-LATM") == 0) { // MPEG-4 LATM audio  
    fReadSource = fRTPSource  
      = MPEG4LATMAudioRTPSource::createNew(env(), fRTPSocket,  
                           fRTPPayloadFormat,  
                           fRTPTimestampFrequency);  
      } else if (strcmp(fCodecName, "VORBIS") == 0) { // Vorbis audio  
    fReadSource = fRTPSource  
      = VorbisAudioRTPSource::createNew(env(), fRTPSocket,  
                        fRTPPayloadFormat,  
                        fRTPTimestampFrequency);  
      } else if (strcmp(fCodecName, "VP8") == 0) { // VP8 video  
    fReadSource = fRTPSource  
      = VP8VideoRTPSource::createNew(env(), fRTPSocket,  
                     fRTPPayloadFormat,  
                     fRTPTimestampFrequency);  
      } else if (strcmp(fCodecName, "AC3") == 0 || strcmp(fCodecName, "EAC3") == 0) { // AC3 audio  
    fReadSource = fRTPSource  
      = AC3AudioRTPSource::createNew(env(), fRTPSocket,  
                     fRTPPayloadFormat,  
                     fRTPTimestampFrequency);  
      } else if (strcmp(fCodecName, "MP4V-ES") == 0) { // MPEG-4 Elementary Stream video  
    fReadSource = fRTPSource  
      = MPEG4ESVideoRTPSource::createNew(env(), fRTPSocket,  
                         fRTPPayloadFormat,  
                         fRTPTimestampFrequency);  
      } else if (strcmp(fCodecName, "MPEG4-GENERIC") == 0) {  
    fReadSource = fRTPSource  
      = MPEG4GenericRTPSource::createNew(env(), fRTPSocket,  
                         fRTPPayloadFormat,  
                         fRTPTimestampFrequency,  
                         fMediumName, fMode,  
                         fSizelength, fIndexlength,  
                         fIndexdeltalength);  
      } else if (strcmp(fCodecName, "MPV") == 0) { // MPEG-1 or 2 video  
    fReadSource = fRTPSource  
      = MPEG1or2VideoRTPSource::createNew(env(), fRTPSocket,  
                          fRTPPayloadFormat,  
                          fRTPTimestampFrequency);  
      } else if (strcmp(fCodecName, "MP2T") == 0) { // MPEG-2 Transport Stream  
    fRTPSource = SimpleRTPSource::createNew(env(), fRTPSocket, fRTPPayloadFormat,  
                        fRTPTimestampFrequency, "video/MP2T",  
                        0, False);  
    fReadSource = MPEG2TransportStreamFramer::createNew(env(), fRTPSource);  
    // this sets "durationInMicroseconds" correctly, based on the PCR values  
      } else if (strcmp(fCodecName, "H261") == 0) { // H.261  
    fReadSource = fRTPSource  
      = H261VideoRTPSource::createNew(env(), fRTPSocket,  
                      fRTPPayloadFormat,  
                      fRTPTimestampFrequency);  
      } else if (strcmp(fCodecName, "H263-1998") == 0 ||  
         strcmp(fCodecName, "H263-2000") == 0) { // H.263+  
    fReadSource = fRTPSource  
      = H263plusVideoRTPSource::createNew(env(), fRTPSocket,  
                          fRTPPayloadFormat,  
                          fRTPTimestampFrequency);  
      } else if (strcmp(fCodecName, "H264") == 0) {  
    fReadSource = fRTPSource  
      = H264VideoRTPSource::createNew(env(), fRTPSocket,  
                      fRTPPayloadFormat,  
                      fRTPTimestampFrequency);  
      } else if (strcmp(fCodecName, "DV") == 0) {  
    fReadSource = fRTPSource  
      = DVVideoRTPSource::createNew(env(), fRTPSocket,  
                    fRTPPayloadFormat,  
                    fRTPTimestampFrequency);  
      } else if (strcmp(fCodecName, "JPEG") == 0) { // motion JPEG  
    fReadSource = fRTPSource  
      = JPEGVideoRTPSource::createNew(env(), fRTPSocket,  
                      fRTPPayloadFormat,  
                      fRTPTimestampFrequency,  
                      videoWidth(),  
                      videoHeight());  
      } else if (strcmp(fCodecName, "X-QT") == 0  
         || strcmp(fCodecName, "X-QUICKTIME") == 0) {  
    // Generic QuickTime streams, as defined in  
    // <http://developer.apple.com/quicktime/icefloe/dispatch026.html>  
    char* mimeType  
      = new char[strlen(mediumName()) + strlen(codecName()) + 2] ;  
    sprintf(mimeType, "%s/%s", mediumName(), codecName());  
    fReadSource = fRTPSource  
      = QuickTimeGenericRTPSource::createNew(env(), fRTPSocket,  
                         fRTPPayloadFormat,  
                         fRTPTimestampFrequency,  
                         mimeType);  
    delete[] mimeType;  
      } else if (  strcmp(fCodecName, "PCMU") == 0 // PCM u-law audio  
           || strcmp(fCodecName, "GSM") == 0 // GSM audio  
           || strcmp(fCodecName, "DVI4") == 0 // DVI4 (IMA ADPCM) audio  
           || strcmp(fCodecName, "PCMA") == 0 // PCM a-law audio  
           || strcmp(fCodecName, "MP1S") == 0 // MPEG-1 System Stream  
           || strcmp(fCodecName, "MP2P") == 0 // MPEG-2 Program Stream  
           || strcmp(fCodecName, "L8") == 0 // 8-bit linear audio  
           || strcmp(fCodecName, "L16") == 0 // 16-bit linear audio  
           || strcmp(fCodecName, "L20") == 0 // 20-bit linear audio (RFC 3190)  
           || strcmp(fCodecName, "L24") == 0 // 24-bit linear audio (RFC 3190)  
           || strcmp(fCodecName, "G726-16") == 0 // G.726, 16 kbps  
           || strcmp(fCodecName, "G726-24") == 0 // G.726, 24 kbps  
           || strcmp(fCodecName, "G726-32") == 0 // G.726, 32 kbps  
           || strcmp(fCodecName, "G726-40") == 0 // G.726, 40 kbps  
           || strcmp(fCodecName, "SPEEX") == 0 // SPEEX audio  
           || strcmp(fCodecName, "T140") == 0 // T.140 text (RFC 4103)  
           || strcmp(fCodecName, "DAT12") == 0 // 12-bit nonlinear audio (RFC 3190)  
           ) {  
    createSimpleRTPSource = True;  
    useSpecialRTPoffset = 0;  
      } else if (useSpecialRTPoffset >= 0) {  
    // We don't know this RTP payload format, but try to receive  
    // it using a 'SimpleRTPSource' with the specified header offset:  
    createSimpleRTPSource = True;  
      } else {  
    env().setResultMsg("RTP payload format unknown or not supported");  
    break;  
      }  
        
      if (createSimpleRTPSource) {  
    char* mimeType  
      = new char[strlen(mediumName()) + strlen(codecName()) + 2] ;  
    sprintf(mimeType, "%s/%s", mediumName(), codecName());  
    fReadSource = fRTPSource  
      = SimpleRTPSource::createNew(env(), fRTPSocket, fRTPPayloadFormat,  
                       fRTPTimestampFrequency, mimeType,  
                       (unsigned)useSpecialRTPoffset,  
                       doNormalMBitRule);  
    delete[] mimeType;  
      }  
    }  
  
    return True;  
  } while (0);  
  
  return False; // an error occurred  
}  
-------------------------------------------------------------------------------------------------------------
3. 固本培元's column (放牛娃不吃草)

This blogger's posts have passed ten thousand views.
Article title:
Receiving H.264 with live555 and decoding it to YUV420 with ffmpeg

Blog link:

固本培元

Excerpts:
1.0
The live555 client
Building live555 produces many example programs, among them a client example suitable for rewriting. This article rewrites the testRTSPClient.cpp example.
The official live555 documentation covers this: (link)

2.0
Saving the H.264 stream to a file:
live555 omits the start codes when transporting an H.264 stream; if you want to store the stream and play it back with VLC, just add the start codes back.
Start code: 0x00 0x00 0x00 0x01
(note: the 0x01 sits at the highest address, i.e. it is the last byte in the stream)
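
A minimal sketch of that, assuming the frame sits in fReceiveBuffer with length frameSize (as in testRTSPClient's sink) and fp is an already-open FILE*:

void writeNalUnit(FILE* fp, const unsigned char* fReceiveBuffer, unsigned frameSize) {
  // Restore the Annex-B start code before each NAL unit so that VLC can play
  // the resulting .h264 file; 0x01 is written last (at the highest address).
  static const unsigned char startCode[4] = {0x00, 0x00, 0x00, 0x01};
  fwrite(startCode, 1, sizeof startCode, fp);
  fwrite(fReceiveBuffer, 1, frameSize, fp);
}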

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

4. Blog title:
Live555 + H.264 + ffmpeg client decoding notes

The client uses ffmpeg to decode the H.264 frames from an MP4 file and plays them with SDL 2.0.
1. First, let's understand what SPS and PPS are. Link:
(open the link)
That article's analysis is very professional, haha.

When transporting H.264 over RTP you need an SDP description, which includes two items: the Sequence Parameter Set (SPS) and the Picture Parameter Set (PPS).

Where do they come from? From the H.264 byte stream itself. In the stream, every NAL unit starts with either "0x00 0x00 0x01" or "0x00 0x00 0x00 0x01". After finding a start code, check the low 5 bits of the first byte that follows it: 7 means SPS and 8 means PPS, i.e. (data[4] & 0x1f) == 7 || (data[4] & 0x1f) == 8.

Then strip the start code from the NAL unit and Base64-encode the rest; the result can be used in the SDP. The SPS and PPS must be separated by a comma. A sketch of the NAL-type check follows below.
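
A small sketch of that check (plain C++, names illustrative):

#include <cstdint>
#include <cstddef>

// Returns the NAL unit type (7 = SPS, 8 = PPS, 5 = IDR slice, ...) of a
// buffer that begins with a 3- or 4-byte Annex-B start code, or -1 if none.
int nalUnitType(const uint8_t* data, size_t size) {
  if (size > 4 && data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1)
    return data[4] & 0x1f;
  if (size > 3 && data[0] == 0 && data[1] == 0 && data[2] == 1)
    return data[3] & 0x1f;
  return -1;
}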

2. Now, let's see how the live555 client obtains the SPS/PPS and decodes.
The "testRTSPClient" demo application receives each (video and/or audio) frame into a memory buffer, but does not do anything with the frame data. 

You can, however, use this code as a model for a 'media player' application that decodes and renders these frames.

 Note, in particular, the "DummySink" class that the "testRTSPClient" demo application uses - and the (non-static) "DummySink::afterGettingFrame()" function. 

When this function is called, a complete 'frame' (for H.264 or H.265, this will be a "NAL unit") will have already been delivered into "fReceiveBuffer". 

Note that our "DummySink" implementation doesn't actually do anything with this data; that's why it's called a 'dummy' sink. 

If you want to decode (or otherwise process) these frames, you would replace "DummySink" with your own "MediaSink" subclass.

 Its "afterGettingFrame()" function would pass the data (at "fReceiveBuffer", of length "frameSize") to a decoder.

 (A decoder would also use the "presentationTime" timestamp to properly time the rendering of each frame, and to synchronize audio and video.) 

Link: (open this link)

As noted above, the client must do some setup before decoding:

1. Call MediaSubsession::fmtp_spropparametersets() to obtain the Base64-encoded SPS and PPS;

2. Call SPropRecord* parseSPropParameterSets(char const* sPropParameterSetsStr, unsigned& numSPropRecords); note that this is a free function, not a class member.

parseSPropParameterSets(...) returns an SPropRecord*, which in fact points to an array (a block of memory) whose elements are of type SPropRecord.

In my tests the returned array has length 2: the first element is the SPS, the second is the PPS.

Source:

SPropRecord* parseSPropParameterSets(char const* sPropParameterSetsStr,
                                     // result parameter:
                                     unsigned& numSPropRecords) {
  // Make a copy of the input string, so we can replace the commas with '\0's:
  char* inStr = strDup(sPropParameterSetsStr);
  if (inStr == NULL) {
    numSPropRecords = 0;
    return NULL;
  }


  // Count the number of commas (and thus the number of parameter sets):
  numSPropRecords = 1;
  char* s;
  for (s = inStr; *s != '\0'; ++s) {
    if (*s == ',') {
      ++numSPropRecords;
      *s = '\0';
    }
  }


  // Allocate and fill in the result array:
  SPropRecord* resultArray = new SPropRecord[numSPropRecords]; // ****** note this line: the caller owns this array ******
  s = inStr;
  for (unsigned i = 0; i < numSPropRecords; ++i) {
    resultArray[i].sPropBytes = base64Decode(s, resultArray[i].sPropLength);
    s += strlen(s) + 1;
  }


  delete[] inStr;
  return resultArray;
}


Let's continue; this part is the client code -

void DummySink::afterGettingFrame1(unsigned frameSize, unsigned numTruncatedBytes,
                                   struct timeval presentationTime, unsigned /*durationInMicroseconds*/)
{
  unsigned int Num = 0;
  unsigned int &SPropRecords = Num;

  // (Parsing the SDP once at setup would be cheaper than doing it per frame.)
  SPropRecord *p_record = parseSPropParameterSets(fSubsession.fmtp_spropparametersets(), SPropRecords);

  SPropRecord &sps = p_record[0];
  SPropRecord &pps = p_record[1];

  m_player->setSDPInfo(sps.sPropBytes, sps.sPropLength, pps.sPropBytes, pps.sPropLength); // pass sps/pps to the player to initialize the decoder

  delete[] p_record; // the array is caller-owned; freeing it avoids a per-frame leak

  m_player->renderOneFrame(frameSize); // signal the player that a frame is ready to render

  // Then continue, to request the next frame of data:
  continuePlaying();
}


I hope it's clear by the end.

Another blogger, 牛搞, also writes about streaming-media development and gets plenty of traffic. Impressive, haha. His address is worth including:

(open 牛搞's link)

----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Second revision: (this is the solution for saving the H.264 stream to a file)

Another update today.

In liveMedia/include/H264VideoFileSink.hh (inside the live555 tree):

class H264VideoFileSink: public H264or5VideoFileSink {
public:
  static H264VideoFileSink* createNew(UsageEnvironment& env, char const* fileName,
				      char const* sPropParameterSetsStr = NULL,
      // "sPropParameterSetsStr" is an optional 'SDP format' string
      // (comma-separated Base64-encoded) representing SPS and/or PPS NAL-units
      // to prepend to the output
				      unsigned bufferSize = 100000,
				      Boolean oneFilePerFrame = False);
      // See "FileSink.hh" for a description of these parameters.

protected:
  H264VideoFileSink(UsageEnvironment& env, FILE* fid,
		    char const* sPropParameterSetsStr,
		    unsigned bufferSize, char const* perFrameFileNamePrefix);
      // called only by createNew()
  virtual ~H264VideoFileSink();
};

In other words, to actually view the video you normally want a .h264 file, right?

So:

if (0 == strncmp(scs.subsession->mediumName(), "video", 5))
{
  do {
    if (0 == strcmp(scs.subsession->codecName(), "H264"))
    {
      // matches the createNew() signature declared above
      scs.subsession->sink = H264VideoFileSink::createNew(env, filename,
                                scs.subsession->fmtp_spropparametersets(), buf_size);
    }
  } while (0);
}


If it still isn't clear at this point, there's not much more I can do. For comparison, my own approach was to modify the data in fReceiveBuffer directly, inside DummySink::afterGettingFrame:

I prepend {0, 0, 0, 1} to the data,

then hand it to the decoder. My way is rather brute-force, and sometimes no image comes out at all. A sketch of the cleaner, commonly used variant of this trick follows.
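
This sketch is based on common testRTSPClient modifications, not my exact code: reserve the 4 start-code bytes in front of the receive buffer so no copy or shift is ever needed.

// In the DummySink constructor: leave room for the start code up front.
fReceiveBuffer = new u_int8_t[4 + DUMMY_SINK_RECEIVE_BUFFER_SIZE];
fReceiveBuffer[0] = 0; fReceiveBuffer[1] = 0; fReceiveBuffer[2] = 0; fReceiveBuffer[3] = 1;

// In continuePlaying(): have live555 deliver each NAL unit after the start code.
fSource->getNextFrame(fReceiveBuffer + 4, DUMMY_SINK_RECEIVE_BUFFER_SIZE,
                      afterGettingFrame, this, onSourceClosure, this);

// In afterGettingFrame(): pass (fReceiveBuffer, frameSize + 4) to the decoder.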

Feel free to leave me a comment so we can discuss.

----------------------------------------------------------------------------------------------------------------------------------------------------------------

Third revision (dated 2017-09-26):

The link below explains how to use ffmpeg to decode the so-called raw H.264 stream coming out of live555 (no saving to file, just decoding).

The trick is a single setting when initializing the ffmpeg decoder: fill in its extradata.

m_decoderContext->extradata =  (uint8_t*)av_malloc(100 + AV_INPUT_BUFFER_PADDING_SIZE);
int extraDataSize = 0;
for (int i = 0; i < numSPropRecords; i++)
{
    memcpy(m_decoderContext->extradata + extraDataSize, startCode, 4);
    extraDataSize += 4;
    memcpy(m_decoderContext->extradata + extraDataSize, sPropRecords[i].sPropBytes, sPropRecords[i].sPropLength);
    extraDataSize += sPropRecords[i].sPropLength;
}
m_decoderContext->extradata_size = extraDataSize;
The specific link is given below; I hope it helps:

(open the link to see how others set this up)

If the link won't open, you may need to get past the firewall.
