JavaCV: Decoding an H.264 "live" stream coming from a Red5 server on an Android device

At last... I finally got it working after a lot of R&D.

What I had been missing was an analysis of the video frame structure. H.264 video is made up of "I" and "P" frames. An I-frame (intra-coded key frame) is self-contained: it carries a complete picture, and the stream's sequence header additionally carries the SPS/PPS parameters the decoder needs to initialize. A P-frame (predicted frame) encodes only the differences from previous frames.

So the P-frames have to be decoded with reference to the information carried by the preceding I-frame and sequence header.
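In this setup each packet from Red5 arrives as an FLV video tag body, so the first two bytes already tell you what kind of data you are holding. A minimal sketch of that header parsing (assuming the standard FLV VideoData layout; the class name `FlvVideoTag` is mine, not part of JavaCV):

```java
// Sketch: interpreting the leading bytes of an FLV VideoData tag body.
// Per the FLV spec:
//   data[0] high nibble = frame type (1 = key frame, 2 = inter frame)
//   data[0] low  nibble = codec id   (7 = AVC / H.264)
//   data[1]             = AVCPacketType (0 = sequence header with SPS/PPS, 1 = NALU)
//   data[2..4]          = composition time offset
// The H.264 payload itself starts at offset 5 -- which is why the decode
// method below does Arrays.copyOfRange(data, 5, data.length).
public final class FlvVideoTag {
    public static boolean isKeyFrame(byte[] data) {
        return ((data[0] >> 4) & 0x0F) == 1; // high nibble of the first byte
    }

    public static boolean isAvc(byte[] data) {
        return (data[0] & 0x0F) == 7;        // low nibble: codec id
    }

    public static boolean isSequenceHeader(byte[] data) {
        return data[1] == 0;                 // AVCPacketType 0 = SPS/PPS config
    }
}
```

This is why the code below branches on `data[1]`: a value of 0 means the packet is the AVC sequence header, whose payload must go into the codec context as `extradata` before any frames can be decoded.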

So the final code looks like this:

```java
public IplImage decodeFromVideo(byte[] data, long timeStamp) {

    avcodec.av_init_packet(receivedVideoPacket); // empty AVPacket

    /*
     * data[1] is the AVCPacketType: 0 = sequence header (carries the SPS/PPS
     * the decoder needs as extradata), 1 = regular NALU (actual frame data).
     */
    byte frameFlag = data[1];

    // Skip the 5-byte FLV video tag header; the rest is the H.264 payload
    byte[] subData = Arrays.copyOfRange(data, 5, data.length);
    BytePointer videoData = new BytePointer(subData);

    if (frameFlag == 0) {
        avcodec.AVCodec codec = avcodec
                .avcodec_find_decoder(avcodec.AV_CODEC_ID_H264);
        if (codec != null) {
            videoCodecContext = avcodec.avcodec_alloc_context3(codec);
            videoCodecContext.width(320);
            videoCodecContext.height(240);
            videoCodecContext.pix_fmt(avutil.AV_PIX_FMT_YUV420P);
            videoCodecContext.codec_type(avutil.AVMEDIA_TYPE_VIDEO);
            videoCodecContext.extradata(videoData);
            videoCodecContext.extradata_size(videoData.capacity());
            videoCodecContext.flags2(videoCodecContext.flags2()
                    | avcodec.CODEC_FLAG2_CHUNKS);
            avcodec.avcodec_open2(videoCodecContext, codec,
                    (PointerPointer) null);
            if ((videoCodecContext.time_base().num() > 1000)
                    && (videoCodecContext.time_base().den() == 1)) {
                videoCodecContext.time_base().den(1000);
            }
        } else {
            Log.e("test", "H.264 decoder could not be found");
        }
    }

    if ((decodedPicture = avcodec.avcodec_alloc_frame()) != null) {
        if ((processedPicture = avcodec.avcodec_alloc_frame()) != null) {
            int width = getImageWidth() > 0 ? getImageWidth()
                    : videoCodecContext.width();
            int height = getImageHeight() > 0 ? getImageHeight()
                    : videoCodecContext.height();

            switch (imageMode) {
            case COLOR:
            case GRAY:
                int fmt = 3; // pixel format 3 == BGR24 in this FFmpeg build
                int size = avcodec.avpicture_get_size(fmt, width, height);
                processPictureBuffer = new BytePointer(
                        avutil.av_malloc(size));
                avcodec.avpicture_fill(new AVPicture(processedPicture),
                        processPictureBuffer, fmt, width, height);
                returnImageFrame = opencv_core.IplImage.createHeader(320,
                        240, 8, 1);
                break;
            case RAW:
                processPictureBuffer = null;
                returnImageFrame = opencv_core.IplImage.createHeader(320,
                        240, 8, 1);
                break;
            default:
                Log.d("showit", "Unexpected imageMode: " + imageMode);
            }

            receivedVideoPacket.data(videoData);
            receivedVideoPacket.size(videoData.capacity());
            receivedVideoPacket.pts(timeStamp);
            videoCodecContext.pix_fmt(avutil.AV_PIX_FMT_YUV420P);

            decodedFrameLength = avcodec.avcodec_decode_video2(videoCodecContext,
                    decodedPicture, isVideoDecoded, receivedVideoPacket);

            if ((decodedFrameLength >= 0) && (isVideoDecoded[0] != 0)) {
                // ... process the image the same way JavaCV does ...
            }
        }
    }
    return returnImageFrame;
}
```

Hope it will help others.

This is a fairly complex task that requires several tools and libraries. Roughly, the steps are:

1. Install FFmpeg and the JavaCV library. These can be pulled in via Maven or downloaded and installed manually.

2. Use JavaCV's FFmpegFrameGrabber class to read the input image or video file. For example:

```java
FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(inputFile);
grabber.start();
```

3. Use JavaCV's FFmpegFrameRecorder class to create the H.264-encoded output stream. For example:

```java
FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(outputUrl, width, height);
recorder.setVideoCodec(avcodec.AV_CODEC_ID_H264);
recorder.setFormat("h264");
recorder.setPixelFormat(avutil.AV_PIX_FMT_YUV420P);
recorder.start();
```

4. Loop over the frames of the input file and write each one to the output stream. For example:

```java
Frame frame;
while ((frame = grabber.grab()) != null) {
    recorder.record(frame);
}
```

5. Stop the output stream and release its resources via FFmpegFrameRecorder. For example:

```java
recorder.stop();
recorder.release();
```

6. Use Java's Socket class to connect to the target IP address and port, then send the output to that socket. For example:

```java
Socket socket = new Socket(ipAddress, port);
OutputStream outputStream = socket.getOutputStream();
InputStream inputStream = new BufferedInputStream(new FileInputStream(outputFile));
byte[] buffer = new byte[1024];
int bytesRead;
while ((bytesRead = inputStream.read(buffer)) != -1) {
    outputStream.write(buffer, 0, bytesRead);
}
socket.close();
```

7. Finally, the ONVIF protocol requires some negotiation and configuration before the stream is sent; the details depend on the ONVIF library and device being used.
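As a complement to step 6: the raw "h264" output format is an Annex-B byte stream, and when relaying it, splitting on NAL-unit start codes can be more useful than sending arbitrary 1024-byte chunks. A minimal sketch in plain Java (no JavaCV needed; `AnnexBSplitter` is a hypothetical helper name, and this version only handles 4-byte `00 00 00 01` start codes, whereas real streams may also contain 3-byte `00 00 01` codes):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Splits an Annex-B H.264 byte stream into NAL units at 0x00000001 start codes.
public final class AnnexBSplitter {
    public static List<byte[]> splitNalUnits(byte[] stream) {
        // Find the offset of every 4-byte start code
        List<Integer> starts = new ArrayList<>();
        for (int i = 0; i + 3 < stream.length; i++) {
            if (stream[i] == 0 && stream[i + 1] == 0
                    && stream[i + 2] == 0 && stream[i + 3] == 1) {
                starts.add(i);
            }
        }
        // Each NAL unit runs from just after its start code to the next one
        List<byte[]> nals = new ArrayList<>();
        for (int j = 0; j < starts.size(); j++) {
            int from = starts.get(j) + 4; // skip the 4-byte start code
            int to = (j + 1 < starts.size()) ? starts.get(j + 1) : stream.length;
            nals.add(Arrays.copyOfRange(stream, from, to));
        }
        return nals;
    }
}
```

Each returned `byte[]` is one complete NAL unit, which can then be written to the socket with whatever framing the receiving side expects.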