
[Repost] A NACK scheme in FEC/QoS

1. Advantages of pairing FEC/QoS with NACK. By MediaPro (QQ: 1821554146). ACK is the transport-layer reliability mechanism most readers know: TCP-like UDP schemes (UDT, KCP, etc.) gain higher throughput by tuning when ACKs are sent and by shortening retransmission timeouts, but ACK is a poor fit for latency-critical interactive live streaming. NACK works the other way around: the receiver asks the far end to retransmit only when a packet fails to arrive, which makes it better suited…

2017-12-25 14:35:49 2535
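The receiver-side gap detection a NACK scheme relies on can be sketched as follows. This is a minimal illustration, not the article's code; it assumes RTP-style 16-bit sequence numbers with wrap-around, and the NackTracker name is hypothetical.

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Track the highest sequence number seen so far and record any gaps;
// the recorded numbers are what the next NACK packet would request.
class NackTracker {
public:
    // Process an incoming RTP-style sequence number.
    void onPacket(uint16_t seq) {
        if (!started_) {
            started_ = true;
            highest_ = seq;
            return;
        }
        if (seq == static_cast<uint16_t>(highest_ + 1)) {
            highest_ = seq;                  // in-order packet
        } else if (isNewer(seq, highest_)) {
            // Gap: every number between highest_+1 and seq-1 is missing.
            for (uint16_t s = highest_ + 1; s != seq; ++s) missing_.insert(s);
            highest_ = seq;
        } else {
            missing_.erase(seq);             // a retransmission filled a gap
        }
    }
    // Sequence numbers to put into the next NACK request, in order.
    std::vector<uint16_t> pendingNacks() const {
        return std::vector<uint16_t>(missing_.begin(), missing_.end());
    }
private:
    // True if a is ahead of b, taking 16-bit wrap-around into account.
    static bool isNewer(uint16_t a, uint16_t b) {
        return static_cast<uint16_t>(a - b) < 0x8000;
    }
    bool started_ = false;
    uint16_t highest_ = 0;
    std::set<uint16_t> missing_;
};
```

A real implementation would also age out entries and cap retransmission requests, but the gap-detection core is the part NACK adds over plain ACK.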

[Repost] Using FEC to improve UDP (RTP) audio/video transmission

In real-time audio and video, UDP is king. The transport-layer options for real-time audio/video interaction over the Internet are TCP (e.g. RTMP) and UDP (e.g. RTP). TCP provides relatively reliable delivery between two endpoints through a handshake mechanism: when data reaches the receiver, the receiver checks it for correctness, and the sender may send the next block only after receiving the receiver's confirmation. If no acknowledgment arrives, the block must be re…

2017-12-25 14:13:29 1093
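The simplest flavor of FEC the excerpt refers to can be sketched with a single XOR parity packet per group: if exactly one packet of the group is lost, it is rebuilt locally with no retransmission round-trip. A hedged illustration assuming equal-length packets; real-world schemes (Reed-Solomon, WebRTC's ULPFEC/FlexFEC) are considerably more elaborate.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using Packet = std::vector<uint8_t>;

// Build the XOR parity packet for a group of equal-length packets.
Packet makeParity(const std::vector<Packet>& group) {
    Packet parity(group.at(0).size(), 0);
    for (const Packet& p : group)
        for (size_t i = 0; i < parity.size(); ++i) parity[i] ^= p[i];
    return parity;
}

// Recover the single missing packet from the survivors plus the parity:
// XOR-ing the survivors out of the parity leaves exactly the lost packet.
Packet recover(const std::vector<Packet>& survivors, const Packet& parity) {
    Packet lost = parity;
    for (const Packet& p : survivors)
        for (size_t i = 0; i < lost.size(); ++i) lost[i] ^= p[i];
    return lost;
}
```

The trade-off the article goes on to discuss is bandwidth overhead (one extra packet per group) against latency (no waiting for a NACK round-trip).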

[Repost] The ultimate fix for VS2015 installation failures such as "installation package corrupt or missing"

Download from http://download.csdn.net/download/k0000000r/9252197, then double-click the certificate, choose Install Certificate, and follow the steps shown, making sure to select "Trusted Root Certification Authorities". 3. Start installing VS2015 by double-clicking the installer. If "installation package corrupt or missing" appears again during installation, first note which file it names, then close the VS installer and open…

2017-12-25 13:59:43 33171 1

[Repost] How to fix: '"max": is not a member of "std"'

The standard library defines two function templates, std::min() and std::max(), in the <algorithm> header. They are normally used to compute the minimum or maximum of a pair of values. Unfortunately they could not be used in Visual C++, because those function templates were not defined there: the names min and max conflict with the traditional min/max macro definitions in <windows.h>. To work around the problem, Visual C++ defined two functionally identical templates, _cpp_min() and _cpp_max(), which can be used in place of std::…

2017-12-19 14:44:04 12739
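The clash and the usual workarounds fit in a few lines. A minimal sketch: the macro below stands in for the one <windows.h> defines; in real code you would either define NOMINMAX before including <windows.h>, or parenthesize the call so the function-like macro cannot expand.

```cpp
#include <algorithm>
#include <cassert>

// Stand-in for the min/max macros that <windows.h> defines.
#define max(a, b) (((a) > (b)) ? (a) : (b))

int demo() {
    // Writing std::max(3, 7) here would fail to compile: the preprocessor
    // rewrites it to std::(((3) > (7)) ? (3) : (7)), which is not valid C++.
    int a = (std::max)(3, 7);  // parentheses suppress macro expansion
#undef max                     // or: #define NOMINMAX before <windows.h>
    int b = std::max(a, 10);   // safe again once the macro is gone
    return b;
}
```

On modern Visual C++ the NOMINMAX route is the standard fix, so the _cpp_min/_cpp_max workaround mentioned in the excerpt is rarely needed today.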

[Repost] Changing x264 encoding parameters in real time

Quote (reply #11, dengzikun): Quote (reply #9, zjm208): That is really strange! Why does x264_encoder_reconfig() change nothing for me, while re-calling x264_encoder_open() does make a difference? I am using ABR, and I have set all the parameters you mentioned. During encoding I modified i_bitrate or i_rc_method, but it had no effect whatsoever. Can…

2017-12-19 14:43:07 976

[Repost] Latest webrtc code

http://download.csdn.net/download/dusong7/9904631

2017-12-13 16:02:00 454

[Repost] Transport-layer optimization in WebRTC

http://www.ucpaas.com/news/201706199.html
http://www.jianshu.com/p/0f7ee0e0b3be
http://www.jianshu.com/p/5259a8659112
http://www.jianshu.com/p/a7f6ec0c9273
http://www.jianshu.com/p/06a27ebacec7

2017-12-13 15:26:29 1392

[Repost] Network feedback and control in webrtc

Reposted from Bianfeng, http://befo.io/4206.html. 1. Introduction. "Standing in the wind, even a pig can fly." Lei Jun's famous line has become a household saying. Among the pigs standing in today's wind, besides NetEase's Weiyang pork (an actual pig), live video streaming surely counts as one. This year has brought a nonstop stream of funding announcements for live-streaming platforms of every round. For internet engineers, RTC (Real Time Communication), which stands at the…

2017-12-09 09:38:11 244

[Original] ffmpeg with Android hardware decoding via MediaCodec

1) Build configuration: ./configure --enable-cross-compile --cross-prefix=/e/arm-linux-androideabi-4.6/bin/arm-linux-androideabi- --sysroot=/e/android/android-ndk-r8b/platforms/android-14/arch-arm --extra-cflags…

2017-12-04 16:42:10 2425 2

[Original] Breakpoint-debugging JNI in Eclipse

1) Install Eclipse IDE for C/C++ Developers, Version: Oxygen Release (4.7.0), Build id: 20170620-1800. 2) In Eclipse, go to Help, then Install New Software, and install ADT-23.0.0.zip (104001830 bytes). 3) In Eclipse, go to Window, then Prefer…

2017-12-04 15:17:48 676 2

[Original] Choosing between Qt Creator and VC + Qt add-in

Here I am, back to building UIs again; JNI development was dull anyway, and debugging it was painful. To the point: when developing with Qt, should you pick Qt Creator or VC + the Qt add-in? Opinions online are all over the place, leaving you at a loss. After fumbling around for a while, the answer became clear to me. As an aside: when first choosing a stack I leaned toward MFC + DuiLib, since I didn't feel like learning anything new; after a month with Qt, I'm glad I didn't cling to the old ways, because dumping MFC entirely was absolutely the right call. If all you are doing is UI, without question…

2017-12-04 14:41:14 2087

Packaged overlay

A packaged overlay; use an overlay to fix the tearing problem.

2012-08-28

overlay demo

overlay demo

2012-08-15

x264-2009-vc9.0.rar

x264 debuggable with VC (x264-2009-vc9.0.rar)

2012-08-15

insight-7.3.50.20110803-cvs-src.tar

Build with: configure && make && make install (gdb version 7.3)

2012-05-16

Visual gdb debugging on Windows with Insight and MinGW

Visual gdb debugging on Windows with Insight and MinGW. 1. Run wish84. 2. Run insight from the wish84 console.

2012-05-16

video osd yuv alpha

video osd yuv alpha

2012-02-17

x264-intel IPP 比较.rar

A comparison of x264 and Intel IPP (x264-intel IPP 比较.rar)

2012-02-07

ffmpeg vc project

ffmpeg ported to a VC project (ffmpeg vc project)

2012-02-06

ffmpeg-2012-demo.rar

The latest ffmpeg h264 demo

2012-02-06

Extract and test the latest ffmpeg h264

Extract and test the latest ffmpeg h264

2012-02-06

RTSP stream testing tool

RTSP stream testing tool

2012-02-01

Tool for testing CoreAVC decoding speed

Tool for testing CoreAVC decoding speed

2012-01-31

h.264 decoder and play yuv

h264 decoding with YUV playback via DirectDraw

2012-01-13

ffmpeg 0.9 h264 decoder demo

ffmpeg 0.9 h264 decoder demo

2012-01-12

h.264 test sequences

h.264 test sequences

2012-01-05

h.264 decoder demo

h.264 decoder demo

2012-01-05

h.264 rate-distortion optimization

h.264 rate-distortion optimization

2010-07-19

MobaXterm.rar

MobaXterm

2020-03-20

Android PCM playback

Android PCM playback

2017-04-21

Android video tool

Android video tool

2017-03-29

Complete demo of ffmpeg DXVA GPU decoding

A complete demo of ffmpeg DXVA GPU decoding; compiles and runs right after download.

2016-08-31

ffmpeg demo 2016

ffmpeg demo 2016

2016-08-16

x264 diary

x264 blog: the x264 author's blog

2016-05-18

h264 decoder source code extracted from ffmpeg (with build environment), part 3

C:\MinGW\msys\1.0\home\Administrator\h264

2016-04-14

h264 decoder source code extracted from ffmpeg (with build environment), part 2

C:\MinGW\msys\1.0\home\Administrator\h264

2016-04-14

h264 decoder source code extracted from ffmpeg (with build environment)

C:\MinGW\msys\1.0\home\Administrator\h264

2016-04-14

MP4 inspection tool: QTAtomViewer.exe

MP4 inspection tool: QTAtomViewer.exe

2014-04-18

Passing data with custom types in COM (ActiveX)

Passing data with custom types in COM (ActiveX)

2014-04-08

Aibao (computer time-limit software)

A small program I wrote to manage my child's computer use. It can enforce a break of a few minutes at set intervals, to help protect children's eyes.

2013-08-29

DirectShow MSDN

DirectShow MSDN documentation / user manual

2013-08-28

Packing and unpacking MPEG-PS streams

Packing and unpacking MPEG-PS streams

2013-08-05

iPhone h.264 live encoding (real-time hardware encoding)

Hardware Video Encoding on iPhone: RTSP Server example. On iOS, the only way to use hardware acceleration when encoding video is to use AVAssetWriter, and that means writing the compressed video to file. If you want to stream that video over the network, for example, it needs to be read back out of the file. I’ve written an example application that demonstrates how to do this, as part of an RTSP server that streams H264 video from the iPhone or iPad camera to remote clients. The end-to-end latency, measured using a low-latency DirectShow client, is under a second. Latency with VLC and QuickTime playback is a few seconds, since these clients buffer somewhat more data at the client side.

The whole example app is available in source form here under an attribution license. It’s a very basic app, but is fully functional. Build and run the app on an iPhone or iPad, then use Quicktime Player or VLC to play back the URL that is displayed in the app.

Details, Details

When the compressed video data is written to a MOV or MP4 file, it is written to an mdat atom and indexed in the moov atom. However, the moov atom is not written out until the file is closed, and without that index, the data in mdat is not easily accessible. There are no boundary markers or sub-atoms, just raw elementary stream. Moreover, the data in the mdat cannot be extracted or used without the data from the moov atom (specifically the lengthSize and SPS and PPS param sets). My example code takes the following approach to this problem: Only video is written using the AVAssetWriter instance, or it would be impossible to distinguish video from audio in the mdat atom. Initially, I create two AVAssetWriter instances. The first frame is written to both, and then one instance is closed. Once the moov atom has been written to that file, I parse the file and assume that the parameters apply to both instances, since the initial conditions were the same.

Once I have the parameters, I use a dispatch_source object to trigger reads from the file whenever new data is written. The body of the mdat chunk consists of H264 NALUs, each preceded by a length field. Although the length of the mdat chunk is not known, we can safely assume that it will continue to the end of the file (until we finish the output file and the moov is added). For RTP delivery of the data, we group the NALUs into frames by parsing the NALU headers. Since there are no AUDs marking the frame boundaries, this requires looking at several different elements of the NALU header. Timestamps arrive with the uncompressed frames from the camera and are stored in a FIFO. These timestamps are applied to the compressed frames in the same order. Fortunately, the AVAssetWriter live encoder does not require re-ordering of frames.

When the file gets too large, a new instance of AVAssetWriter is used, so that the old temporary file can be deleted. Transition code must then wait for the old instance to be closed so that the remaining NALUs can be read from the mdat atom without reading past the end of that atom into the subsequent metadata. Finally, the new file is opened and timestamps are adjusted. The resulting compressed output is seamless. A little experimentation suggests that we are able to read compressed frames from file about 500ms or so after they are captured, and these frames then arrive around 200ms after that at the client app.

Rotation

For modern graphics hardware, it is very straightforward to rotate an image when displaying it, and this is the method used by AVFoundation to handle rotation of the camera. The buffers are captured, encoded and written to file in landscape orientation. If the device is rotated to portrait mode, a transform matrix is written out to the file to indicate that the video should be rotated for playback. At the same time, the preview layer is also rotated to match the device orientation. This is efficient and works in most cases. However, there isn’t a way to pass this transform matrix to an RTP client, so the view on a remote player will not match the preview on the device if it is rotated away from the base camera orientation. The solution is to rotate the pixel buffers after receiving them from the capture output and before delivering them to the encoder. There is a cost to this processing, and this example code does not include this extra step.

2013-05-23
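The mdat-walking step the example describes (length-prefixed NALUs, no boundary markers) can be sketched like this. A hedged illustration, not the example app's code: it assumes the lengthSize recovered from the moov atom is 4 bytes by default and stops at a truncated tail, just as the live reader must stop before the moov is written.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Walk a buffer of the mdat body: each NALU is preceded by a big-endian
// length field of lengthSize bytes. Returns the NALU payloads found;
// an incomplete NALU at the end (not yet fully written) is skipped.
std::vector<std::vector<uint8_t>> splitNalus(const std::vector<uint8_t>& mdat,
                                             size_t lengthSize = 4) {
    std::vector<std::vector<uint8_t>> nalus;
    size_t pos = 0;
    while (pos + lengthSize <= mdat.size()) {
        uint32_t len = 0;  // read the big-endian length prefix
        for (size_t i = 0; i < lengthSize; ++i)
            len = (len << 8) | mdat[pos + i];
        pos += lengthSize;
        if (pos + len > mdat.size()) break;  // truncated tail: stop reading
        nalus.emplace_back(mdat.begin() + pos, mdat.begin() + pos + len);
        pos += len;
    }
    return nalus;
}
```

Grouping these NALUs into frames for RTP would then require inspecting the NALU headers, as the text notes, since no AUDs mark frame boundaries.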

h264 decoder source code extracted from ffmpeg

Extracted from ffmpeg over the course of a week. I had meant to study h.264 decoding, but later shelved it. May it find a worthy owner.

2013-03-04

SSE4 intel pdf

SSE4 intel pdf

2012-11-01
