  • Blog (12)
  • Resources (35)

[Original] A bug seen when deploying a Qt program

When a Qt .exe is missing platforms\qwindows.dll or libEGL.dll, running it does nothing at all: the program exits silently without reporting any error.
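One way to catch this class of silent failure before shipping is a simple deployment check. A minimal sketch, assuming the two files named above are the culprits (the helper name and file list are illustrative, not part of any Qt API):

```python
import os

# Files whose absence makes a deployed Qt exe exit silently
# (taken from the symptom described above; extend as needed).
REQUIRED = [os.path.join("platforms", "qwindows.dll"), "libEGL.dll"]

def missing_qt_files(app_dir):
    """Return the required Qt runtime files absent from app_dir."""
    return [f for f in REQUIRED
            if not os.path.isfile(os.path.join(app_dir, f))]
```

Running this against the deployment directory before packaging reports exactly which runtime files still need to be copied next to the exe.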

2018-01-31 10:20:44 517

[Original] "Select the driver to be installed" when installing Windows 7 from USB

How to fix the "Select the driver to be installed" prompt when installing Windows 7 from a USB drive: 1) boot into a PE environment; 2) use the PE's "Install Windows" tool to install sources\install.wim from the USB drive; 3) remove the USB drive and reboot.

2018-01-27 14:36:16 26695 1

[Reposted] An analysis of WebRTC's NACK packet-loss retransmission implementation

Original by weizhenwei, 2016-11-07. In WebRTC, forward error correction (FEC) and packet-loss retransmission (NACK) are the two main defenses against network errors. FEC adds redundant error-correction codes to outgoing packets and sends them along with the data; the receiver uses the codes to check and repair the data. RFC 510…
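The receiver-side bookkeeping behind NACK can be sketched as follows. This is a toy model of the idea, not WebRTC's actual NackModule: it ignores sequence-number wraparound and retransmission timers.

```python
class NackTracker:
    """Track gaps in received RTP sequence numbers and build a NACK list."""

    def __init__(self):
        self.last_seq = None   # highest sequence number seen so far
        self.lost = set()      # sequence numbers presumed lost

    def on_packet(self, seq):
        self.lost.discard(seq)  # a retransmitted packet has arrived
        if self.last_seq is not None and seq > self.last_seq + 1:
            # Every number in the gap is presumed lost until it arrives.
            self.lost.update(range(self.last_seq + 1, seq))
        if self.last_seq is None or seq > self.last_seq:
            self.last_seq = seq

    def nack_list(self):
        """Sequence numbers to request for retransmission."""
        return sorted(self.lost)
```

For example, after receiving packets 1, 2, and 5, the tracker would request 3 and 4; once 3 is retransmitted and arrives, only 4 remains on the list.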

2018-01-24 16:38:06 194

[Reposted] WebRTC congestion control based on GCC (part 2)

Building on article [1], this post analyzes WebRTC's GCC algorithm from the source-code perspective. It covers: the data sources, construction, and reception of RTCP RR reports; receiver-side bitrate estimation based on packet-arrival delay; and sender-side bitrate computation and where it takes effect. Congestion control is a key quality-of-service guarantee for real-time streaming. Together with articles [1][2], this post gives a thorough understanding of WebRTC's GCC congestion control, from the mathematical foundations through the algorithm steps to the implementation details, laying the groundwork for further study of WebRTC…
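The AIMD rate update at the heart of GCC can be sketched like this. The constants 0.85 and 1.08 are illustrative values in the spirit of the algorithm, and the signal is assumed to come from the delay-based over-use detector; this is not WebRTC's actual API.

```python
def update_rate(current_bps, incoming_bps, signal):
    """One step of an AIMD-style sender rate update.

    signal: 'overuse', 'underuse', or 'normal', as produced by a
    delay-based over-use detector (assumed, for illustration).
    """
    if signal == 'overuse':
        # Multiplicative decrease: back off below the measured incoming rate.
        return 0.85 * incoming_bps
    if signal == 'normal':
        # Probe upward while the network shows no sign of queuing.
        return current_bps * 1.08
    # Underuse: hold the rate while queues drain.
    return current_bps
```

The asymmetric design (decrease keyed to the measured incoming rate, increase keyed to the current target) is what lets the estimate converge onto the bottleneck capacity instead of oscillating around it.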

2018-01-24 16:36:25 359

[Reposted] Intel's WebRTC solution accelerates the growth of real-time communication

https://software.intel.com/en-us/webrtc-sdk http://webrtc.org.cn/2018-expectations/ At this year's winter Davos economic forum, Google executive chairman Eric Schmidt boldly predicted during a panel that the Internet as we know it will disappear, replaced by a highly personal…

2018-01-23 08:21:25 521

[Reposted] What the new release tells us about why Intel invests in WebRTC

IT168, 2015-11-13. The Singles' Day window is usually packed with launch events. Yesterday, while most media colleagues went to Oracle's cloud technology conference, Laoyu chose a smaller…

2018-01-22 15:05:42 881

[Reposted] What are today's video-conferencing vendors doing with WebRTC?

I. Avaya. Avaya's standing in video conferencing stems from its acquisition of RADVISION (a leading global provider of video communication over IP, 3G, and IMS networks). On Avaya's website, under the "video conferencing products" section (mainly Avaya Scopia), you will find that they rely mostly on hardware improvements. As the site puts it, you can "experience the smoothest video quality, with resolutions up to…

2018-01-17 08:40:32 1452

[Reposted] Which companies, in China and abroad, are strong in video conferencing?

A Zhihu question with 45 followers, 8,805 views, and 22 answers.

2018-01-17 08:30:01 22159 1

[Reposted] GIPS announces support for H.264 SVC scalable video coding for desktop video conferencing

Global IP Solutions (GIPS), a leading supplier of IP multimedia processing solutions, announced support for the H.264 Scalable Video Coding (SVC) scheme, integrating H.264 SVC into the GIPS video engine so that video-conferencing providers can deliver unmatched video quality with minimal bandwidth, tailored to each user's effective bandwidth capacity. The SVC-enabled GIPS VideoEngine product enables multi…
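The bandwidth-matching idea behind SVC can be sketched simply: each receiver subscribes to the highest layer whose cumulative bitrate fits its available bandwidth. A hypothetical helper, not GIPS's actual API:

```python
def select_layer(cumulative_kbps, available_kbps):
    """Pick the highest SVC layer that fits the available bandwidth.

    cumulative_kbps[i] is the total bitrate of layers 0..i (base layer
    plus enhancement layers). Returns the layer index, or -1 if even
    the base layer does not fit.
    """
    best = -1
    for i, rate in enumerate(cumulative_kbps):
        if rate <= available_kbps:
            best = i
    return best
```

With cumulative layer rates of 300/600/1200 kbps, a receiver with 700 kbps available would take layer 1, while one with 1.5 Mbps would take the full-quality layer 2; no re-encoding is needed per receiver, which is the point of SVC.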

2018-01-17 08:18:18 531

[Reposted] Implementing SVC in WebRTC

Implementing SVC in WebRTC. A new technical approach can provide large-scale group WebRTC video conferencing despite WebRTC's lack of native support for Scalable Video Coding. One of the main issue…

2018-01-17 08:17:17 666

[Reposted] Changing directory settings for every project at once in VS2010/VS2013

In VS2005 and VS2008, the include, lib, and dll search paths were set under Tools → Options → Projects and Solutions → VC++ Directories, so that referenced DLLs could be located by absolute path. In VS2010 this can only be changed per project under Project → Properties, and the change does not persist, which is inconvenient. VS2010 therefore introduced a new mechanism: View → Other Windows → Property Manager → expand all → …
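Under the hood, the Property Manager edits per-user property sheets (for example Microsoft.Cpp.Win32.user.props), which every C++ project imports. A sketch of the relevant fragment, with placeholder paths under C:\libs:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Prepended to every project's include / library search paths -->
    <IncludePath>C:\libs\include;$(IncludePath)</IncludePath>
    <LibraryPath>C:\libs\lib;$(LibraryPath)</LibraryPath>
  </PropertyGroup>
</Project>
```

Because the sheet is imported by all projects for that user and platform, editing it once achieves the same global effect the old VC++ Directories dialog had.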

2018-01-03 11:23:07 4436

[Reposted] Can WebRTC be used for live video streaming?

A Zhihu question: can WebRTC be used for live video streaming? (tags: video, HTML5, live streaming, WebRTC)

2018-01-02 16:28:51 3325 1

Packaged overlay

A packaged overlay. To solve the tearing problem, use an overlay.

2012-08-28

overlay demo

overlay demo

2012-08-15

x264-2009-vc9.0.rar

A build of x264 that can be debugged in VC (x264-2009-vc9.0.rar).

2012-08-15

insight-7.3.50.20110803-cvs-src.tar

configure make make install (gdb version 7.3)

2012-05-16

Visual GDB debugging on Windows with Insight and MinGW

Visual GDB debugging on Windows with Insight and MinGW: 1) run wish84; 2) run insight in the wish84 console.

2012-05-16

video osd yuv alpha

video osd yuv alpha

2012-02-17

x264-intel IPP 比较.rar

x264 vs. Intel IPP comparison (x264-intel IPP 比较.rar).

2012-02-07

ffmpeg vc project

ffmpeg ported to a VC project.

2012-02-06

ffmpeg-2012-demo.rar

The latest ffmpeg H.264 demo.

2012-02-06

Extracting the latest ffmpeg H.264 code and testing it

Extracting the latest ffmpeg H.264 code and testing it.

2012-02-06

RTSP stream testing tool

RTSP stream testing tool.

2012-02-01

Tool for testing CoreAVC decoding speed

Tool for testing CoreAVC decoding speed.

2012-01-31

h.264 decoder and play yuv

H.264 decoding with YUV playback via DirectDraw.

2012-01-13

ffmpeg 0.9 h264 decoder demo

ffmpeg 0.9 h264 decoder demo

2012-01-12

H.264 test sequences

H.264 test sequences.

2012-01-05

h.264 decoder demo

h.264 decoder demo

2012-01-05

H.264 rate-distortion optimization

H.264 rate-distortion optimization.

2010-07-19

MobaXterm.rar

MobaXterm

2020-03-20

Playing PCM audio on Android

Playing PCM audio on Android.

2017-04-21

Android video tool

Android video tool.

2017-03-29

Complete demo of ffmpeg DXVA GPU decoding

A complete demo of GPU decoding via ffmpeg and DXVA; compiles and runs out of the box after download.

2016-08-31

ffmpeg demo 2016

ffmpeg demo 2016

2016-08-16

x264 diary

x264 blog: the x264 author's blog.

2016-05-18

H.264 decoder source extracted from ffmpeg (with build environment), part 3

C:\MinGW\msys\1.0\home\Administrator\h264

2016-04-14

H.264 decoder source extracted from ffmpeg (with build environment), part 2

C:\MinGW\msys\1.0\home\Administrator\h264

2016-04-14

H.264 decoder source extracted from ffmpeg (with build environment)

C:\MinGW\msys\1.0\home\Administrator\h264

2016-04-14

MP4 inspection tool QTAtomViewer.exe

MP4 inspection tool QTAtomViewer.exe.

2014-04-18

COM (ActiveX): passing data with custom types

Passing data with custom types in COM/ActiveX.

2014-04-08

Aibao (computer time-limit software)

A small program I wrote to limit my child's computer use. It enforces a break of a few minutes after each configurable interval of use, to protect children's eyes.

2013-08-29

directshow msdn

DirectShow MSDN help / user manual.

2013-08-28

MPEG-PS stream muxing and demuxing

MPEG-PS stream muxing and demuxing.

2013-08-05

iPhone H.264 live encoding (real-time hardware encoding)

Hardware Video Encoding on iPhone — RTSP Server example

On iOS, the only way to use hardware acceleration when encoding video is to use AVAssetWriter, and that means writing the compressed video to file. If you want to stream that video over the network, for example, it needs to be read back out of the file. I've written an example application that demonstrates how to do this, as part of an RTSP server that streams H264 video from the iPhone or iPad camera to remote clients. The end-to-end latency, measured using a low-latency DirectShow client, is under a second. Latency with VLC and QuickTime playback is a few seconds, since these clients buffer somewhat more data at the client side. The whole example app is available in source form here under an attribution license. It's a very basic app, but is fully functional. Build and run the app on an iPhone or iPad, then use Quicktime Player or VLC to play back the URL that is displayed in the app.

Details, Details

When the compressed video data is written to a MOV or MP4 file, it is written to an mdat atom and indexed in the moov atom. However, the moov atom is not written out until the file is closed, and without that index, the data in mdat is not easily accessible. There are no boundary markers or sub-atoms, just raw elementary stream. Moreover, the data in the mdat cannot be extracted or used without the data from the moov atom (specifically the lengthSize and SPS and PPS param sets). My example code takes the following approach to this problem:

Only video is written using the AVAssetWriter instance, or it would be impossible to distinguish video from audio in the mdat atom. Initially, I create two AVAssetWriter instances. The first frame is written to both, and then one instance is closed. Once the moov atom has been written to that file, I parse the file and assume that the parameters apply to both instances, since the initial conditions were the same. Once I have the parameters, I use a dispatch_source object to trigger reads from the file whenever new data is written.

The body of the mdat chunk consists of H264 NALUs, each preceded by a length field. Although the length of the mdat chunk is not known, we can safely assume that it will continue to the end of the file (until we finish the output file and the moov is added). For RTP delivery of the data, we group the NALUs into frames by parsing the NALU headers. Since there are no AUDs marking the frame boundaries, this requires looking at several different elements of the NALU header. Timestamps arrive with the uncompressed frames from the camera and are stored in a FIFO. These timestamps are applied to the compressed frames in the same order. Fortunately, the AVAssetWriter live encoder does not require re-ordering of frames.

When the file gets too large, a new instance of AVAssetWriter is used, so that the old temporary file can be deleted. Transition code must then wait for the old instance to be closed so that the remaining NALUs can be read from the mdat atom without reading past the end of that atom into the subsequent metadata. Finally, the new file is opened and timestamps are adjusted. The resulting compressed output is seamless. A little experimentation suggests that we are able to read compressed frames from file about 500ms or so after they are captured, and these frames then arrive around 200ms after that at the client app.

Rotation

For modern graphics hardware, it is very straightforward to rotate an image when displaying it, and this is the method used by AVFoundation to handle rotation of the camera. The buffers are captured, encoded and written to file in landscape orientation. If the device is rotated to portrait mode, a transform matrix is written out to the file to indicate that the video should be rotated for playback. At the same time, the preview layer is also rotated to match the device orientation.

This is efficient and works in most cases. However, there isn't a way to pass this transform matrix to an RTP client, so the view on a remote player will not match the preview on the device if it is rotated away from the base camera orientation. The solution is to rotate the pixel buffers after receiving them from the capture output and before delivering them to the encoder. There is a cost to this processing, and this example code does not include this extra step.
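The mdat parsing step described above (length-prefixed NALUs, with a possibly truncated tail while the file is still being written) can be sketched as follows, assuming the common 4-byte lengthSize; in the real app, lengthSize comes from the moov atom:

```python
import struct

def split_nalus(mdat_body, length_size=4):
    """Split the body of an mdat chunk into length-prefixed H.264 NALUs.

    Each NALU is preceded by a big-endian length field of length_size
    bytes (4 assumed here). A truncated tail, i.e. data still being
    written, is left unconsumed rather than returned as a partial NALU.
    """
    nalus, pos = [], 0
    while pos + length_size <= len(mdat_body):
        prefix = mdat_body[pos:pos + length_size].rjust(4, b"\x00")
        (n,) = struct.unpack(">I", prefix)
        pos += length_size
        if pos + n > len(mdat_body):
            break  # truncated tail: the file is still being written
        nalus.append(mdat_body[pos:pos + n])
        pos += n
    return nalus
```

A follow-up pass over the NALU headers (nal_unit_type in the first byte after the prefix) is what groups these units into frames for RTP, as the text explains.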

2013-05-23

H.264 decoder source code extracted from ffmpeg

It took a week to extract this from ffmpeg. I had meant to study H.264 decoding, but then shelved the project. May it be of use to whoever finds it.

2013-03-04

SSE4 intel pdf

SSE4 intel pdf

2012-11-01
