# FFmpeg in Android - Tutorial 2: Outputting to the Screen


### SDL and Video

To draw to the screen, we’re going to use SDL. SDL stands for Simple DirectMedia Layer; it is an excellent cross-platform multimedia library used in many projects. You can get the library from the official website, or you can download the development package for your operating system if there is one. You’ll need the libraries to compile the code for this tutorial (and for the rest of them, too).

SDL has many methods for drawing images to the screen, and it has one in particular that is meant for displaying movies on the screen - what it calls a YUV overlay.

*A note: there is a great deal of annoyance from some people at the convention of calling "YCbCr" "YUV". Generally speaking, YUV is an analog format and YCbCr is a digital format; ffmpeg and SDL both refer to YCbCr as YUV in their code and macros.*

YUV (technically not YUV but YCbCr) is a way of storing raw image data, like RGB. Roughly speaking, Y is the brightness (or "luma") component, and U and V are the color components. (It's more complicated than RGB because some of the color information is discarded, and you might have only 1 U and V sample for every 2 Y samples.) SDL's YUV overlay takes in a raw array of YUV data and displays it. It accepts 4 different kinds of YUV formats, but YV12 is the fastest. There is another YUV format called YUV420P that is the same as YV12, except the U and V arrays are switched. The 420 means it is subsampled at a ratio of 4:2:0, which basically means there is 1 color sample for every 4 luma samples, so the color information is quartered. This is a good way of saving bandwidth, as the human eye does not perceive the change. The "P" in the name means that the format is "planar" - simply meaning that the Y, U, and V components are in separate arrays. ffmpeg can convert images to YUV420P, with the added bonus that many video streams are already in that format, or are easily converted to it.

So our current plan is to replace the ppm_save() function from Tutorial 1 and instead output our frame to the screen. But first we have to see how to use the SDL library. We start by including the header and initializing SDL:

    #include <SDL2/SDL.h>

    if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
        fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
        exit(1);
    }


SDL_Init() essentially tells the library what features we’re going to use. SDL_GetError(), of course, is a handy debugging function.

### Creating a Display

Now we need a place on the screen to put stuff. The basic area for displaying images with SDL is called an SDL_Window:

    // SDL 2.0 supports multiple windows
    screen = SDL_CreateWindow("Simplest ffmpeg player's Window",
                              SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
                              screen_w, screen_h, SDL_WINDOW_OPENGL);
    if (!screen) {
        printf("SDL: could not create window - exiting: %s\n", SDL_GetError());
        return -1;
    }


This creates a window with the given width and height. The SDL_WINDOWPOS_UNDEFINED arguments let SDL choose where to place the window, and the last argument is a set of window flags. (The SDL 1.x version of this tutorial used SDL_SetVideoMode with a bit depth argument; in SDL 2.0 the window, the renderer, and the texture are separate objects.) Next we create a renderer for the window, plus a streaming texture in the IYUV pixel format - SDL's name for YUV420P - with the same dimensions as the decoded video:

    sdlRenderer = SDL_CreateRenderer(screen, -1, SDL_RENDERER_ACCELERATED);

    sdlTexture = SDL_CreateTexture(sdlRenderer, SDL_PIXELFORMAT_IYUV,
                                   SDL_TEXTUREACCESS_STREAMING,
                                   video_dec_ctx->width, video_dec_ctx->height);


### Displaying the Image

Well, that was simple enough! Now we just need to display the image. Let's go all the way down to where we had our finished frame. We can get rid of all the stuff we had for the RGB frame, and we replace ppm_save() with our display code.

    sws_scale(sws_ctx, (const uint8_t * const *)frame->data, frame->linesize,
              0, video_dec_ctx->height, video_dst_data, video_dst_linesize);

    SDL_UpdateYUVTexture(sdlTexture, &sdlRect,
                         video_dst_data[0], video_dst_linesize[0],
                         video_dst_data[1], video_dst_linesize[1],
                         video_dst_data[2], video_dst_linesize[2]);

    SDL_RenderClear(sdlRenderer);
    SDL_RenderCopy(sdlRenderer, sdlTexture, NULL, &sdlRect);
    SDL_RenderPresent(sdlRenderer);


Now our video is displayed!

Let's take this time to show you another feature of SDL: its event system. SDL is set up so that when you type, move the mouse in the SDL application, or send it a signal, it generates an event. Your program then checks for these events if it wants to handle user input. Your program can also make up events to send to the SDL event system; this is especially useful for multithreaded programming with SDL, which we'll see in Tutorial 4. In our program, we're going to poll for events right after we finish processing a packet. For now, we'll just handle the SDL_QUIT event so we can exit:

    SDL_Event event;

    av_free_packet(&packet);  /* deprecated in newer FFmpeg; use av_packet_unref() */
    if (SDL_PollEvent(&event)) {  /* only inspect event.type if an event was pending */
        switch (event.type) {
        case SDL_QUIT:
            SDL_Quit();
            exit(0);
            break;
        default:
            break;
        }
    }


And there we go! Get rid of all the old cruft, and you’re ready to compile.

    g++ -std=c++14 -o tutorial03 tutorial03.cpp -I/INCLUDE_PATH -L/LIB_PATH -lavutil -lavformat -lavcodec -lswscale -lswresample -lavdevice -lSDL2 -lz -lm -lpthread -ldl
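
If your FFmpeg and SDL2 builds ship pkg-config files (most distribution packages do), you can let pkg-config supply the include and library flags instead of hard-coding paths. A sketch, assuming the standard module names:

```shell
# Let pkg-config emit -I/-L/-l flags for the libraries we actually use.
g++ -std=c++14 -o tutorial03 tutorial03.cpp \
    $(pkg-config --cflags --libs libavformat libavcodec libswscale libavutil sdl2)
```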
