libyuv needs little introduction: it is Google's open-source library for processing YUV data. Having heard it performs better than ffmpeg, I tried replacing ffmpeg's scale functionality with it, and in my tests it really is faster.
Building the libyuv library
Download
git clone https://chromium.googlesource.com/libyuv/libyuv
If your connection to Google's servers is slow, you can import a mirror repository on GitHub and clone from there (importing into Gitee is another option, though it didn't work when I tried).
Building with ndk-build
Modify the Android.mk file, mainly commenting out the libjpeg-related parts; otherwise the build fails with a libjpeg-not-found error. A reference version follows:
# This is the Android makefile for libyuv for NDK.
LOCAL_PATH:= $(call my-dir)
include $(CLEAR_VARS)
LOCAL_CPP_EXTENSION := .cc
LOCAL_SRC_FILES := \
source/compare.cc \
source/compare_common.cc \
source/compare_gcc.cc \
source/compare_mmi.cc \
source/compare_msa.cc \
source/compare_neon.cc \
source/compare_neon64.cc \
source/compare_win.cc \
source/convert.cc \
source/convert_argb.cc \
source/convert_from.cc \
source/convert_from_argb.cc \
source/convert_to_argb.cc \
source/convert_to_i420.cc \
source/cpu_id.cc \
source/planar_functions.cc \
source/rotate.cc \
source/rotate_any.cc \
source/rotate_argb.cc \
source/rotate_common.cc \
source/rotate_gcc.cc \
source/rotate_mmi.cc \
source/rotate_msa.cc \
source/rotate_neon.cc \
source/rotate_neon64.cc \
source/rotate_win.cc \
source/row_any.cc \
source/row_common.cc \
source/row_gcc.cc \
source/row_mmi.cc \
source/row_msa.cc \
source/row_neon.cc \
source/row_neon64.cc \
source/row_win.cc \
source/scale.cc \
source/scale_any.cc \
source/scale_argb.cc \
source/scale_common.cc \
source/scale_gcc.cc \
source/scale_mmi.cc \
source/scale_msa.cc \
source/scale_neon.cc \
source/scale_neon64.cc \
source/scale_uv.cc \
source/scale_win.cc \
source/video_common.cc
common_CFLAGS := -Wall -fexceptions
# ifneq ($(LIBYUV_DISABLE_JPEG), "yes")
# LOCAL_SRC_FILES += \
# source/convert_jpeg.cc \
# source/mjpeg_decoder.cc \
# source/mjpeg_validate.cc
# common_CFLAGS += -DHAVE_JPEG
# LOCAL_SHARED_LIBRARIES := libjpeg
# endif
LOCAL_CFLAGS += $(common_CFLAGS)
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/include
LOCAL_C_INCLUDES += $(LOCAL_PATH)/include
LOCAL_EXPORT_C_INCLUDE_DIRS := $(LOCAL_PATH)/include
LOCAL_MODULE := libyuv_static
LOCAL_MODULE_TAGS := optional
include $(BUILD_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_WHOLE_STATIC_LIBRARIES := libyuv_static
LOCAL_MODULE := libyuv
# ifneq ($(LIBYUV_DISABLE_JPEG), "yes")
# LOCAL_SHARED_LIBRARIES := libjpeg
# endif
include $(BUILD_SHARED_LIBRARY)
# include $(CLEAR_VARS)
# LOCAL_STATIC_LIBRARIES := libyuv_static
# LOCAL_SHARED_LIBRARIES := libjpeg
# LOCAL_MODULE_TAGS := tests
# LOCAL_CPP_EXTENSION := .cc
# LOCAL_C_INCLUDES += $(LOCAL_PATH)/include
# LOCAL_SRC_FILES := \
# unit_test/basictypes_test.cc \
# unit_test/color_test.cc \
# unit_test/compare_test.cc \
# unit_test/convert_test.cc \
# unit_test/cpu_test.cc \
# unit_test/cpu_thread_test.cc \
# unit_test/math_test.cc \
# unit_test/planar_test.cc \
# unit_test/rotate_argb_test.cc \
# unit_test/rotate_test.cc \
# unit_test/scale_argb_test.cc \
# unit_test/scale_test.cc \
# unit_test/scale_uv_test.cc \
# unit_test/unit_test.cc \
# unit_test/video_common_test.cc
# LOCAL_MODULE := libyuv_unittest
# include $(BUILD_NATIVE_TEST)
Then create an Application.mk file with the following content, trimming or adjusting it to your needs:
APP_ABI := arm64-v8a armeabi-v7a
APP_PLATFORM := android-21
APP_STL := c++_static
APP_CPPFLAGS += -fno-rtti
If you rename the root directory containing Android.mk and Application.mk to jni, you can run ndk-build directly from jni's parent directory; otherwise you need to point it at the project explicitly, for example:
$ANDROID_NDK/ndk-build NDK_PROJECT_PATH=. APP_BUILD_SCRIPT=./Android.mk NDK_APPLICATION_MK=./Application.mk -j10
After the build finishes, libs and obj directories are generated in the directory where ndk-build was run; the .so files are under libs.
Importing the .so library in Android Studio
The process is much like ffmpeg's: copy the libyuv folder and libyuv.h from libyuv's include directory into the project's include folder, copy the libyuv.so for each target ABI over as well, then modify CMakeLists.txt, for example:
add_library(yuv
        SHARED
        IMPORTED)
set_target_properties(yuv
        PROPERTIES IMPORTED_LOCATION
        ${DIR}/${CMAKE_ANDROID_ARCH_ABI}/libyuv.so)
target_link_libraries( # Specifies the target library.
        native-lib
        avcodec
        avdevice
        avfilter
        avformat
        avutil
        postproc
        swresample
        swscale
        android
        OpenSLES
        # Links the target library to the log library
        # included in the NDK.
        ${log-lib}
        yuv)
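If the compiler cannot find libyuv's headers after copying them over, the include directory also needs to be on the target's include path. A minimal sketch, assuming the headers were copied to an include folder at the CMake source root (adjust the path to your layout):

```cmake
# Path is an assumption about your project layout -- adjust as needed.
target_include_directories(native-lib PRIVATE ${CMAKE_SOURCE_DIR}/include)
```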
After that, the libyuv APIs can be used normally in the project.
YUV to RGBA
Note that libyuv's channel naming is the reverse of what Android's nativeWindow expects, much like byte endianness: libyuv's ABGR corresponds to nativeWindow's RGBA. Sample code follows:
if (matchYuv(frame->format)) {
    yuvToARGB(frame, rgb_frame->data[0]);
} else {
    sws_scale(swsContext, frame->data, frame->linesize, 0, frame->height,
              rgb_frame->data, rgb_frame->linesize);
}

// Converts planar YUV to the RGBA byte order that ANativeWindow expects.
// libyuv's *ToABGR functions write the bytes R, G, B, A in memory.
void yuvToARGB(AVFrame *sourceAVFrame, uint8_t *dst_rgba) {
    switch (sourceAVFrame->format) {
        case AV_PIX_FMT_YUV420P:
            libyuv::I420ToABGR(sourceAVFrame->data[0], sourceAVFrame->linesize[0],
                               sourceAVFrame->data[1], sourceAVFrame->linesize[1],
                               sourceAVFrame->data[2], sourceAVFrame->linesize[2],
                               dst_rgba, sourceAVFrame->width * 4,
                               sourceAVFrame->width,
                               sourceAVFrame->height);
            break;
        case AV_PIX_FMT_YUV422P:
            libyuv::I422ToABGR(sourceAVFrame->data[0], sourceAVFrame->linesize[0],
                               sourceAVFrame->data[1], sourceAVFrame->linesize[1],
                               sourceAVFrame->data[2], sourceAVFrame->linesize[2],
                               dst_rgba, sourceAVFrame->width * 4,
                               sourceAVFrame->width,
                               sourceAVFrame->height);
            break;
        case AV_PIX_FMT_YUV444P:
            libyuv::I444ToABGR(sourceAVFrame->data[0], sourceAVFrame->linesize[0],
                               sourceAVFrame->data[1], sourceAVFrame->linesize[1],
                               sourceAVFrame->data[2], sourceAVFrame->linesize[2],
                               dst_rgba, sourceAVFrame->width * 4,
                               sourceAVFrame->width,
                               sourceAVFrame->height);
            break;
        default:
            break;
    }
}
In an actual comparison, ffmpeg's sws_scale took around 16 ms, while libyuv took around 3 ms.