The Android MediaPlayer audio/video playback flow

To play a local audio/video file or a network stream, an apk typically runs code like the following (this article uses a network video stream as the example):

MediaPlayer mp = new MediaPlayer();                   // (1) create a player

mp.setDataSource("rtsp://10.0.149.217:554/stream1");  // (2) specify the path or URL

mp.prepare();                                         // (3)

mp.start();                                           // (4)

These are four statements in total; this article focuses on (1) and (2), tracing how the calls descend layer by layer into the multimedia framework.

First, the call flow of (1):

public MediaPlayer() {
        Looper looper;
        if ((looper = Looper.myLooper()) != null) {
            mEventHandler = new EventHandler(this, looper); // EventHandler receives messages posted from the native layer
        } else if ((looper = Looper.getMainLooper()) != null) {
            mEventHandler = new EventHandler(this, looper);
        } else {
            mEventHandler = null;
        }


        mTimeProvider = new TimeProvider(this);
        mOutOfBandSubtitleTracks = new Vector<SubtitleTrack>();
        mOpenSubtitleSources = new Vector<InputStream>();
        mInbandSubtitleTracks = new SubtitleTrack[0];


        /* Native setup requires a weak reference to our object.
         * It's easier to create it here than in C++.
         */
        native_setup(new WeakReference<MediaPlayer>(this));
    }
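The comment in the constructor explains why a WeakReference is handed to native_setup(): the native layer must be able to post events without itself keeping the Java MediaPlayer alive. The same idea can be sketched with std::weak_ptr (Listener and NativePlayer are illustrative names for this sketch, not the Android classes):

```cpp
#include <cassert>
#include <memory>
#include <string>

// Illustrative stand-in for the Java-side object that receives events.
struct Listener {
    std::string lastEvent;
    void onEvent(const std::string& e) { lastEvent = e; }
};

// Illustrative stand-in for the native player: it holds only a weak
// reference, so it never keeps the listener alive by itself, which is
// exactly the role the Java WeakReference plays for native_setup().
struct NativePlayer {
    std::weak_ptr<Listener> listener;

    // Deliver an event only if the listener still exists.
    bool notify(const std::string& e) {
        if (auto l = listener.lock()) {
            l->onEvent(e);
            return true;
        }
        return false;  // listener already destroyed: drop the event safely
    }
};
```

If the weak reference were strong, the native player and the Java object would keep each other alive and neither could ever be released.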


The JNI implementation of native_setup() is android_media_MediaPlayer_native_setup(), defined in android_media_MediaPlayer.cpp:

static void
android_media_MediaPlayer_native_setup(JNIEnv *env, jobject thiz, jobject weak_this)
{
    ALOGV("native_setup");
    sp<MediaPlayer> mp = new MediaPlayer(); // create a native-layer MediaPlayer object
    if (mp == NULL) {
        jniThrowException(env, "java/lang/RuntimeException", "Out of memory");
        return;
    }


    // create new listener and give it to MediaPlayer
    sp<JNIMediaPlayerListener> listener = new JNIMediaPlayerListener(env, thiz, weak_this);
    mp->setListener(listener);


    // Stow our new C++ MediaPlayer in an opaque field in the Java object.
    setMediaPlayer(env, thiz, mp);
}


static sp<MediaPlayer> setMediaPlayer(JNIEnv* env, jobject thiz, const sp<MediaPlayer>& player)
{
    Mutex::Autolock l(sLock);
    sp<MediaPlayer> old = (MediaPlayer*)env->GetIntField(thiz, fields.context);
    if (player.get()) {
        player->incStrong((void*)setMediaPlayer);
    }
    if (old != 0) {
        old->decStrong((void*)setMediaPlayer);
    }


    env->SetIntField(thiz, fields.context, (int)player.get()); // stash the native MediaPlayer pointer in the Java object's fields.context field
    return old;
}
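setMediaPlayer() keeps the native object alive across JNI calls by taking a strong reference on the new player before releasing the old one, then storing the raw pointer in the Java field. A minimal sketch of the same ownership dance (RefCounted, stow and gContextField are illustrative names; Android's RefBase does the real counting):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-in for a RefBase-style reference-counted object.
struct RefCounted {
    int strong = 0;
    void incStrong() { ++strong; }
    void decStrong() { --strong; }
};

// Analogue of the Java-side fields.context slot.
static uintptr_t gContextField = 0;

// Mirrors setMediaPlayer(): take a strong reference on the new player,
// release the old one, then store the raw pointer in the field.
RefCounted* stow(RefCounted* p) {
    RefCounted* old = reinterpret_cast<RefCounted*>(gContextField);
    if (p) p->incStrong();      // the Java object now owns a reference
    if (old) old->decStrong();  // release the previously stored player
    gContextField = reinterpret_cast<uintptr_t>(p);
    return old;
}
```

Note that casting the pointer to a 32-bit int, as this version of the code does, only works on 32-bit builds; later Android versions keep this pointer in a long field instead.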


Now the flow of (2):

The Java-layer MediaPlayer.setDataSource() is overloaded several times, differing mainly in how the media path is supplied, but each overload ends up in a native _setDataSource() method. The file-descriptor variant's JNI implementation is:

static void
android_media_MediaPlayer_setDataSourceFD(JNIEnv *env, jobject thiz, jobject fileDescriptor, jlong offset, jlong length)
{
    sp<MediaPlayer> mp = getMediaPlayer(env, thiz); 
    if (mp == NULL ) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return;
    }


    if (fileDescriptor == NULL) {
        jniThrowException(env, "java/lang/IllegalArgumentException", NULL);
        return;
    }
    int fd = jniGetFDFromFileDescriptor(env, fileDescriptor);
    ALOGV("setDataSourceFD: fd %d", fd);
    process_media_player_call( env, thiz, mp->setDataSource(fd, offset, length), "java/io/IOException", "setDataSourceFD failed." );
}

getMediaPlayer() is implemented as follows:

static sp<MediaPlayer> getMediaPlayer(JNIEnv* env, jobject thiz)
{
    Mutex::Autolock l(sLock);
    MediaPlayer* const p = (MediaPlayer*)env->GetIntField(thiz, fields.context); // fetch the MediaPlayer pointer stored earlier in fields.context
    return sp<MediaPlayer>(p);
}

process_media_player_call(JNIEnv *env, jobject thiz, status_t opStatus, const char* exception, const char *message) decides from its third argument whether to throw the given exception back to the Java layer. Our focus is that third argument, mp->setDataSource(fd, offset, length), where mp is the native MediaPlayer object. MediaPlayer::setDataSource() has three overloads with essentially the same flow; we follow the network-playback path.

status_t MediaPlayer::setDataSource(
        const char *url, const KeyedVector<String8, String8> *headers)
{
    ALOGV("setDataSource(%s)", url);
    status_t err = BAD_VALUE;
    if (url != NULL) {
        const sp<IMediaPlayerService>& service(getMediaPlayerService());//(1)
        if (service != 0) {
            sp<IMediaPlayer> player(service->create(this, mAudioSessionId));//(2)
            if ((NO_ERROR != doSetRetransmitEndpoint(player)) ||
                (NO_ERROR != player->setDataSource(url, headers))) { //(3)
                player.clear();
            }


            //* save properties before creating the real player
            if (player != 0) {
                player->setSubColor(mSubColor);
                player->setSubFrameColor(mSubFrameColor);
                player->setSubPosition(mSubPosition);
                player->setSubDelay(mSubDelay);
                player->setSubFontSize(mSubFontSize);
                player->setSubCharset(mSubCharset);
                player->switchSub(mSubIndex);
                player->switchTrack(mTrackIndex);
                player->setChannelMuteMode(mMuteMode);
            }
            err = attachNewPlayer(player);
        }
    }
    return err;
}

In (1) above, getMediaPlayerService() returns BpMediaPlayerService, the client-side proxy object for MediaPlayerService. In (2), sp<IMediaPlayer> player(service->create(this, mAudioSessionId)) invokes BpMediaPlayerService::create(), whose server-side implementation is MediaPlayerService::create():

sp<IMediaPlayer> MediaPlayerService::create(const sp<IMediaPlayerClient>& client,
        int audioSessionId)
{
    pid_t pid = IPCThreadState::self()->getCallingPid();
    int32_t connId = android_atomic_inc(&mNextConnId);


    sp<Client> c = new Client(
            this, pid, connId, client, audioSessionId,
            IPCThreadState::self()->getCallingUid());


    ALOGV("Create new client(%d) from pid %d, uid %d, ", connId, pid,
         IPCThreadState::self()->getCallingUid());

    c->setScreen(mScreen);

    wp<Client> w = c;
    {
        Mutex::Autolock lock(mLock);
        mClients.add(w);
    }
    return c;
}

As the code shows, the method returns a Client instance. Client inherits from BnMediaPlayer, so it is effectively an anonymous Binder entity (readers interested in Binder internals should consult other material); the mechanism is similar to that of the Client object created in SurfaceFlinger.
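The Bp/Bn split means the application never holds the service directly; it talks to a proxy that forwards each call to the server-side implementation. A minimal, Binder-free sketch of that shape (the names are illustrative; a real proxy marshals the call through the Binder driver rather than calling in-process):

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Illustrative interface, standing in for IMediaPlayerService.
struct IPlayerService {
    virtual ~IPlayerService() = default;
    virtual int create(int audioSessionId) = 0;  // returns a connection id
};

// Server-side implementation: the role of MediaPlayerService
// (which sits behind the Bn, "binder native", side).
struct PlayerService : IPlayerService {
    int nextConnId = 0;
    int create(int /*audioSessionId*/) override { return ++nextConnId; }
};

// Client-side proxy: the role of BpMediaPlayerService ("binder proxy").
// This sketch simply forwards the call in-process.
struct BpPlayerService : IPlayerService {
    std::shared_ptr<IPlayerService> remote;  // stands in for the binder handle
    explicit BpPlayerService(std::shared_ptr<IPlayerService> r) : remote(std::move(r)) {}
    int create(int audioSessionId) override { return remote->create(audioSessionId); }
};
```

Because proxy and implementation share one interface, the caller's code is identical whether the service lives in the same process or across Binder.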

The Client constructor, shown below, merely initializes the class members:

MediaPlayerService::Client::Client(
        const sp<MediaPlayerService>& service, pid_t pid,
        int32_t connId, const sp<IMediaPlayerClient>& client,
        int audioSessionId, uid_t uid)
{
    ALOGV("Client(%d) constructor", connId);
    mPid = pid;
    mConnId = connId;
    mService = service;
    mClient = client;
    mLoop = false;
    mStatus = NO_INIT;
    mAudioSessionId = audioSessionId;
    mUID = uid;
    mRetransmitEndpointValid = false;


#if CALLBACK_ANTAGONIZER
    ALOGD("create Antagonizer");
    mAntagonizer = new Antagonizer(notify, this);
#endif

}

Given the analysis above, step (3) in MediaPlayer::setDataSource() actually calls Client::setDataSource(), implemented as follows:

status_t MediaPlayerService::Client::setDataSource(
        const char *url, const KeyedVector<String8, String8> *headers)
{
    ALOGV("setDataSource(%s)", url);
    if (url == NULL)
        return UNKNOWN_ERROR;


    // Note: if the stream is delivered over the network, the apk must request the
    // android.permission.INTERNET permission in its AndroidManifest.xml, otherwise playback cannot succeed.
    if ((strncmp(url, "http://", 7) == 0) ||
        (strncmp(url, "https://", 8) == 0) ||
        (strncmp(url, "rtsp://", 7) == 0)) {
        if (!checkPermission("android.permission.INTERNET")) {
            return PERMISSION_DENIED;
        }
    }


    if (strncmp(url, "content://", 10) == 0) { // URLs stored via a content provider take this branch
        // get a filedescriptor for the content Uri and
        // pass it to the setDataSource(fd) method

        String16 url16(url);
        int fd = android::openContentProviderFile(url16);
        if (fd < 0)
        {
            ALOGE("Couldn't open fd for %s", url);
            return UNKNOWN_ERROR;
        }
        setDataSource(fd, 0, 0x7fffffffffLL); // this sets mStatus
        close(fd);
        return mStatus;
    } else {
        // network playback takes this branch
        player_type playerType = MediaPlayerFactory::getPlayerType(this, url);//(1)
        sp<MediaPlayerBase> p = setDataSource_pre(playerType);//(2)

        if (p == NULL) {
            return NO_INIT;
        }
        
        //* save properties before creating the real player 
        p->setSubGate(mSubGate);
        p->setSubColor(mSubColor);
        p->setSubFrameColor(mSubFrameColor);
        p->setSubPosition(mSubPosition);
        p->setSubDelay(mSubDelay);
        p->setSubFontSize(mSubFontSize);
        p->setSubCharset(mSubCharset);
        p->switchSub(mSubIndex);
        p->switchTrack(mTrackIndex);
        p->setChannelMuteMode(mMuteMode); // 2012-03-07, set audio channel mute

      
        setDataSource_post(p, p->setDataSource(url, headers));
        return mStatus;
    }
}
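The scheme test at the top of Client::setDataSource() is plain prefix matching on the URL. A self-contained sketch of that check (requiresInternet is an illustrative name, not an Android function):

```cpp
#include <cassert>
#include <cstring>

// Mirrors the strncmp tests in Client::setDataSource(): these network
// schemes require the android.permission.INTERNET permission.
bool requiresInternet(const char* url) {
    return strncmp(url, "http://", 7) == 0 ||
           strncmp(url, "https://", 8) == 0 ||
           strncmp(url, "rtsp://", 7) == 0;
}
```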

We now focus on statements (1) and (2) in the function above. MediaPlayerFactory::getPlayerType() is implemented as follows:

player_type MediaPlayerFactory::getPlayerType(const sp<IMediaPlayer>& client,
                                              const char* url) {
    //GET_PLAYER_TYPE_IMPL(client, url);
    ALOGV("MediaPlayerFactory::getPlayerType: url = %s", url);
    return android::getPlayerType_l(url);
}

It simply forwards to android::getPlayerType_l(url), which is implemented as follows:

player_type getPlayerType_l(const char* url)
{

    // some preliminary checks omitted

    int lenURL = strlen(url);
    int len;
    int start;
    for (int i = 0; i < NELEM(FILE_EXTS); ++i) {
        len = strlen(FILE_EXTS[i].extension);
        start = lenURL - len;
        if (start > 0) {
            if (!strncasecmp(url + start, FILE_EXTS[i].extension, len)) {
                return FILE_EXTS[i].playertype;
            }
        }
    }


    //MP4 AUDIO ONLY DETECT
    if (strstr(url, "://") == NULL) {
        for (int i = 0; i < NELEM(MP4A_FILE_EXTS); ++i) {
            len = strlen(MP4A_FILE_EXTS[i].extension);
            start = lenURL - len;
            if (start > 0) {
                if (!strncasecmp(url + start, MP4A_FILE_EXTS[i].extension, len)) {
                    if (MovAudioOnlyDetect0(url))
                        return STAGEFRIGHT_PLAYER;
                }
            }
        }
    }
    
    return CEDARX_PLAYER;
}

The two for loops walk the arrays FILE_EXTS and MP4A_FILE_EXTS, which are defined as follows:

extmap FILE_EXTS [] =  {
        {".ogg",  STAGEFRIGHT_PLAYER},
        {".mp3",  STAGEFRIGHT_PLAYER},
        {".wav",  STAGEFRIGHT_PLAYER},
        {".flac", STAGEFRIGHT_PLAYER},
        {".amr",  STAGEFRIGHT_PLAYER},
        {".m4a",  STAGEFRIGHT_PLAYER},
        {".m4r",  STAGEFRIGHT_PLAYER},
        {".out",  CEDARX_PLAYER},
        //{".3gp",  STAGEFRIGHT_PLAYER},
        //{".aac",  STAGEFRIGHT_PLAYER},

        {".mid",  SONIVOX_PLAYER},
        {".midi", SONIVOX_PLAYER},
        {".smf",  SONIVOX_PLAYER},
        {".xmf",  SONIVOX_PLAYER},
        {".mxmf", SONIVOX_PLAYER},
        {".imy",  SONIVOX_PLAYER},
        {".rtttl",SONIVOX_PLAYER},
        {".rtx",  SONIVOX_PLAYER},
        {".ota",  SONIVOX_PLAYER},

        {".ape",  CEDARA_PLAYER},
        {".ac3",  CEDARA_PLAYER},
        {".dts",  CEDARA_PLAYER},
        {".wma",  CEDARA_PLAYER},
        {".aac",  CEDARA_PLAYER},
        {".mp2",  CEDARA_PLAYER},
        {".mp1",  CEDARA_PLAYER},
        //{".wav",  CEDARA_PLAYER},
        //{".flac", CEDARA_PLAYER},
};


extmap MP4A_FILE_EXTS [] =  {
        {".m4a",  CEDARX_PLAYER},
        {".m4r",  CEDARX_PLAYER},
        {".3gpp", CEDARX_PLAYER},
};

These arrays simply map audio/video file extensions to player types, so supporting a new file type, or changing which player handles an existing one, is just a matter of editing these two arrays.

If neither array produces a match, getPlayerType_l() returns CEDARX_PLAYER. Since the stream played here is a video URL with no recognizable extension, CEDARX_PLAYER is returned.
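The selection logic in getPlayerType_l() boils down to a case-insensitive suffix match with a CEDARX_PLAYER fallback. A condensed, self-contained sketch of it (the table reuses a few FILE_EXTS entries; getPlayerTypeForUrl is an illustrative name):

```cpp
#include <cassert>
#include <cstring>
#include <strings.h>  // strncasecmp (POSIX)

enum player_type { STAGEFRIGHT_PLAYER, SONIVOX_PLAYER, CEDARA_PLAYER, CEDARX_PLAYER };

struct extmap { const char* extension; player_type playertype; };

// A few representative entries from FILE_EXTS.
static const extmap kFileExts[] = {
    { ".mp3", STAGEFRIGHT_PLAYER },
    { ".mid", SONIVOX_PLAYER },
    { ".ape", CEDARA_PLAYER },
};

// Same shape as the loop in getPlayerType_l(): compare the tail of the
// URL against each known extension, ignoring case, and fall back to
// CEDARX_PLAYER when nothing matches.
player_type getPlayerTypeForUrl(const char* url) {
    const int lenURL = static_cast<int>(strlen(url));
    for (const extmap& e : kFileExts) {
        const int len = static_cast<int>(strlen(e.extension));
        const int start = lenURL - len;
        if (start > 0 && !strncasecmp(url + start, e.extension, len))
            return e.playertype;
    }
    return CEDARX_PLAYER;
}
```

An rtsp:// URL carries no extension, so it falls through every table entry and lands on the default, exactly as described above.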

Next, the implementation of (2), sp<MediaPlayerBase> p = setDataSource_pre(playerType):

sp<MediaPlayerBase> MediaPlayerService::Client::setDataSource_pre(
        player_type playerType)
{
    ALOGV("player type = %d", playerType);


    // create the right type of player
    sp<MediaPlayerBase> p = createPlayer(playerType);
    if (p == NULL) {
        return p;
    }


    if (!p->hardwareOutput()) {
        mAudioOutput = new AudioOutput(mAudioSessionId, IPCThreadState::self()->getCallingUid());
        static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput);
    }

    return p;
}

The createPlayer() it calls is implemented as follows:

sp<MediaPlayerBase> MediaPlayerService::Client::createPlayer(player_type playerType)
{
    // determine if we have the right player type
    sp<MediaPlayerBase> p = mPlayer;
    if ((p != NULL) && (p->playerType() != playerType)) {
        ALOGV("delete player");
        p.clear();
    }

    // on the first call p is NULL, so this branch is taken
    if (p == NULL) {
        p = MediaPlayerFactory::createPlayer(playerType, this, notify);
    }


    if (p != NULL) {
        p->setUID(mUID);
    }


    return p;
}


sp<MediaPlayerBase> MediaPlayerFactory::createPlayer(
        player_type playerType,
        void* cookie,
        notify_callback_f notifyFunc) {
    sp<MediaPlayerBase> p;
    IFactory* factory;
    status_t init_result;
    Mutex::Autolock lock_(&sLock);

    /*
     * sFactoryMap is declared as "static tFactoryMap sFactoryMap", where
     * typedef KeyedVector<player_type, IFactory*> tFactoryMap;
     * i.e. it is a key/value container. The lookup below finds the factory
     * registered for the requested player type; if none is registered,
     * player creation fails.
     */
    if (sFactoryMap.indexOfKey(playerType) < 0) {
        ALOGE("Failed to create player object of type %d, no registered"
              " factory", playerType);
        return p;
    }


    factory = sFactoryMap.valueFor(playerType);
    CHECK(NULL != factory);
    p = factory->createPlayer();


    if (p == NULL) {
        ALOGE("Failed to create player object of type %d, create failed",
               playerType);
        return p;
    }


    init_result = p->initCheck();
    if (init_result == NO_ERROR) {
        p->setNotifyCallback(cookie, notifyFunc);
    } else {
        ALOGE("Failed to create player object of type %d, initCheck failed"
              " (res = %d)", playerType, init_result);
        p.clear();
    }


    return p;
}

The sFactoryMap.valueFor(playerType) lookup above must succeed, otherwise no player can be created. So where is sFactoryMap initialized? For brevity, only the key code is shown: the MediaPlayerService constructor calls MediaPlayerFactory::registerBuiltinFactories():

void MediaPlayerFactory::registerBuiltinFactories() {
    Mutex::Autolock lock_(&sLock);


    if (sInitComplete)
        return;
    registerFactory_l(new CedarXPlayerFactory(), CEDARX_PLAYER);
    registerFactory_l(new CedarAPlayerFactory(), CEDARA_PLAYER);
    registerFactory_l(new StagefrightPlayerFactory(), STAGEFRIGHT_PLAYER);
    registerFactory_l(new NuPlayerFactory(), NU_PLAYER);
    registerFactory_l(new SonivoxPlayerFactory(), SONIVOX_PLAYER);
    registerFactory_l(new TestPlayerFactory(), TEST_PLAYER);

    sInitComplete = true;
}

Each registerFactory_l() call above adds one entry to sFactoryMap, and CedarXPlayerFactory, CedarAPlayerFactory and the rest all inherit from MediaPlayerFactory::IFactory; this is a textbook factory-method pattern.

By the analysis above, CEDARX_PLAYER maps to the CedarXPlayerFactory instance, so in p = factory->createPlayer() the factory is a CedarXPlayerFactory, and p is therefore a CedarPlayer instance. The relevant code:


class CedarXPlayerFactory : public MediaPlayerFactory::IFactory {
  public:
    virtual float scoreFactory(const sp<IMediaPlayer>& client,
                               int fd,
                               int64_t offset,
                               int64_t length,
                               float curScore) {


        return 0.0;
    }

    virtual sp<MediaPlayerBase> createPlayer() {
        ALOGV(" create CedarXPlayer");
        return new CedarPlayer();
    }
};
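Putting registerBuiltinFactories() and MediaPlayerFactory::createPlayer() together, the pattern is a key/value map from player_type to factory. A compact sketch using std::map in place of KeyedVector (all the types here are illustrative stand-ins, not the Android classes):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <utility>

enum player_type { CEDARX_PLAYER, STAGEFRIGHT_PLAYER };

// Minimal stand-ins for the Android classes of the same names.
struct MediaPlayerBase {
    virtual ~MediaPlayerBase() = default;
    virtual player_type playerType() const = 0;
};
struct CedarPlayer : MediaPlayerBase {
    player_type playerType() const override { return CEDARX_PLAYER; }
};

// Analogue of MediaPlayerFactory::IFactory.
struct IFactory {
    virtual ~IFactory() = default;
    virtual std::unique_ptr<MediaPlayerBase> createPlayer() = 0;
};
struct CedarXPlayerFactory : IFactory {
    std::unique_ptr<MediaPlayerBase> createPlayer() override {
        return std::unique_ptr<MediaPlayerBase>(new CedarPlayer);
    }
};

// Analogue of sFactoryMap (a KeyedVector<player_type, IFactory*>).
static std::map<player_type, std::unique_ptr<IFactory>> sFactories;

// Analogue of registerFactory_l().
void registerFactory(std::unique_ptr<IFactory> f, player_type t) {
    sFactories[t] = std::move(f);
}

// Analogue of MediaPlayerFactory::createPlayer(): fail when no factory
// has been registered for the requested type.
std::unique_ptr<MediaPlayerBase> createPlayerFor(player_type t) {
    auto it = sFactories.find(t);
    if (it == sFactories.end()) return nullptr;
    return it->second->createPlayer();
}
```

Adding a new player type then means registering one more factory, without touching the creation code.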

Now look at the CedarPlayer constructor:

CedarPlayer::CedarPlayer()
    : mPlayer(new CedarXPlayer) {
    ALOGV("CedarPlayer");

    mPlayer->setListener(this);
}

The CedarPlayer constructor's initializer list creates a CedarXPlayer object and assigns it to CedarPlayer's mPlayer member. In fact, the methods of CedarPlayer are all thin wrappers around the corresponding CedarXPlayer methods; the real implementation lives in CedarXPlayer. This is a typical application of the decorator (wrapper) pattern.
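This wrapper relationship (CedarPlayer around CedarXPlayer, and likewise StagefrightPlayer around AwesomePlayer) is plain forwarding: the outer class creates the engine in its constructor and delegates every public call to it. A minimal sketch of that shape (Engine, Wrapper and the method names are illustrative):

```cpp
#include <cassert>
#include <memory>
#include <string>

// The engine that does the real work (the role of CedarXPlayer here).
struct Engine {
    std::string source;
    bool started = false;
    int setDataSource(const std::string& url) { source = url; return 0; }
    int start() { started = true; return 0; }
};

// Thin wrapper (the role of CedarPlayer): it creates the engine in its
// constructor and forwards every public call to it.
struct Wrapper {
    std::unique_ptr<Engine> mPlayer;
    Wrapper() : mPlayer(new Engine) {}
    int setDataSource(const std::string& url) { return mPlayer->setDataSource(url); }
    int start() { return mPlayer->start(); }
};
```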

To summarize the analysis above:

1. The Java-layer MediaPlayer object calls through JNI into the native-layer MediaPlayer object.

2. The native MediaPlayer's mPlayer member points to a MediaPlayerService::Client object.

3. MediaPlayerService::Client's mPlayer points to a concrete MediaPlayerBase subclass chosen by player type; in this article it points to CedarPlayer.

4. CedarPlayer's mPlayer points to a CedarXPlayer object. The other *Player classes each hold an mPlayer as well; for example, StagefrightPlayer's mPlayer points to AwesomePlayer.

In this way, calls on the Java-layer MediaPlayer ultimately reach the corresponding methods of CedarXPlayer.

