Android (2.3+) Source Code Analysis: MediaPlayer and RTSP

In an earlier post I gave a brief introduction to MediaPlayer. I have recently started digging into this area again, so I am recording my understanding of the code here for quick reference later.

When playing an ordinary file, the URL passed in is a local absolute path; for streaming media, what gets passed in is a network address, for example an HTTP stream whose URL starts with "http://" or an RTSP stream whose URL starts with "rtsp://".

Let's start from the finishSetDataSource_l function in AwesomePlayer.cpp (this is roughly what runs after MediaPlayer has called prepare and the preparatory work begins, e.g. getting both the audio and the video stream ready):

status_t AwesomePlayer::finishSetDataSource_l() {
    sp<DataSource> dataSource;

    bool isM3u8 = false;
    String8 surfix;
    surfix.setTo(mUri.string() + (mUri.size() - 5));
    isM3u8 = !strncasecmp(".m3u8", surfix, 5);
    if (!strncasecmp("http://", mUri.string(), 7) && (!isM3u8)) {
        mConnectingDataSource = new NuHTTPDataSource;

        mLock.unlock();
        status_t err = mConnectingDataSource->connect(mUri, &mUriHeaders);
        ...
    } else if (!strncasecmp("rtsp://", mUri.string(), 7)) {
        if (mLooper == NULL) {
            mLooper = new ALooper;
            mLooper->setName("rtsp");
            mLooper->start();
        }
        mRTSPController = new ARTSPController(mLooper);
        status_t err = mRTSPController->connect(mUri.string());

        LOGI("ARTSPController::connect returned %d", err);

        if (err != OK) {
            mRTSPController.clear();
            return err;
        }

        sp<MediaExtractor> extractor = mRTSPController.get();
        return setDataSource_l(extractor);
    } 

First, AwesomePlayer parses the URL and uses it to decide which media extractor (MediaExtractor) to create. For a local file this amounts to opening, say, an MP4 file and parsing which streams it contains (audio, video, subtitles, etc.); for RTSP it naturally means establishing a connection to the network address the user supplied, ready to receive media data at any time.

The key part is ARTSPController's connect function:

status_t ARTSPController::connect(const char *url) {
    Mutex::Autolock autoLock(mLock);

    if (mState != DISCONNECTED) {
        return ERROR_ALREADY_CONNECTED;
    }

    sp<AMessage> msg = new AMessage(kWhatConnectDone, mReflector->id());

    mHandler = new MyHandler(url, mLooper);

    mState = CONNECTING;

    mHandler->connect(msg);

    while (mState == CONNECTING) {
        mCondition.wait(mLock);
    }

    if (mState != CONNECTED) {
        mHandler.clear();
    }

    return mConnectionResult;
}

Notice that the function contains the loop

 while (mState == CONNECTING) {
        mCondition.wait(mLock);
    }

which keeps waiting until the state changes away from CONNECTING (ideally to CONNECTED); only then does the function return.
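
This wait/signal pattern (a mutex and a condition variable guarding a state variable) is what turns the asynchronous message machinery into a synchronous connect() call. Below is a minimal, self-contained sketch of the same idea in standard C++; it is illustrative only (ARTSPController uses Android's Mutex/Condition and a looper thread rather than std::thread):

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

// Illustrative only: mimics ARTSPController's CONNECTING -> CONNECTED wait.
enum State { DISCONNECTED, CONNECTING, CONNECTED };

static std::mutex gLock;
static std::condition_variable gCondition;
static State gState = DISCONNECTED;

// Runs on the caller's thread and blocks until some other thread
// moves the state out of CONNECTING and signals the condition.
bool blockingConnect() {
    std::unique_lock<std::mutex> lock(gLock);
    gState = CONNECTING;

    // Stand-in for the handler thread that completes the connection.
    std::thread worker([] {
        std::lock_guard<std::mutex> guard(gLock);
        gState = CONNECTED;          // would stay DISCONNECTED on failure
        gCondition.notify_one();     // counterpart of mCondition.signal()
    });

    while (gState == CONNECTING) {   // same loop as in ARTSPController::connect
        gCondition.wait(lock);       // releases gLock while waiting
    }

    worker.join();
    return gState == CONNECTED;
}

int main() {
    printf("connected: %d\n", blockingConnect() ? 1 : 0);
    return 0;
}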

To follow the code we need to be clear about how this state changes.

First, an sp<AMessage> msg = new AMessage(kWhatConnectDone, mReflector->id()) is created; the message's "what" is kWhatConnectDone and its target id is mReflector->id().

This id is crucial: it determines which object ultimately receives the message. Let's see where mReflector comes from:

ARTSPController::ARTSPController(const sp<ALooper> &looper)
    : mState(DISCONNECTED),
      mLooper(looper),
      mSeekDoneCb(NULL),
      mSeekDoneCookie(NULL),
      mLastSeekCompletedTimeUs(-1) {
    mReflector = new AHandlerReflector<ARTSPController>(this);
    looper->registerHandler(mReflector);
}

It is created in ARTSPController's constructor and registered with the looper. If you are not familiar with ALooper, you can read its source yourself; briefly, it is a looping message queue that keeps a list of handlers (each handler has a unique id, the id mentioned above). When a message is posted, the looper uses this id to find the corresponding handler and deliver the message to it.
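
As a rough mental model of that dispatch mechanism, here is a didactic sketch in plain C++ (illustrative only; the real ALooper runs on its own thread, and AMessage carries typed key/value payloads rather than a bare struct):

#include <cstdint>
#include <cstdio>
#include <functional>
#include <map>
#include <queue>
#include <utility>

// Simplified model: a message has a "what" and the id of its target handler.
struct Message { uint32_t what; int targetId; };

struct MiniLooper {
    int nextId = 1;
    std::map<int, std::function<void(const Message &)>> handlers; // id -> handler
    std::queue<Message> queue;

    int registerHandler(std::function<void(const Message &)> h) {
        handlers[nextId] = std::move(h);
        return nextId++;                  // the id later bound into each message
    }

    void post(const Message &msg) { queue.push(msg); }

    // The looper's job: pop messages and route each one to the handler
    // whose id matches the message's target id.
    void loopOnce() {
        while (!queue.empty()) {
            Message msg = queue.front();
            queue.pop();
            auto it = handlers.find(msg.targetId);
            if (it != handlers.end()) {
                it->second(msg);
            }
        }
    }
};

int main() {
    MiniLooper looper;
    int id = looper.registerHandler([](const Message &m) {
        printf("handler %d received what=0x%x\n", m.targetId, m.what);
    });
    looper.post(Message{0x636f6e6eu /* 'conn' */, id});
    looper.loopOnce();
    return 0;
}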

Next, let's look at MyHandler's connect function:

void connect(const sp<AMessage> &doneMsg) {
        mDoneMsg = doneMsg;

        mLooper->registerHandler(this);
        mLooper->registerHandler(mConn);
        (1 ? mNetLooper : mLooper)->registerHandler(mRTPConn);

        sp<AMessage> notify = new AMessage('biny', id());
        mConn->observeBinaryData(notify);

        sp<AMessage> reply = new AMessage('conn', id());
        mConn->connect(mOriginalSessionURL.c_str(), reply);
    }

The parameter passed in is exactly the sp<AMessage> msg = new AMessage(kWhatConnectDone, mReflector->id()) created above;

in other words, this message will be posted once the connect procedure has completed.

Two important spots:

sp<AMessage> notify = new AMessage('biny', id());
        mConn->observeBinaryData(notify);

void ARTSPConnection::observeBinaryData(const sp<AMessage> &reply) {
    sp<AMessage> msg = new AMessage(kWhatObserveBinaryData, id());
    msg->setMessage("reply", reply);
    msg->post();
}

From its id you can tell that this message is posted to ARTSPConnection itself. Note the setMessage parameter "reply": literally a reply, i.e. the message that should be posted back once this one has been handled. Here the reply is

AMessage('biny', id());

void ARTSPConnection::onMessageReceived(const sp<AMessage> &msg) {
    switch (msg->what()) {
        case kWhatConnect:
            onConnect(msg);
            break;

        case kWhatDisconnect:
            onDisconnect(msg);
            break;

        case kWhatCompleteConnection:
            onCompleteConnection(msg);
            break;

        case kWhatSendRequest:
            onSendRequest(msg);
            break;

        case kWhatReceiveResponse:
            onReceiveResponse();
            break;

        case kWhatObserveBinaryData:
        {
            CHECK(msg->findMessage("reply", &mObserveBinaryMessage));
            break;
        }

        default:
            TRESPASS();
            break;
    }
}

In ARTSPConnection::onMessageReceived you can find kWhatObserveBinaryData: the reply is stored in mObserveBinaryMessage, and it will be used later by receiveRTSPReponse. Now let's look at the next important spot:
sp<AMessage> reply = new AMessage('conn', id());
        mConn->connect(mOriginalSessionURL.c_str(), reply);

This is where the connection really gets made.
First, note that the reply is bound to MyHandler's id, which means control will eventually come back to case 'conn': in MyHandler's onMessageReceived.

void ARTSPConnection::connect(const char *url, const sp<AMessage> &reply) {
    sp<AMessage> msg = new AMessage(kWhatConnect, id());
    msg->setString("url", url);
    msg->setMessage("reply", reply);
    msg->post();
}

Here a kWhatConnect message is posted to itself; when it is received in onMessageReceived, onConnect is called:

void ARTSPConnection::onConnect(const sp<AMessage> &msg) {
    ++mConnectionID;

    if (mState != DISCONNECTED) {
        close(mSocket);
        mSocket = -1;

        flushPendingRequests();
    }

    mState = CONNECTING;

    AString url;
    CHECK(msg->findString("url", &url));

    sp<AMessage> reply;
    CHECK(msg->findMessage("reply", &reply));

    AString host, path;
    unsigned port;
    if (!ParseURL(url.c_str(), &host, &port, &path, &mUser, &mPass)
            || (mUser.size() > 0 && mPass.size() == 0)) {
        // If we have a user name but no password we have to give up
        // right here, since we currently have no way of asking the user
        // for this information.

        LOGE("Malformed rtsp url %s", url.c_str());

        reply->setInt32("result", ERROR_MALFORMED);
        reply->post();

        mState = DISCONNECTED;
        return;
    }

    if (mUser.size() > 0) {
        LOGV("user = '%s', pass = '%s'", mUser.c_str(), mPass.c_str());
    }

    struct hostent *ent = gethostbyname(host.c_str());
    if (ent == NULL) {
        LOGE("Unknown host %s", host.c_str());

        reply->setInt32("result", -ENOENT);
        reply->post();

        mState = DISCONNECTED;
        return;
    }

    mSocket = socket(AF_INET, SOCK_STREAM, 0);

    MakeSocketBlocking(mSocket, false);

    struct sockaddr_in remote;
    memset(remote.sin_zero, 0, sizeof(remote.sin_zero));
    remote.sin_family = AF_INET;
    remote.sin_addr.s_addr = *(in_addr_t *)ent->h_addr;
    remote.sin_port = htons(port);

    int err = ::connect(
            mSocket, (const struct sockaddr *)&remote, sizeof(remote));

    reply->setInt32("server-ip", ntohl(remote.sin_addr.s_addr));

    if (err < 0) {
        if (errno == EINPROGRESS) {
            sp<AMessage> msg = new AMessage(kWhatCompleteConnection, id());
            msg->setMessage("reply", reply);
            msg->setInt32("connection-id", mConnectionID);
            msg->post();
            return;
        }

        reply->setInt32("result", -errno);
        mState = DISCONNECTED;

        close(mSocket);
        mSocket = -1;
    } else {
        reply->setInt32("result", OK);
        mState = CONNECTED;
        mNextCSeq = 1;

        postReceiveReponseEvent();
    }

    reply->post();
}


In onConnect, the URL is parsed to obtain the host, port, and so on, and a socket connection is then made to that host:

 int err = ::connect(
            mSocket, (const struct sockaddr *)&remote, sizeof(remote));
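
The MakeSocketBlocking helper used around this call is not included in the excerpt above; it is essentially a thin wrapper around fcntl that toggles O_NONBLOCK on the socket (so the looper thread is never stuck in ::connect or recv). The following is a reconstruction for illustration rather than a copy of the original:

#include <fcntl.h>

// Toggle O_NONBLOCK on a socket. Returns false if fcntl fails.
static bool MakeSocketBlocking(int s, bool blocking) {
    int flags = fcntl(s, F_GETFL, 0);
    if (flags < 0) {
        return false;
    }
    if (blocking) {
        flags &= ~O_NONBLOCK;
    } else {
        flags |= O_NONBLOCK;
    }
    return fcntl(s, F_SETFL, flags) >= 0;
}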

Once the connection succeeds, the state inside ARTSPConnection becomes mState = CONNECTED;

and the postReceiveReponseEvent function is called:

void ARTSPConnection::postReceiveReponseEvent() {
    if (mReceiveResponseEventPending) {
        return;
    }

    sp<AMessage> msg = new AMessage(kWhatReceiveResponse, id());
    msg->post();

    mReceiveResponseEventPending = true;
}

It posts a kWhatReceiveResponse message to itself, which eventually ends up in ARTSPConnection::onReceiveResponse:

void ARTSPConnection::onReceiveResponse() {
    mReceiveResponseEventPending = false;

    if (mState != CONNECTED) {
        return;
    }

    struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = kSelectTimeoutUs;

    fd_set rs;
    FD_ZERO(&rs);
    FD_SET(mSocket, &rs);

    int res = select(mSocket + 1, &rs, NULL, NULL, &tv);
    CHECK_GE(res, 0);

    if (res == 1) {
        MakeSocketBlocking(mSocket, true);

        bool success = receiveRTSPReponse();

        MakeSocketBlocking(mSocket, false);

        if (!success) {
            // Something horrible, irreparable has happened.
            flushPendingRequests();
            return;
        }
    }

    postReceiveReponseEvent();
}

Here select is used to watch the local mSocket and see whether it has readable data, i.e. whether the remote side has sent anything over.

When data arrives, res == 1,

and the receiveRTSPReponse function is used to read the incoming data.

This should ring a bell: search back for kWhatObserveBinaryData and you will see that the reply saved there (mObserveBinaryMessage, i.e. the AMessage('biny', id())) is what receiveRTSPReponse posts when it reads interleaved binary data.

After postReceiveReponseEvent, reply->post() is called; note that this reply is the AMessage('conn', id()) sent out earlier from MyHandler.

At this point control returns to case 'conn': in MyHandler's onMessageReceived.

Inside it we find:

sp<AMessage> reply = new AMessage('desc', id());
mConn->sendRequest(request.c_str(), reply);

The mechanism is the same as before: a request (the RTSP DESCRIBE request in this case) is sent, again with a reply message attached.

Anyone who has dealt with SIP will know SDP, the protocol the two sides use to negotiate media information before a session is established.
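
To make the next step concrete, here is a purely hypothetical DESCRIBE response (none of these values come from the source). Its body is an SDP document; the Content-Base header is what the code below looks for, and the a=control attributes are what setupTrack later uses to build per-track URLs:

RTSP/1.0 200 OK
CSeq: 2
Content-Type: application/sdp
Content-Base: rtsp://example.com/stream/
Content-Length: 234

v=0
o=- 1376492856 1376492856 IN IP4 192.0.2.10
s=Example Stream
c=IN IP4 0.0.0.0
t=0 0
m=audio 0 RTP/AVP 96
a=rtpmap:96 MP4A-LATM/44100/2
a=control:trackID=1
m=video 0 RTP/AVP 97
a=rtpmap:97 H264/90000
a=control:trackID=2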

Without repeating all of that here, look straight at case 'desc':. The response code returned by the server is checked; just as a 200 response means success for HTTP, the same applies here.
Let's look at what happens in the 200 case:

                    if (response->mStatusCode != 200) {
                        result = UNKNOWN_ERROR;
                    } else {
                        mSessionDesc = new ASessionDescription;

                        mSessionDesc->setTo(
                                response->mContent->data(),
                                response->mContent->size());

                        if (!mSessionDesc->isValid()) {
                            LOGE("Failed to parse session description.");
                            result = ERROR_MALFORMED;
                        } else {
                            ssize_t i = response->mHeaders.indexOfKey("content-base");
                            if (i >= 0) {
                                mBaseURL = response->mHeaders.valueAt(i);
                            } else {
                                i = response->mHeaders.indexOfKey("content-location");
                                if (i >= 0) {
                                    mBaseURL = response->mHeaders.valueAt(i);
                                } else {
                                    mBaseURL = mSessionURL;
                                }
                            }

                            if (!mBaseURL.startsWith("rtsp://")) {
                                // Some misbehaving servers specify a relative
                                // URL in one of the locations above, combine
                                // it with the absolute session URL to get
                                // something usable...

                                LOGW("Server specified a non-absolute base URL"
                                     ", combining it with the session URL to "
                                     "get something usable...");

                                AString tmp;
                                CHECK(MakeURL(
                                            mSessionURL.c_str(),
                                            mBaseURL.c_str(),
                                            &tmp));

                                mBaseURL = tmp;
                            }

                            CHECK_GT(mSessionDesc->countTracks(), 1u);
                            setupTrack(1);
                        }
                    }

The first thing that happens is that the negotiated SDP information is stored in an ASessionDescription.

Then look at setupTrack(1). At this point it is not yet known how many tracks there are; track 1 is simply set up first (in ASessionDescription index 0 is the session-level section, so the media tracks start at index 1, which is also why the code above checks countTracks() > 1):

 void setupTrack(size_t index) {
        sp<APacketSource> source =
            new APacketSource(mSessionDesc, index);

        if (source->initCheck() != OK) {
            LOGW("Unsupported format. Ignoring track #%d.", index);

            sp<AMessage> reply = new AMessage('setu', id());
            reply->setSize("index", index);
            reply->setInt32("result", ERROR_UNSUPPORTED);
            reply->post();
            return;
        }

        AString url;
        CHECK(mSessionDesc->findAttribute(index, "a=control", &url));

        AString trackURL;
        CHECK(MakeURL(mBaseURL.c_str(), url.c_str(), &trackURL));

        mTracks.push(TrackInfo());
        TrackInfo *info = &mTracks.editItemAt(mTracks.size() - 1);
        info->mURL = trackURL;
        info->mPacketSource = source;
        info->mUsingInterleavedTCP = false;
        info->mFirstSeqNumInSegment = 0;
        info->mNewSegment = true;

        LOGV("track #%d URL=%s", mTracks.size(), trackURL.c_str());

        AString request = "SETUP ";
        request.append(trackURL);
        request.append(" RTSP/1.0\r\n");

        if (mTryTCPInterleaving) {
            size_t interleaveIndex = 2 * (mTracks.size() - 1);
            info->mUsingInterleavedTCP = true;
            info->mRTPSocket = interleaveIndex;
            info->mRTCPSocket = interleaveIndex + 1;

            request.append("Transport: RTP/AVP/TCP;interleaved=");
            request.append(interleaveIndex);
            request.append("-");
            request.append(interleaveIndex + 1);
        } else {
            unsigned rtpPort;
            ARTPConnection::MakePortPair(
                    &info->mRTPSocket, &info->mRTCPSocket, &rtpPort);

            request.append("Transport: RTP/AVP/UDP;unicast;client_port=");
            request.append(rtpPort);
            request.append("-");
            request.append(rtpPort + 1);
        }

        request.append("\r\n");

        if (index > 1) {
            request.append("Session: ");
            request.append(mSessionID);
            request.append("\r\n");
        }

        request.append("\r\n");

        sp<AMessage> reply = new AMessage('setu', id());
        reply->setSize("index", index);
        reply->setSize("track-index", mTracks.size() - 1);
        mConn->sendRequest(request.c_str(), reply);
    }

An APacketSource object is created from mSessionDesc. setupTrack's only parameter is index, which identifies which track is being set up; a track here is effectively one media source, i.e. an audio or a video stream.
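
For a track using UDP transport, the request string assembled above would look roughly like the following (hypothetical URL and port numbers; ARTSPConnection adds the CSeq header when the request is actually sent, and for the second and later tracks a Session: header is appended as well):

SETUP rtsp://example.com/stream/trackID=1 RTSP/1.0
Transport: RTP/AVP/UDP;unicast;client_port=15550-15551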

When the SETUP of track 1 completes, another 'setu' message comes back to MyHandler, which then sets up the second track, and so on.

Each track whose SETUP has completed is then added to the RTP connection:

mRTPConn->addStream(
        track->mRTPSocket, track->mRTCPSocket,
        mSessionDesc, index,
        notify, track->mUsingInterleavedTCP);

This adds the track's stream information to ARTPConnection, which acts as the interface to the outside world (it is the component that actually reads RTP/RTCP data off the sockets).

Note the notify message passed in here:

sp<AMessage> notify = new AMessage('accu', id());
notify->setSize("track-index", trackIndex);

This, too, is a reply-style message.

void ARTPConnection::addStream(
        int rtpSocket, int rtcpSocket,
        const sp<ASessionDescription> &sessionDesc,
        size_t index,
        const sp<AMessage> &notify,
        bool injected) {
    sp<AMessage> msg = new AMessage(kWhatAddStream, id());
    msg->setInt32("rtp-socket", rtpSocket);
    msg->setInt32("rtcp-socket", rtcpSocket);
    msg->setObject("session-desc", sessionDesc);
    msg->setSize("index", index);
    msg->setMessage("notify", notify);
    msg->setInt32("injected", injected);
    msg->post();
}

Now look at ARTPConnection's case kWhatAddStream:

void ARTPConnection::onAddStream(const sp<AMessage> &msg) {
    mStreams.push_back(StreamInfo());
    StreamInfo *info = &*--mStreams.end();

    int32_t s;
    CHECK(msg->findInt32("rtp-socket", &s));
    info->mRTPSocket = s;
    CHECK(msg->findInt32("rtcp-socket", &s));
    info->mRTCPSocket = s;

    int32_t injected;
    CHECK(msg->findInt32("injected", &injected));

    info->mIsInjected = injected;

    sp<RefBase> obj;
    CHECK(msg->findObject("session-desc", &obj));
    info->mSessionDesc = static_cast<ASessionDescription *>(obj.get());

    CHECK(msg->findSize("index", &info->mIndex));
    CHECK(msg->findMessage("notify", &info->mNotifyMsg));

    info->mNumRTCPPacketsReceived = 0;
    info->mNumRTPPacketsReceived = 0;
    memset(&info->mRemoteRTCPAddr, 0, sizeof(info->mRemoteRTCPAddr));

    if (!injected) {
        postPollEvent();
    }
}

Here a new entry is appended to the stream list List<StreamInfo> mStreams and initialized (RTP socket, RTCP socket),

and info->mNotifyMsg is assigned (remember, this is the AMessage('accu', id())).

At the very end, the postPollEvent function is called.

This is what sets up the wait for the remote side to send media data; it schedules onPollStreams, shown below:

void ARTPConnection::onPollStreams() {
    mPollEventPending = false;

    if (mStreams.empty()) {
        return;
    }

    struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = kSelectTimeoutUs;

    fd_set rs;
    FD_ZERO(&rs);

    int maxSocket = -1;
    for (List<StreamInfo>::iterator it = mStreams.begin();
         it != mStreams.end(); ++it) {
        if ((*it).mIsInjected) {
            continue;
        }

        FD_SET(it->mRTPSocket, &rs);
        FD_SET(it->mRTCPSocket, &rs);

        if (it->mRTPSocket > maxSocket) {
            maxSocket = it->mRTPSocket;
        }
        if (it->mRTCPSocket > maxSocket) {
            maxSocket = it->mRTCPSocket;
        }
    }

    if (maxSocket == -1) {
        return;
    }

    int res = select(maxSocket + 1, &rs, NULL, NULL, &tv);
    CHECK_GE(res, 0);

    if (res > 0) {
        for (List<StreamInfo>::iterator it = mStreams.begin();
             it != mStreams.end(); ++it) {
            if ((*it).mIsInjected) {
                continue;
            }

            if (FD_ISSET(it->mRTPSocket, &rs)) {
                receive(&*it, true);
            }
            if (FD_ISSET(it->mRTCPSocket, &rs)) {
                receive(&*it, false);
            }
        }
    }

    postPollEvent();

    int64_t nowUs = ALooper::GetNowUs();
    if (mLastReceiverReportTimeUs <= 0
            || mLastReceiverReportTimeUs + 5000000ll <= nowUs) {
        sp<ABuffer> buffer = new ABuffer(kMaxUDPSize);
        for (List<StreamInfo>::iterator it = mStreams.begin();
             it != mStreams.end(); ++it) {
            StreamInfo *s = &*it;

            if (s->mIsInjected) {
                continue;
            }

            if (s->mNumRTCPPacketsReceived == 0) {
                // We have never received any RTCP packets on this stream,
                // we don't even know where to send a report.
                continue;
            }

            buffer->setRange(0, 0);

            for (size_t i = 0; i < s->mSources.size(); ++i) {
                sp<ARTPSource> source = s->mSources.valueAt(i);

                source->addReceiverReport(buffer);

                if (mFlags & kRegularlyRequestFIR) {
                    source->addFIR(buffer);
                }
            }

            if (buffer->size() > 0) {
                LOGV("Sending RR...");

                ssize_t n = sendto(
                        s->mRTCPSocket, buffer->data(), buffer->size(), 0,
                        (const struct sockaddr *)&s->mRemoteRTCPAddr,
                        sizeof(s->mRemoteRTCPAddr));
                CHECK_EQ(n, (ssize_t)buffer->size());

                mLastReceiverReportTimeUs = nowUs;
            }
        }
    }
}

This function is crucial. The call int res = select(maxSocket + 1, &rs, NULL, NULL, &tv) watches the RTP and RTCP sockets to see whether either of them has readable data; whenever there is, the following is called:

status_t ARTPConnection::receive(StreamInfo *s, bool receiveRTP) {
    LOGV("receiving %s", receiveRTP ? "RTP" : "RTCP");

    CHECK(!s->mIsInjected);

    sp<ABuffer> buffer = new ABuffer(65536);

    socklen_t remoteAddrLen =
        (!receiveRTP && s->mNumRTCPPacketsReceived == 0)
            ? sizeof(s->mRemoteRTCPAddr) : 0;

    ssize_t nbytes = recvfrom(
            receiveRTP ? s->mRTPSocket : s->mRTCPSocket,
            buffer->data(),
            buffer->capacity(),
            0,
            remoteAddrLen > 0 ? (struct sockaddr *)&s->mRemoteRTCPAddr : NULL,
            remoteAddrLen > 0 ? &remoteAddrLen : NULL);

    if (nbytes < 0) {
        return -1;
    }

    buffer->setRange(0, nbytes);

    // LOGI("received %d bytes.", buffer->size());

    status_t err;
    if (receiveRTP) {
        err = parseRTP(s, buffer);
    } else {
        err = parseRTCP(s, buffer);
    }

    return err;
}
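
The recvfrom above hands parseRTP a raw UDP datagram; parseRTP itself is not quoted here. For orientation, the fixed 12-byte RTP header (RFC 3550) that it interprets can be read roughly like this (a simplified sketch; the real parser also handles the padding bit, header extensions and CSRC lists):

#include <cstddef>
#include <cstdint>

// Simplified view of the fixed 12-byte RTP header (RFC 3550).
struct RtpHeaderFields {
    uint8_t  version;        // must be 2
    bool     marker;
    uint8_t  payloadType;
    uint16_t sequenceNumber; // used for reordering and loss detection
    uint32_t timestamp;      // media timestamp
    uint32_t ssrc;           // identifies the sending source
};

// Returns false if the buffer is too short or the version field is wrong.
static bool parseFixedRtpHeader(const uint8_t *data, size_t size,
                                RtpHeaderFields *out) {
    if (size < 12) {
        return false;
    }
    out->version        = data[0] >> 6;
    out->marker         = (data[1] & 0x80) != 0;
    out->payloadType    = data[1] & 0x7f;
    out->sequenceNumber = (uint16_t)((data[2] << 8) | data[3]);
    out->timestamp      = ((uint32_t)data[4] << 24) | (data[5] << 16)
                              | (data[6] << 8) | data[7];
    out->ssrc           = ((uint32_t)data[8] << 24) | (data[9] << 16)
                              | (data[10] << 8) | data[11];
    return out->version == 2;
}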

After RTP data has been received it is handed to parseRTP; the very first thing parseRTP does is:

if (s->mNumRTPPacketsReceived++ == 0) {
        sp<AMessage> notify = s->mNotifyMsg->dup();
        notify->setInt32("first-rtp", true);
        notify->post();
    }

That code fires exactly once, for the first RTP packet received on the stream: it posts the AMessage('accu', id()) notification set up earlier, which brings us back to case 'accu': in MyHandler.

Inside that case, the following piece is crucial:

if (mFirstAccessUnit) {
                    mDoneMsg->setInt32("result", OK);
                    mDoneMsg->post();
                    mDoneMsg = NULL;


                    mFirstAccessUnit = false;
                    mFirstAccessUnitNTP = ntpTime;
                }

You may have long forgotten what this mDoneMsg is; you can find the answer back in ARTSPController::connect. That's right, it is the kWhatConnectDone message:

 case kWhatConnectDone:
        {
            Mutex::Autolock autoLock(mLock);

            CHECK(msg->findInt32("result", &mConnectionResult));
            mState = (mConnectionResult == OK) ? CONNECTED : DISCONNECTED;

            mCondition.signal();
            break;
        }

There it is: mCondition.signal() wakes up the waiting thread, and only now can ARTSPController::connect truly return.

So the seemingly simple call mRTSPController->connect(mUri.string()) goes through all of that before it comes back!

To summarize:

1. ARTSPController mainly controls connecting, disconnecting and seeking for the stream, but the real work is done by its member MyHandler.

2. The ARTSPConnection inside MyHandler is responsible for sending and receiving the RTSP traffic.

3. ARTSPConnection first parses the URL to get the host address and port number, then opens a socket connection to the remote host and sets up a listening mechanism to wait for data sent back by that host.

The media types are first negotiated with the remote host via SDP; based on those types, separate StreamInfo entries are created for audio and video (each StreamInfo has one RTP socket and one RTCP socket),

and these sockets are monitored. When the first piece of RTP data arrives, the connection is considered established and connect returns.

Everything above happens inside the RTSP module. RTSP is itself part of the stagefright framework, so several of its classes derive from basic stagefright classes:

struct APacketSource : public MediaSource 

struct ARTSPController : public MediaExtractor 

After finishSetDataSource_l, the ARTSPController is installed as the data source (it is passed to setDataSource_l as a MediaExtractor):

status_t AwesomePlayer::setDataSource_l(const sp<MediaExtractor> &extractor) {
    // Attempt to approximate overall stream bitrate by summing all
    // tracks' individual bitrates, if not all of them advertise bitrate,
    // we have to fail.

    int64_t totalBitRate = 0;

    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        sp<MetaData> meta = extractor->getTrackMetaData(i);

        int32_t bitrate;
        if (!meta->findInt32(kKeyBitRate, &bitrate)) {
            totalBitRate = -1;
            break;
        }

        totalBitRate += bitrate;
    }

    mBitrate = totalBitRate;

    LOGV("mBitrate = %lld bits/sec", mBitrate);

    bool haveAudio = false;
    bool haveVideo = false;
    for (size_t i = 0; i < extractor->countTracks(); ++i) {
        sp<MetaData> meta = extractor->getTrackMetaData(i);

        const char *mime;
        CHECK(meta->findCString(kKeyMIMEType, &mime));

        if (!haveVideo && !strncasecmp(mime, "video/", 6)) {
            setVideoSource(extractor->getTrack(i));
            haveVideo = true;
        } else if (!haveAudio && !strncasecmp(mime, "audio/", 6)) {
            setAudioSource(extractor->getTrack(i));
            haveAudio = true;

            if (!strcasecmp(mime, MEDIA_MIMETYPE_AUDIO_VORBIS)) {
                // Only do this for vorbis audio, none of the other audio
                // formats even support this ringtone specific hack and
                // retrieving the metadata on some extractors may turn out
                // to be very expensive.
                sp<MetaData> fileMeta = extractor->getMetaData();
                int32_t loop;
                if (fileMeta != NULL
                        && fileMeta->findInt32(kKeyAutoLoop, &loop) && loop != 0) {
                    mFlags |= AUTO_LOOPING;
                }
            }
        }

        if (haveAudio && haveVideo) {
            break;
        }
    }

    if (!haveAudio && !haveVideo) {
        return UNKNOWN_ERROR;
    }

    mExtractorFlags = extractor->flags();

    return OK;
}
Here the audio and video tracks are both extracted. Then, back in onPrepareAsyncEvent:
void AwesomePlayer::onPrepareAsyncEvent() {
    Mutex::Autolock autoLock(mLock);

    if (mFlags & PREPARE_CANCELLED) {
        LOGI("prepare was cancelled before doing anything");
        abortPrepare(UNKNOWN_ERROR);
        return;
    }

    if (mUri.size() > 0) {
        status_t err = finishSetDataSource_l();

        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    if (mVideoTrack != NULL && mVideoSource == NULL) {
        status_t err = initVideoDecoder();

        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    if (mAudioTrack != NULL && mAudioSource == NULL) {
        status_t err = initAudioDecoder();

        if (err != OK) {
            abortPrepare(err);
            return;
        }
    }

    if (mCachedSource != NULL || mRTSPController != NULL) {
        postBufferingEvent_l();
    } else {
        finishAsyncPrepare_l();
    }
}

the audio and video decoders are initialized, and this completes the whole prepare sequence.

Later, when the user calls play, data actually starts being received, decoded, and then rendered for playback.