
A Scenario-Based Analysis of the Android Binder Source Code

Foreword: I have read many in-depth Binder write-ups by experts, but they tend to be so dense and obscure that they are not very friendly to beginners. To give newcomers a rough overall picture of Binder, I spent half a month preparing this post. The flow described here should be broadly correct, and I hope beginners who read it take something away. The article covers three parts: the ServiceManager startup flow, the process of registering a service with ServiceManager, and the process of obtaining a service from ServiceManager.

Binder Fundamentals

1. ServiceManager Startup Flow

system/core/rootdir/init.rc:
# the service binary lives at /system/bin/servicemanager
service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm

Following the Android boot flow, init parses init.rc at startup; the servicemanager service's executable is /system/bin/servicemanager, and its source directory contains service_manager.c and binder.c:

frameworks/native/cmds/servicemanager/service_manager.c:
int main(int argc, char **argv)
{
    struct binder_state *bs;
    bs = binder_open(128*1024);                 // 1. Open the Binder driver and create a 128K = 128*1024 memory mapping
    if (binder_become_context_manager(bs)) {    // 2. Register itself (ServiceManager) as the Binder context manager
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }
    ...
    svcmgr_handle = BINDER_SERVICE_MANAGER;
    binder_loop(bs, svcmgr_handler);            // 3. Enter an endless loop, acting as the server side waiting for client requests
    return 0;
}

The binder_state structure:

struct binder_state
{
    int fd;             // file descriptor of the opened /dev/binder
    void *mapped;       // start address at which /dev/binder is mapped into the process
    size_t mapsize;     // size of the memory mapping
};

The BINDER_SERVICE_MANAGER constant:

#define BINDER_SERVICE_MANAGER  0U      // the Service Manager's handle is 0

1.1 Analysis of binder_open(128*1024)

frameworks/native/cmds/servicemanager/binder.c:
struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;
    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }
    bs->fd = open("/dev/binder", O_RDWR);               // invokes the open/ioctl/mmap functions registered in the Binder driver's file_operations
    if (bs->fd < 0) {                                   // a. binder_state.fd stores the fd of the opened /dev/binder
        goto fail_open;
    }
    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        goto fail_open;
    }
    bs->mapsize = mapsize;          // b. binder_state.mapsize stores the mapping size 128K = 128*1024
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);        // c. binder_state.mapped stores the address at which /dev/binder is mapped into the process
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr,"binder: cannot map device (%s)\n",
                strerror(errno));
        goto fail_map;
    }
    return bs;
fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}

When open("/dev/binder", O_RDWR) executes, control traps from user space into the kernel, where the driver-side binder_open() runs:

drivers/staging/android/binder.c
static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;
    proc = kzalloc(sizeof(*proc), GFP_KERNEL);      // a. allocate the binder_proc that records the service_manager process's state
    if (proc == NULL)
        return -ENOMEM;
    get_task_struct(current);
    proc->tsk = current;
    INIT_LIST_HEAD(&proc->todo);
    init_waitqueue_head(&proc->wait);
    proc->default_priority = task_nice(current);
    binder_lock(__func__);
    binder_stats_created(BINDER_STAT_PROC);
    hlist_add_head(&proc->proc_node, &binder_procs);        // binder_procs is a global hlist; hlist_add_head links proc->proc_node (an hlist_node) at its head
    proc->pid = current->group_leader->pid;
    INIT_LIST_HEAD(&proc->delivered_death);
    filp->private_data = proc;          // stash the binder_proc in the open file's private_data field
    binder_unlock(__func__);
    return 0;
}

This function allocates the binder_proc that records the service_manager process's state and stashes it in the open file's private_data field.
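
The reason open("/dev/binder") lands in the driver's binder_open() is the file_operations table the driver registers when it creates the misc device. Abridged from memory of the same driver source, so treat the field list as approximate:

static const struct file_operations binder_fops = {
    .owner          = THIS_MODULE,
    .unlocked_ioctl = binder_ioctl,     /* ioctl(fd, ...)      -> binder_ioctl() */
    .mmap           = binder_mmap,      /* mmap(..., fd, 0)    -> binder_mmap()  */
    .open           = binder_open,      /* open("/dev/binder") -> binder_open()  */
    .release        = binder_release,   /* .poll and .flush elided */
};

static struct miscdevice binder_miscdev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "binder",                  /* registers the device as /dev/binder */
    .fops  = &binder_fops,
};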

// The helper below inserts proc->proc_node at the head of the binder_procs list; note the direction: the earliest-inserted binder_proc ends up at the tail.
static inline void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
    struct hlist_node *first = h->first;
    n->next = first;
    if (first)
        first->pprev = &n->next;
    h->first = n;
    n->pprev = &h->first;
}
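
To make the head-insertion order concrete, here is a minimal standalone userspace re-implementation of the two hlist types (names mirror the kernel's, but this is only a sketch) with three inserts; it prints "c -> b -> a", newest first:

#include <stdio.h>

struct hlist_node { struct hlist_node *next, **pprev; };
struct hlist_head { struct hlist_node *first; };

static void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
{
    struct hlist_node *first = h->first;
    n->next = first;
    if (first)
        first->pprev = &n->next;
    h->first = n;
    n->pprev = &h->first;
}

int main(void)
{
    struct hlist_head procs = { NULL };
    struct hlist_node a, b, c;          /* stand-ins for three binder_procs */
    hlist_add_head(&a, &procs);
    hlist_add_head(&b, &procs);
    hlist_add_head(&c, &procs);
    /* traversal sees the newest node first */
    for (struct hlist_node *p = procs.first; p; p = p->next)
        printf("%s", p == &a ? "a\n" : p == &b ? "b -> " : "c -> ");
    return 0;
}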

From binder_open(128*1024) we can conclude:
a. A binder_state structure is created to hold the /dev/binder fd plus the start address and size of the memory mapping.
b. In the kernel, a binder_proc recording the service_manager process is created, and its proc_node (an hlist_node) is linked into the global binder_procs list.

1.2 Analysis of binder_become_context_manager(bs)

frameworks/native/cmds/servicemanager/binder.c:
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);            // issue the BINDER_SET_CONTEXT_MGR command to the driver
}

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)        // cmd = BINDER_SET_CONTEXT_MGR
{
    int ret;
    struct binder_proc *proc = filp->private_data;      // recover the service_manager process's binder_proc from the open file's private_data
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        return ret;
    binder_lock(__func__);
    thread = binder_get_thread(proc);       // get (or create) the binder_thread for the calling ServiceManager thread
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }
    switch (cmd) {
    ...
    case BINDER_SET_CONTEXT_MGR:
        if (binder_context_mgr_node != NULL) {      // binder_context_mgr_node is the global binder_node of the context manager; non-NULL means one is already registered
            ...
        }
        ret = security_binder_set_context_mgr(proc->tsk);
        if (ret < 0)
            goto err;
        if (binder_context_mgr_uid != -1) {             // binder_context_mgr_uid records the uid of the ServiceManager process
            if (binder_context_mgr_uid != current->cred->euid) {
                ...
            }
        } else
            binder_context_mgr_uid = current->cred->euid;
        binder_context_mgr_node = binder_new_node(proc, NULL, NULL);        // ServiceManager's binder_node; binder_node.proc points at the service_manager process
        binder_context_mgr_node->local_weak_refs++;
        binder_context_mgr_node->local_strong_refs++;
        binder_context_mgr_node->has_strong_ref = 1;
        binder_context_mgr_node->has_weak_ref = 1;
        break;
    }
    ret = 0;
    ...
    return ret;
}
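
A side note on the size check in binder_ioctl(): the payload size is encoded in the ioctl command number itself and recovered with _IOC_SIZE(cmd). The binder commands are defined roughly as below (values quoted from memory of the binder uapi header, so verify against your kernel tree):

#include <linux/ioctl.h>

/* from the binder uapi header, quoted from memory */
#define BINDER_WRITE_READ       _IOWR('b', 1, struct binder_write_read)
#define BINDER_SET_CONTEXT_MGR  _IOW('b', 7, int)
#define BINDER_VERSION          _IOWR('b', 9, struct binder_version)

/* binder_ioctl() recovers the payload size with:
 *     unsigned int size = _IOC_SIZE(cmd);
 * so for BINDER_WRITE_READ the check "size != sizeof(struct binder_write_read)"
 * rejects mismatched callers before any copy_from_user(). */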

1.2.1 Analysis of thread = binder_get_thread(proc);

static struct binder_thread *binder_get_thread(struct binder_proc *proc)        // proc describes the service_manager process
{
    struct binder_thread *thread = NULL;
    struct rb_node *parent = NULL;
    struct rb_node **p = &proc->threads.rb_node;
    /* first look in the threads tree for a binder_thread matching the current thread */
    while (*p) {
        parent = *p;
        thread = rb_entry(parent, struct binder_thread, rb_node);
        if (current->pid < thread->pid)
            p = &(*p)->rb_left;
        else if (current->pid > thread->pid)
            p = &(*p)->rb_right;
        else
            break;
    }
    /* otherwise create a new binder_thread node */
    if (*p == NULL) {       // true on the first call; later calls find the node in the while loop above
        thread = kzalloc(sizeof(*thread), GFP_KERNEL);      // b. allocate the binder_thread for the calling thread
        if (thread == NULL)
            return NULL;
        binder_stats_created(BINDER_STAT_THREAD);
        thread->proc = proc;                                            // save the process's binder_proc in binder_thread.proc
        thread->pid = current->pid;                                     // save the current thread's pid in binder_thread.pid
        init_waitqueue_head(&thread->wait);
        INIT_LIST_HEAD(&thread->todo);
        rb_link_node(&thread->rb_node, parent, p);              // link the binder_thread into the red-black tree
        rb_insert_color(&thread->rb_node, &proc->threads);
        thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN;
        thread->return_error = BR_OK;
        thread->return_error2 = BR_OK;
    }
    return thread;
}

This function returns, among all threads of the process described by proc, the binder_thread whose pid equals the current thread's pid, creating it if it does not exist yet.
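
This lookup-or-create walk (remember the parent while descending, then rb_link_node at the spot where the search fell off the tree) recurs throughout the driver for threads, nodes and refs. With the red-black rebalancing stripped away it is a plain keyed BST insert-or-find; a standalone sketch, names hypothetical:

#include <stdlib.h>

struct thread_node {
    int pid;                            /* the key, like binder_thread.pid */
    struct thread_node *left, *right;
};

/* Find the node for pid, or create and link it: the shape of
 * binder_get_thread() minus rb_insert_color()'s rebalancing. */
static struct thread_node *get_thread(struct thread_node **root, int pid)
{
    struct thread_node **p = root;
    while (*p) {
        if (pid < (*p)->pid)
            p = &(*p)->left;
        else if (pid > (*p)->pid)
            p = &(*p)->right;
        else
            return *p;                  /* found an existing thread */
    }
    struct thread_node *t = calloc(1, sizeof(*t));   /* kzalloc analogue */
    if (!t)
        return NULL;
    t->pid = pid;
    *p = t;                             /* rb_link_node analogue */
    return t;
}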

1.2.2 Analysis of binder_context_mgr_node = binder_new_node(proc, NULL, NULL);

static struct binder_node *binder_new_node(struct binder_proc *proc,
                       void __user *ptr,
                       void __user *cookie)
{
    struct rb_node **p = &proc->nodes.rb_node;
    struct rb_node *parent = NULL;
    struct binder_node *node;
    while (*p) {
        parent = *p;
        node = rb_entry(parent, struct binder_node, rb_node);
        if (ptr < node->ptr)
            p = &(*p)->rb_left;
        else if (ptr > node->ptr)
            p = &(*p)->rb_right;
        else
            return NULL;
    }
    node = kzalloc(sizeof(*node), GFP_KERNEL);      // c. allocate the binder_node for the service_manager process
    if (node == NULL)
        return NULL;
    binder_stats_created(BINDER_STAT_NODE);
    rb_link_node(&node->rb_node, parent, p);            // link the binder_node into the process's nodes red-black tree
    rb_insert_color(&node->rb_node, &proc->nodes);
    node->debug_id = ++binder_last_id;
    node->proc = proc;                                          // save the binder_proc in binder_node.proc
    node->ptr = ptr;
    node->cookie = cookie;
    node->work.type = BINDER_WORK_NODE;
    INIT_LIST_HEAD(&node->work.entry);
    INIT_LIST_HEAD(&node->async_todo);
    return node;
}

From binder_become_context_manager(bs) we can conclude:
a. A binder_thread is created for the ServiceManager thread; binder_thread.proc holds ServiceManager's binder_proc and binder_thread.pid holds the current ServiceManager pid.
b. A binder_node is created for the ServiceManager process and saved in the global binder_context_mgr_node; binder_node.proc holds the binder_proc.
c. ServiceManager's binder_proc is recovered from filp->private_data, where binder_open() stored it.

1.3 Analysis of binder_loop(bs, svcmgr_handler)

frameworks/native/cmds/servicemanager/binder.c:
void binder_loop(struct binder_state *bs, binder_handler func)      // endless loop: act as the server side, waiting for client requests
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];
    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;
    readbuf[0] = BC_ENTER_LOOPER;                                       // the first (and only) command to write is BC_ENTER_LOOPER
    binder_write(bs, readbuf, sizeof(uint32_t));
    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;      // bwr.read_buffer points at readbuf, whose first word is still BC_ENTER_LOOPER
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);       // bs->fd is the /dev/binder fd, so this enters the driver's binder_ioctl with cmd = BINDER_WRITE_READ, bwr.write_size = 0, bwr.read_size = sizeof(readbuf)
        ...
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
    }
}
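
Both binder_write() below and the read pass of this loop funnel through a single struct binder_write_read that crosses the ioctl boundary in both directions. For reference, a sketch of its definition (quoted from memory of the uapi header; older trees use plain signed long / unsigned long fields):

#include <stdint.h>

typedef uint64_t binder_size_t;         /* per the uapi header on 64-bit */
typedef uint64_t binder_uintptr_t;

struct binder_write_read {
    binder_size_t    write_size;        /* bytes of BC_* commands to consume */
    binder_size_t    write_consumed;    /* bytes the driver actually took    */
    binder_uintptr_t write_buffer;      /* user-space command buffer         */
    binder_size_t    read_size;         /* capacity of the reply buffer      */
    binder_size_t    read_consumed;     /* bytes of BR_* replies written     */
    binder_uintptr_t read_buffer;       /* user-space reply buffer           */
};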

1.3.1 Analysis of binder_write(bs, readbuf, sizeof(uint32_t));

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;        // bwr.write_buffer = data, whose first word is BC_ENTER_LOOPER
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    return res;
}

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)        // cmd = BINDER_WRITE_READ, bwr.write_size = len, bwr.read_size = 0
{
    int ret;
    struct binder_proc *proc = filp->private_data;      // recover the service_manager process's binder_proc from the open file's private_data
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        return ret;
    binder_lock(__func__);
    thread = binder_get_thread(proc);       // get the binder_thread in proc's process that matches the current thread's pid
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }
    switch (cmd) {
    ...
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {      // copy the caller's binder_write_read into the local bwr; here bwr.write_buffer points at BC_ENTER_LOOPER and bwr.read_buffer = 0
            ret = -EFAULT;
            goto err;
        }
        if (bwr.write_size > 0) {       // bwr.write_size = len, so the write path runs
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);      // bwr.write_buffer holds BC_ENTER_LOOPER, bwr.write_consumed = 0
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (bwr.read_size > 0) {        // bwr.read_size = 0, so the read path is skipped
            ...
        }
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {        // copy bwr back to user space
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    ret = 0;
    ...
    return ret;
}

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,         // takes the binder_proc, the binder_thread and the write half of binder_write_read
            void __user *buffer, int size, signed long *consumed)     // buffer = bwr.write_buffer, whose first word is BC_ENTER_LOOPER
{
    uint32_t cmd;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr))                  //cmd = BC_ENTER_LOOPER
            return -EFAULT;
        ptr += sizeof(uint32_t);
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
            binder_stats.bc[_IOC_NR(cmd)]++;
            proc->stats.bc[_IOC_NR(cmd)]++;
            thread->stats.bc[_IOC_NR(cmd)]++;
        }
        switch (cmd) {      // cmd = BC_ENTER_LOOPER
        case BC_ENTER_LOOPER:
            ...
            thread->looper |= BINDER_LOOPER_STATE_ENTERED;      // mark the current thread (ServiceManager's looper) as having entered its loop
            break;
        }
    }
    return 0;
}
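
BINDER_LOOPER_STATE_ENTERED is one bit in the small per-thread state mask binder_thread.looper. The flag set looks like this (values quoted from memory of binder.c, indicative only):

enum {
    BINDER_LOOPER_STATE_REGISTERED  = 0x01,     /* looper spawned at the driver's request (BC_REGISTER_LOOPER) */
    BINDER_LOOPER_STATE_ENTERED     = 0x02,     /* looper started by the app itself (BC_ENTER_LOOPER) */
    BINDER_LOOPER_STATE_EXITED      = 0x04,
    BINDER_LOOPER_STATE_INVALID     = 0x08,
    BINDER_LOOPER_STATE_WAITING     = 0x10,     /* blocked in binder_thread_read() waiting for work */
    BINDER_LOOPER_STATE_NEED_RETURN = 0x20,     /* must return to user space before taking more work */
};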

1.3.2 Analysis of res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)        //cmd = BINDER_WRITE_READ
{
    int ret;
    struct binder_proc *proc = filp->private_data;      // recover the service_manager process's binder_proc from the open file's private_data
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        return ret;
    binder_lock(__func__);
    thread = binder_get_thread(proc);       // get the binder_thread in proc's process that matches the current thread's pid
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }
    switch (cmd) {
    ...
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {      // copy the caller's binder_write_read into the local bwr; this time bwr.write_size = 0 and bwr.read_buffer points at readbuf (first word BC_ENTER_LOOPER)
            ret = -EFAULT;
            goto err;
        }
        if (bwr.write_size > 0) {       // binder_loop set bwr.write_size = 0, so this branch is skipped
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (bwr.read_size > 0) {        // binder_loop set bwr.read_size = sizeof(readbuf), so the read path runs
            /* take pending work off binder_thread->todo and handle it; afterwards the read buffer starts with BR_NOOP */
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);      // proc and thread identify the process and thread driving this transfer
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    ret = 0;
    ...
    return ret;
}

1.3.2.1 Analysis of binder_thread_write

Since binder_loop set bwr.write_size = 0 for this ioctl, the write branch is skipped and binder_thread_write() has nothing to do on this pass:

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
            void __user *buffer, int size, signed long *consumed)
{
    ...
    return 0;
}

1.3.2.2 Analysis of binder_thread_read

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  void  __user *buffer, int size,                       // buffer = bwr.read_buffer, whose first word is still BC_ENTER_LOOPER; *consumed = 0
                  signed long *consumed, int non_block)
{
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    int ret = 0;
    int wait_for_proc_work;
    if (*consumed == 0) {
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))      // write BR_NOOP back into the caller's buffer: ptr = buffer + *consumed = bwr.read_buffer, so the reply now starts with BR_NOOP
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }
    ...
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;
        if (!list_empty(&thread->todo))
            w = list_first_entry(&thread->todo, struct binder_work, entry);         // take the next pending item off the thread->todo queue
        else if (!list_empty(&proc->todo) && wait_for_proc_work)
            w = list_first_entry(&proc->todo, struct binder_work, entry);
        else {
            if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
                goto retry;
            break;
        }
        if (end - ptr < sizeof(tr) + 4)
            break;
        switch (w->type) {      // dispatch on the type of the pending item
            ...
        }
        if (t->buffer->target_node) {
            struct binder_node *target_node = t->buffer->target_node;
            tr.target.ptr = target_node->ptr;
            tr.cookie =  target_node->cookie;
            t->saved_priority = task_nice(current);
            if (t->priority < target_node->min_priority &&
                !(t->flags & TF_ONE_WAY))
                binder_set_nice(t->priority);
            else if (!(t->flags & TF_ONE_WAY) ||
                 t->saved_priority > target_node->min_priority)
                binder_set_nice(target_node->min_priority);
            cmd = BR_TRANSACTION;                                                                   // the reply code for an incoming transaction
        } else {
            ...
        }
        ...
        if (put_user(cmd, (uint32_t __user *)ptr))      // write cmd = BR_TRANSACTION into the caller's buffer right after BR_NOOP, so the reply reads BR_NOOP, BR_TRANSACTION, ...
            return -EFAULT;
        ptr += sizeof(uint32_t);
        if (copy_to_user(ptr, &tr, sizeof(tr)))
            return -EFAULT;
        ptr += sizeof(tr);
    }
    return 0;
}
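
Back in user space, binder_loop() hands the filled buffer to binder_parse(). A simplified sketch of that walk (BR_* codes and binder_transaction_data come from the binder headers; error handling and the other reply codes omitted):

/* A simplified echo of servicemanager's binder_parse(): walk the
 * bwr.read_consumed bytes that binder_thread_read() wrote into readbuf. */
static void parse_read_buffer(uintptr_t ptr, size_t size)
{
    uintptr_t end = ptr + size;
    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;   /* BR_NOOP, BR_TRANSACTION, ... */
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case BR_NOOP:                       /* written first as padding; skip */
            break;
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn =
                    (struct binder_transaction_data *) ptr;
            ptr += sizeof(*txn);
            (void) txn;                     /* dispatched to the handler (svcmgr_handler),
                                               which then replies via BC_REPLY */
            break;
        }
        default:                            /* other BR_* replies elided */
            break;
        }
    }
}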

From binder_loop(bs, svcmgr_handler) we can conclude:
a. The thread first reports BC_ENTER_LOOPER to the driver, then enters the for (;;) loop, issuing a BINDER_WRITE_READ ioctl on every pass.
b. Each pass runs binder_thread_read(), which takes pending items off binder_thread->todo and processes them; once an item is handled, the read buffer holds BR_NOOP followed by BR_TRANSACTION and the transaction data.

Summary:
ServiceManager starts up in three steps:
a. Open the Binder driver and create the 128K = 128*1024 memory mapping.
b. Register itself (ServiceManager) as the Binder context manager.
c. Enter an endless loop, acting as the server side and waiting for client requests.
Inside the Binder driver, three structures are created for ServiceManager: binder_proc, binder_thread and binder_node.
binder_node.proc points back at the binder_proc; data sent across processes is delivered to the todo list hanging off the binder_proc (or one of its binder_threads).
A binder_proc covers a whole process containing many threads; each thread has its own binder_thread, and each binder_thread services one client's cross-process request at a time.

2. Registering a Service with ServiceManager

Take registering WindowManagerService as the example:
ServiceManager.addService(Context.WINDOW_SERVICE, wm); // register WMS with ServiceManager; wm is the WindowManagerService, which extends IWindowManager.Stub (a local Binder object)
Equivalent to: ServiceManager.addService("window", new WindowManagerService(...));

frameworks/base/core/java/android/os/ServiceManager.java:
public static void addService(String name, IBinder service) {
    try {
        getIServiceManager().addService(name, service, false);
    } catch (RemoteException e) {
        Log.e(TAG, "error in addService", e);
    }
}

2.1 Analysis of getIServiceManager()

private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }
    // Find the service manager
    sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());      // obtain the proxy for ServiceManager
    return sServiceManager;
}

2.1.1 Analysis of BinderInternal.getContextObject()

frameworks/base/core/java/com/android/internal/os/BinderInternal.java:
public static final native IBinder getContextObject();

frameworks/base/core/jni/android_util_Binder.cpp:
{ "getContextObject", "()Landroid/os/IBinder;", (void*)android_os_BinderInternal_getContextObject },

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);       // returns new BpBinder(0)
    return javaObjectForIBinder(env, b);                // wrap the BpBinder in a Java BinderProxy object
}

2.1.1.1 Analysis of sp<IBinder> b = ProcessState::self()->getContextObject(NULL);

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)       //handle = 0
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle);
    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }
            b = new BpBinder(handle);   // b = new BpBinder(0), which is what gets returned
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
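
lookupHandleLocked() is not shown above: it returns the handle_entry slot for a handle, growing ProcessState's handle vector with empty entries the first time a handle is seen, which is why e->binder is NULL on first use and the BpBinder gets created. A C sketch of that grow-on-demand table (names hypothetical):

#include <stdlib.h>
#include <string.h>

struct handle_entry { void *binder; void *refs; };

static struct handle_entry *table;   /* plays the role of ProcessState::mHandleToObject */
static size_t table_size;

/* Return the slot for `handle`, growing the table with zeroed
 * entries if this handle has never been looked up before. */
static struct handle_entry *lookup_handle(size_t handle)
{
    if (handle >= table_size) {
        size_t new_size = handle + 1;
        struct handle_entry *t = realloc(table, new_size * sizeof(*t));
        if (!t)
            return NULL;
        memset(t + table_size, 0, (new_size - table_size) * sizeof(*t));
        table = t;
        table_size = new_size;
    }
    return &table[handle];           /* entry->binder == NULL on first use */
}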

2.1.1.2 Analysis of return javaObjectForIBinder(env, b);

frameworks/base/core/jni/android_util_Binder.cpp:
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    jobject object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);      // construct the Java BinderProxy object
    if (object != NULL) {
        env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());       // tie the BpBinder to the BinderProxy: BinderProxy.mObject records the address of the new BpBinder(0)
        val->incStrong((void*)javaObjectForIBinder);
                ...
        }
    return object;
}

Therefore BinderInternal.getContextObject() amounts to a new BinderProxy() whose mObject field records the address of a new BpBinder(0).

2.1.2 Analysis of ServiceManagerNative.asInterface(...)

ServiceManagerNative.asInterface(new BinderProxy())
frameworks/base/core/java/android/os/ServiceManagerNative.java:
static public IServiceManager asInterface(IBinder obj)      // obj = the BinderProxy whose mObject records the address of new BpBinder(0)
{
    if (obj == null) {
        return null;
    }
    IServiceManager in = (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }   
    return new ServiceManagerProxy(obj);        // returns a ServiceManagerProxy with ServiceManagerProxy.mRemote = the BinderProxy
}

class ServiceManagerProxy implements IServiceManager {
    public ServiceManagerProxy(IBinder remote) {
        mRemote = remote;
    }
}

Summary: getIServiceManager() ultimately returns a ServiceManagerProxy whose mRemote is the BinderProxy, and the BinderProxy's mObject records the address of new BpBinder(0).

2.2 Analysis of ServiceManagerProxy.addService(...)

Therefore getIServiceManager().addService(name, service, false);
is equivalent to: ServiceManagerProxy.addService("window", new WindowManagerService(...), false);

frameworks/base/core/java/android/os/ServiceManagerNative.java:
class ServiceManagerProxy implements IServiceManager {
    public void addService(String name, IBinder service, boolean allowIsolated)     //name = "window",service = new WindowManagerService(...)
            throws RemoteException {
        Parcel data = Parcel.obtain();      // create a Parcel
        Parcel reply = Parcel.obtain();
        /* write the data to be transferred into the Parcel */
        data.writeInterfaceToken(IServiceManager.descriptor);
        data.writeString(name);                             // name = "window": write the service name into the Parcel
        data.writeStrongBinder(service);            // service = new WindowManagerService(...): write the service's local Binder object into the Parcel
        data.writeInt(allowIsolated ? 1 : 0);
        mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);      // mRemote is the BinderProxy whose mObject records the address of new BpBinder(0)
        reply.recycle();
        data.recycle();
    }
}

2.2.1 Analysis of Parcel.obtain()

frameworks/base/core/java/android/os/Parcel.java:
public static Parcel obtain() {
    ...
    return new Parcel(0);
}

2.2.2 Analysis of data.writeString("window")

frameworks/base/core/java/android/os/Parcel.java:
public final void writeString(String val) {
    nativeWriteString(mNativePtr, val);
}
frameworks/base/core/jni/android_os_Parcel.cpp:
{"nativeWriteString",         "(JLjava/lang/String;)V", (void*)android_os_Parcel_writeString},
static void android_os_Parcel_writeString(JNIEnv* env, jclass clazz, jlong nativePtr, jstring val)
{
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
            ...
        err = parcel->writeString16(str, env->GetStringLength(val));
    }
}
frameworks/native/libs/binder/Parcel.cpp:
status_t Parcel::writeString16(const String16& str)
{
    return writeString16(str.string(), str.size());
}
status_t Parcel::writeString16(const char16_t* str, size_t len)
{
    status_t err = writeInt32(len);     // write the character count first
    if (err == NO_ERROR) {
        len *= sizeof(char16_t);
        uint8_t* data = (uint8_t*)writeInplace(len+sizeof(char16_t));       // compute the destination address = mData + mDataPos
        if (data) {
            memcpy(data, str, len); // copy the characters to the destination
            *reinterpret_cast<char16_t*>(data+len) = 0;
            return NO_ERROR;
        }
        err = mError;
    }
    return err;
}
status_t Parcel::writeInt32(int32_t val)
{
    return writeAligned(val);
}
void* Parcel::writeInplace(size_t len)
{
        ...
    uint8_t* const data = mData+mDataPos;       // destination address = mData + mDataPos
    return data;
}
static void memcpy(void* dst, void* src, size_t size) {
    char* dst_c = (char*) dst, *src_c = (char*) src;
    for (; size > 0; size--) {
        *dst_c++ = *src_c++;
    }
}

From data.writeString("window") we can conclude: data.mData now holds "window".
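
Concretely, the bytes appended to mData are a 32-bit character count, the UTF-16 code units, a 16-bit terminator, and padding to a 4-byte boundary. A small standalone sketch of that layout (sizes assume the 6-character string "window"):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Lay out "window" the way Parcel::writeString16() does: int32 length
 * (in chars), UTF-16 data, a char16 NUL, padded to 4-byte alignment. */
int main(void)
{
    const uint16_t str[] = { 'w', 'i', 'n', 'd', 'o', 'w' };
    const int32_t len = 6;
    uint8_t buf[64] = { 0 };
    size_t pos = 0;

    memcpy(buf + pos, &len, sizeof(len));      pos += sizeof(len);
    memcpy(buf + pos, str, len * 2);           pos += len * 2;
    pos += 2;                                  /* buf is zeroed, so the NUL is already there */
    pos = (pos + 3) & ~(size_t)3;              /* pad to a 4-byte boundary */

    printf("payload occupies %zu bytes\n", pos);   /* prints 20 */
    return 0;
}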

2.2.3 Analysis of data.writeStrongBinder(new WindowManagerService(...))

frameworks/base/core/java/android/os/Parcel.java:
public final void writeStrongBinder(IBinder val) {
    nativeWriteStrongBinder(mNativePtr, val);               //val = new WindowManagerService(...)
}
frameworks/base/core/jni/android_os_Parcel.cpp:
{"nativeWriteStrongBinder",   "(JLandroid/os/IBinder;)V", (void*)android_os_Parcel_writeStrongBinder},

static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object)
{
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));      // object = new WindowManagerService(...)
        if (err != NO_ERROR) {
            signalExceptionForError(env, clazz, err);
        }
    }
}
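
parcel->writeStrongBinder() then flattens the WindowManagerService into a flat_binder_object record, which is what the Binder driver later inspects to create a binder_node for the new service. A sketch of the record (quoted from memory of the binder uapi header; field types vary across kernel versions, so treat this as indicative):

#include <stdint.h>

typedef uint64_t binder_uintptr_t;  /* matches the uapi header on 64-bit */

struct flat_binder_object {
    uint32_t type;                  /* BINDER_TYPE_BINDER for a local object */
    uint32_t flags;
    union {
        binder_uintptr_t binder;    /* local object: address of its weakref  */
        uint32_t handle;            /* remote object: driver-assigned handle */
    };
    binder_uintptr_t cookie;        /* local object: the object's address    */
};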
