Understanding Android Binder Down to the Kernel (Part 1)
- Preface
- I. Overview of Writing an Android Binder Application
- II. A Binder IPC Demo in C
- 0. Demo overview
- 1. The service manager: service_manager.c
- 2. Server-side implementation: test_server.c
- 2.1 Approach
- 2.2 Full source
- 3. Client-side implementation: test_client.c
- 3.1 Approach
- 3.2 Full source
- III. Demo Source Walkthrough
- 1. test_server.c walkthrough
- 1.1 Opening the driver
- 1.2 Registering the service with service_manager
- 1. bio_init(&msg, iodata, sizeof(iodata), 4);
- 2. bio_put_uint32(&msg, 0); // strict mode header
- 3. bio_put_string16_x(&msg, SVC_MGR_NAME);
- 4. bio_put_string16_x(&msg, name);
- 5. bio_put_obj(&msg, ptr);
- 6. binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE)
- 1.3 Looping forever waiting for client data
- 0. The CMDs used between processes
- 1. Reading data sent by the client
- 2. Parsing, handling, and replying to client data
- 2. test_client.c walkthrough
- 1. Getting the service from service_manager
- 2. Sending data to the server and getting the return value
- 3. service_manager.c walkthrough
- 1. Telling the binder driver it is service_manager
- 2. Looping forever waiting for requests
- 2.1 Reading data
- 2.2 Parsing and handling data
- 2.2.1 Registering a service
- 2.2.2 Getting a service
Preface
Android Binder, built on the Binder driver, is the bridge for inter-process communication on Android and the cornerstone of the whole system. This series first builds a Binder IPC demo in C, then uses that demo to step down into the kernel and analyze the Binder driver source, so that you understand not just how to use Binder but why it works the way it does. Digging all the way into the driver involves a lot of material, and covering it in a single article would be far too long, so the material is split in two. This opening article shows how to write a cross-process Binder application and walks through the demo's source code, stopping at the boundary of the Linux kernel; the kernel-side analysis of the Binder driver source is left to the next article.
Special thanks to Mr. Wei Dongshan (韦东山); the Binder example analyzed in this article is taken directly from his Binder course.
I have rarely written source-analysis articles before, but I increasingly appreciate how much reading good source code matters. This is the first in a series explaining Android technology at the source level, so allow me a few words on why.

Most of us can already use Binder, so why spend so much time on its internals? I used to think the same way: someone else has built the wheel, so it is enough to use it well; why care how it was made? That attitude keeps us at the surface of Android development, and in today's industry, surface-level skills are exactly the ones that saturate the market and get replaced. A developer who works for years but never goes beneath the surface is the one most at risk.

Going deeper means taking the "wheels" we use every day, especially the foundational ones, and studying not just how to use them but how they are built. Binder is the most foundational wheel in the Android world; so far we have only used it, and now we are going to take it apart piece by piece and examine how it is made.

Working through this article will not only give you a deeper understanding of how Binder works so you can use it more flexibly; it will also sharpen your feel for reading source code. The more good source you read, the firmer your foundations and the broader your technical range, which is what lifts you past the surface level.

Learning a technology is a bit like Zhang Sanfeng teaching Zhang Wuji tai chi: first memorize the moves, then gradually forget them, until finally no fixed moves are needed. It is the same here: first use the technology fluently and learn every step; then go inside it and understand its principles; in the end, even if you forget the steps, the principles let you use it correctly without thinking.
I. Overview of Writing an Android Binder Application
As mentioned above, Binder is Android's cross-process communication mechanism: process A calls a method provided by process B. We call A the client, B the server, and the methods B provides are its services.
Memory on Android is split into user space and kernel space. Each process has its own user space and cannot touch another process's data directly, but every process can reach kernel space through system calls such as ioctl, and drivers run in kernel space with direct access to hardware. Binder IPC is exactly this: user-space processes use the Binder driver to stage data in kernel space, so that data can pass from one process to another.
Writing a Binder application involves three processes:
- the server process, test_server.c, which provides services;
- the client process, test_client.c, which uses the services the server provides;
- service_manager.c, which manages services.
service_manager is a program that already ships with Android; we use it as-is and only write our own server and client.
Why is service_manager needed? As noted above, Binder IPC stages data in kernel space. Android has a great many services, and without one process to manage them, that data would be unmanageable chaos. So Android runs a service_manager process whose job is to manage service registration and lookup. Later we analyze its source to see exactly how it manages services.
(See the service_manager source.)
Based on this analysis, our C demo needs test_client as the client and test_server as the server, with the server providing two services, sayhello and sayhello_to. The flow of data between the processes is shown below:
The key to Binder's single-copy transfer is that when a process opens the Binder driver, it mmaps a region of memory that is also mapped into kernel space. User space can read that region directly, with no extra copy from kernel space to user space.
II. A Binder IPC Demo in C
Under framework/native/cmds/servicemanager in the Android source there is a half-finished C example of Binder IPC, bctest.c, which we can use as a starting point for our own service. (See the source.)
0. Demo overview
- The demo implements a server, test_server, which provides a hello service with two functions, sayhello and sayhello_to;
- and a client, test_client, which calls the server's sayhello/sayhello_to functions.
1. The service manager: service_manager.c
We do not write this program ourselves; the system provides it, with the source at framework/native/cmds/servicemanager/service_manager.c. We only need to write the service provider test_server.c and the service consumer test_client.c. We will still analyze service_manager's source later; roughly, it:
- opens the binder driver
- tells the driver that it is service_manager
- loops forever waiting for requests:
  - read the data
  - parse the data
  - handle the data (add/get a service)
2. Server-side implementation: test_server.c
2.1 Approach
- open the driver
- register the service with service_manager
- loop forever waiting for client data:
  - read the data
  - parse and handle the data
  - reply
2.2 Full source
We walk through this code in detail below.
```c
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <linux/types.h>
#include <stdbool.h>
#include <string.h>

#include <private/android_filesystem_config.h>

#include "binder.h"
#include "test_server.h"

int svcmgr_publish(struct binder_state *bs, uint32_t target, const char *name, void *ptr)
{
    int status;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);
    bio_put_obj(&msg, ptr);

    if (binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE))
        return -1;

    status = bio_get_uint32(&reply);

    binder_done(bs, &msg, &reply);

    return status;
}

void sayhello(void)
{
    static int cnt = 0;
    fprintf(stderr, "say hello : %d\n", ++cnt);
}

int sayhello_to(char *name)
{
    static int cnt = 0;
    fprintf(stderr, "say hello to %s : %d\n", name, ++cnt);
    return cnt;
}

int hello_service_handler(struct binder_state *bs,
                          struct binder_transaction_data *txn,
                          struct binder_io *msg,
                          struct binder_io *reply)
{
    /* txn->code tells us which function to call;
     * parameters, if any, are read from msg;
     * results, if any, are written into reply */
    /* sayhello
     * sayhello_to */
    uint16_t *s;
    char name[512];
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int i;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);

    switch(txn->code) {
    case HELLO_SVR_CMD_SAYHELLO:
        sayhello();
        bio_put_uint32(reply, 0); /* no exception */
        return 0;

    case HELLO_SVR_CMD_SAYHELLO_TO:
        /* pull the strings out of msg */
        s = bio_get_string16(msg, &len);  // "IHelloService"
        s = bio_get_string16(msg, &len);  // name
        if (s == NULL) {
            return -1;
        }
        for (i = 0; i < len; i++)
            name[i] = s[i];
        name[i] = '\0';

        /* process */
        i = sayhello_to(name);

        /* put the result into reply */
        bio_put_uint32(reply, 0); /* no exception */
        bio_put_uint32(reply, i);
        break;

    default:
        fprintf(stderr, "unknown code %d\n", txn->code);
        return -1;
    }

    return 0;
}

int test_server_handler(struct binder_state *bs,
                        struct binder_transaction_data *txn,
                        struct binder_io *msg,
                        struct binder_io *reply)
{
    int (*handler)(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply);

    // recover the function pointer from txn->target.ptr
    handler = (int (*)(struct binder_state *bs,
                       struct binder_transaction_data *txn,
                       struct binder_io *msg,
                       struct binder_io *reply))txn->target.ptr;

    // invoke it
    return handler(bs, txn, msg, reply);
}

int main(int argc, char **argv)
{
    int fd;
    struct binder_state *bs;
    uint32_t svcmgr = BINDER_SERVICE_MANAGER;
    uint32_t handle;
    int ret;

    bs = binder_open(128*1024);
    if (!bs) {
        fprintf(stderr, "failed to open binder driver\n");
        return -1;
    }

    /* add service */
    ret = svcmgr_publish(bs, svcmgr, "hello", hello_service_handler);
    if (ret) {
        fprintf(stderr, "failed to publish hello service\n");
        return -1;
    }

#if 0
    while (1)
    {
        /* read data */
        /* parse data, and process */
        /* reply */
    }
#endif

    binder_set_maxthreads(bs, 10);
    // modeled on service_manager, which is itself a Binder server,
    // so its code can be borrowed here
    binder_loop(bs, test_server_handler);

    return 0;
}
```
3. Client-side implementation: test_client.c
3.1 Approach
- open the driver
- get the service from service_manager
- send data to the server and receive the return value
3.2 Full source
```c
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <linux/types.h>
#include <stdbool.h>
#include <string.h>

#include <private/android_filesystem_config.h>

#include "binder.h"
#include "test_server.h"

uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{
    uint32_t handle;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);

    if (binder_call(bs, &msg, &reply, target, SVC_MGR_CHECK_SERVICE))
        return 0;

    handle = bio_get_ref(&reply);
    if (handle)
        binder_acquire(bs, handle);

    binder_done(bs, &msg, &reply);

    return handle;
}

struct binder_state *g_bs;
uint32_t g_hello_handle;
uint32_t g_goodbye_handle;

void sayhello(void)
{
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    /* build the binder_io */
    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, "IHelloService");

    /* no parameters to add */

    /* call binder_call */
    if (binder_call(g_bs, &msg, &reply, g_hello_handle, HELLO_SVR_CMD_SAYHELLO))
        return;

    /* no return value to parse out of reply */
    binder_done(g_bs, &msg, &reply);
}

int sayhello_to(char *name)
{
    unsigned iodata[512/4];
    struct binder_io msg, reply;
    int ret;
    int exception;

    /* build the binder_io */
    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, "IHelloService");

    /* add the parameter */
    bio_put_string16_x(&msg, name);

    /* call binder_call */
    if (binder_call(g_bs, &msg, &reply, g_hello_handle, HELLO_SVR_CMD_SAYHELLO_TO))
        return 0;

    /* parse the return value out of reply */
    exception = bio_get_uint32(&reply);
    if (exception)
        ret = -1;
    else
        ret = bio_get_uint32(&reply);

    binder_done(g_bs, &msg, &reply);

    return ret;
}

/* ./test_client hello
 * ./test_client hello <name>
 */
int main(int argc, char **argv)
{
    int fd;
    struct binder_state *bs;
    uint32_t svcmgr = BINDER_SERVICE_MANAGER;
    uint32_t handle;
    int ret;

    if (argc < 2) {
        fprintf(stderr, "Usage:\n");
        fprintf(stderr, "%s <hello|goodbye>\n", argv[0]);
        fprintf(stderr, "%s <hello|goodbye> <name>\n", argv[0]);
        return -1;
    }

    bs = binder_open(128*1024);
    if (!bs) {
        fprintf(stderr, "failed to open binder driver\n");
        return -1;
    }
    g_bs = bs;

    // ask service_manager for the handle of the hello service
    handle = svcmgr_lookup(bs, svcmgr, "hello");
    if (!handle) {
        fprintf(stderr, "failed to get hello service\n");
        return -1;
    }
    g_hello_handle = handle;
    fprintf(stderr, "Handle for hello service = %d\n", g_hello_handle);

    /* send data to server */
    if (!strcmp(argv[1], "hello")) {
        if (argc == 2) {
            sayhello();
        } else if (argc == 3) {
            ret = sayhello_to(argv[2]);
            fprintf(stderr, "get ret of sayhello_to = %d\n", ret);
        }
    }

    binder_release(bs, handle);

    return 0;
}
```
III. Demo Source Walkthrough
The overall user-space flow is as follows:
1. test_server.c walkthrough
We start from the entry point, main.
```c
int main(int argc, char **argv)
{
    int fd;
    struct binder_state *bs;
    // BINDER_SERVICE_MANAGER is 0
    uint32_t svcmgr = BINDER_SERVICE_MANAGER;
    uint32_t handle;
    int ret;

    // open the driver
    bs = binder_open(128*1024);
    if (!bs) {
        fprintf(stderr, "failed to open binder driver\n");
        return -1;
    }

    // register the service with service_manager
    ret = svcmgr_publish(bs, svcmgr, "hello", hello_service_handler);
    if (ret) {
        fprintf(stderr, "failed to publish hello service\n");
        return -1;
    }

    // set the service's maximum thread count
    binder_set_maxthreads(bs, 10);

    // loop forever, waiting to parse and handle data sent by clients
    binder_loop(bs, test_server_handler);

    return 0;
}
```
This code confirms the server-side approach sketched earlier, namely that the server:
- opens the driver
- registers its service with service_manager
- loops forever waiting for client data
Now let's look at the source and see what each step actually does.
1.1 Opening the driver
```c
bs = binder_open(128*1024);
if (!bs) {
    fprintf(stderr, "failed to open binder driver\n");
    return -1;
}
```
(See the binder_open source.)
binder_open:
- allocates a binder_state
- opens the driver
- records the size of the region to be mapped into the kernel Binder driver (128*1024 is passed in, i.e. 128 KB)
- mmaps a region of driver memory; later, test_server reads incoming data straight from this mapping instead of copying it from kernel space to user space, saving one copy
```c
struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    // allocate a binder_state
    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    // open the driver
    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr, "binder: cannot open device (%s)\n", strerror(errno));
        goto fail_open;
    }

    // check that the driver version is compatible
    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr, "binder: driver version differs from user space\n");
        goto fail_open;
    }

    // size of the region mapped into the kernel: 128*1024, i.e. 128 KB
    bs->mapsize = mapsize;

    // map a region of kernel Binder driver memory; test_server later reads
    // incoming data directly from it, skipping the kernel-to-user copy
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr, "binder: cannot map device (%s)\n", strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}
```
1.2 Registering the service with service_manager
```c
/* add service */
ret = svcmgr_publish(bs, svcmgr, "hello", hello_service_handler);
if (ret) {
    fprintf(stderr, "failed to publish hello service\n");
    return -1;
}
```
svcmgr_publish does two things:
- build the binder_io data
- call binder_call to send it to service_manager
```c
int svcmgr_publish(struct binder_state *bs, uint32_t target, const char *name, void *ptr)
{
    int status;
    // msg's backing buffer: 128 32-bit words, i.e. 512 bytes
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    // carve iodata up for msg: the first 16 bytes hold the offset array,
    // the rest of the buffer holds the data to be sent
    bio_init(&msg, iodata, sizeof(iodata), 4);

    // starting at byte 17 of iodata, write the 4-byte value 0, advancing
    // msg->data and shrinking msg->data_avail accordingly
    bio_put_uint32(&msg, 0);  // strict mode header

    // #define SVC_MGR_NAME "android.os.IServiceManager"
    /* write service_manager's name, "android.os.IServiceManager";
     * in memory: a 4-byte string length first, then the string itself
     * at two bytes per character */
    bio_put_string16_x(&msg, SVC_MGR_NAME);

    // write the name of the service being registered, "hello"
    bio_put_string16_x(&msg, name);

    // ptr is a function address: build a flat_binder_object and store
    // ptr in flat_binder_object->binder
    bio_put_obj(&msg, ptr);

    // call binder_call to send the data to service_manager
    if (binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE))
        return -1;

    status = bio_get_uint32(&reply);

    binder_done(bs, &msg, &reply);

    return status;
}
```
1. bio_init(&msg, iodata, sizeof(iodata), 4);
```c
void bio_init(struct binder_io *bio, void *data,
              size_t maxdata, size_t maxoffs)
{
    // size_t is 4 bytes on 32-bit systems and 8 bytes on 64-bit systems;
    // this walkthrough assumes 4 bytes throughout.
    // maxoffs is 4 here, so n is 16, i.e. 16 bytes
    size_t n = maxoffs * sizeof(size_t);

    if (n > maxdata) {
        bio->flags = BIO_F_OVERFLOW;
        bio->data_avail = 0;
        bio->offs_avail = 0;
        return;
    }

    // bio->data = bio->data0 points at byte 17 of data: payload storage
    // starts there, and the first 16 bytes are reserved for the offset array
    bio->data = bio->data0 = (char *) data + n;

    // bio->offs = bio->offs0 points at the start of data; those first
    // 16 bytes are manipulated through offs
    bio->offs = bio->offs0 = data;

    // bytes available for payload: 512 - 16 = 496, i.e. everything
    // except the first 16 bytes
    bio->data_avail = maxdata - n;

    // 4 offset entries available; they record data that lives outside
    // the payload area
    bio->offs_avail = maxoffs;

    bio->flags = 0;
}

// Binder.h
struct binder_io
{
    char *data;            /* pointer to read/write from */
    binder_size_t *offs;   /* array of offsets */
    size_t data_avail;     /* bytes available in data buffer */
    size_t offs_avail;     /* entries available in offsets array */
    char *data0;           /* start of data buffer */
    binder_size_t *offs0;  /* start of offsets buffer */
    uint32_t flags;
    uint32_t unused;
};
```
- Memory layout at this point
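The pointer arithmetic above can be checked with a simplified, stand-alone version of bio_init. The field names mirror binder_io, but this is a sketch that hard-codes 4-byte offset entries, matching the walkthrough's 32-bit assumption:

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified binder_io: only the layout-relevant fields. */
struct bio_sketch {
    char *data;        /* next write position in the data area */
    uint32_t *offs;    /* next free offset slot */
    size_t data_avail; /* bytes left in the data area */
    size_t offs_avail; /* offset slots left */
    char *data0;       /* start of the data area */
    uint32_t *offs0;   /* start of the offset array */
};

/* Mirrors bio_init with 4-byte offset entries: the first maxoffs*4
 * bytes of buf become the offset array, the rest is the data area. */
void bio_init_sketch(struct bio_sketch *bio, void *buf,
                     size_t maxdata, size_t maxoffs)
{
    size_t n = maxoffs * sizeof(uint32_t);   /* 4 * 4 = 16 bytes */

    bio->data = bio->data0 = (char *)buf + n;  /* payload starts at byte 17 */
    bio->offs = bio->offs0 = (uint32_t *)buf;  /* offsets live in bytes 1..16 */
    bio->data_avail = maxdata - n;             /* 512 - 16 = 496 */
    bio->offs_avail = maxoffs;
}
```

With a 512-byte buffer and maxoffs = 4, this leaves 496 bytes of payload space and 4 offset slots, matching the layout described above.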
2. bio_put_uint32(&msg, 0); // strict mode header
```c
void bio_put_uint32(struct binder_io *bio, uint32_t n)
{
    // sizeof(n) = 4 bytes: allocate that much from bio's buffer
    uint32_t *ptr = bio_alloc(bio, sizeof(n));
    if (ptr)
        // write the 4-byte integer 0 into the buffer
        *ptr = n;
}

static void *bio_alloc(struct binder_io *bio, size_t size)
{
    size = (size + 3) & (~3);   // round up to 4-byte alignment
    if (size > bio->data_avail) {
        bio->flags |= BIO_F_OVERFLOW;
        return NULL;
    } else {
        void *ptr = bio->data;
        // advance bio->data by 4 bytes
        bio->data += size;
        // update the remaining buffer space
        bio->data_avail -= size;
        return ptr;
    }
}
```
- Memory layout at this point
3. bio_put_string16_x(&msg, SVC_MGR_NAME);
```c
void bio_put_string16_x(struct binder_io *bio, const char *_str)
{
    unsigned char *str = (unsigned char*) _str;
    size_t len;
    uint16_t *ptr;

    if (!str) {
        bio_put_uint32(bio, 0xffffffff);
        return;
    }

    len = strlen(_str);

    if (len >= (MAX_BIO_SIZE / sizeof(uint16_t))) {
        bio_put_uint32(bio, 0xffffffff);
        return;
    }

    /* Note: The payload will carry 32bit size instead of size_t */
    // write the string length into the buffer, using 4 bytes
    bio_put_uint32(bio, len);

    // advance bio->data to allocate space, two bytes per character,
    // and update the remaining buffer space
    ptr = bio_alloc(bio, (len + 1) * sizeof(uint16_t));
    if (!ptr)
        return;

    // copy the characters in one by one
    while (*str)
        *ptr++ = *str++;
    *ptr++ = 0;
}
```
- Memory layout at this point
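The space each string consumes in the data area follows directly from the code above: a 4-byte length word, then (len + 1) 16-bit units (the characters plus a NUL terminator), rounded up to 4-byte alignment by bio_alloc. A small helper (not part of the demo, just a sanity check of the encoding) makes that concrete:

```c
#include <stddef.h>
#include <string.h>

/* Bytes that bio_put_string16_x consumes in the data area:
 * a 4-byte length word, then (len + 1) 16-bit units (string plus
 * NUL terminator), rounded up to 4-byte alignment by bio_alloc. */
size_t string16_cost(const char *s)
{
    size_t len = strlen(s);
    size_t payload = (len + 1) * 2;          /* two bytes per character */
    payload = (payload + 3) & ~(size_t)3;    /* bio_alloc aligns to 4 */
    return 4 + payload;
}
```

For example, "hello" costs 4 + 12 = 16 bytes, and "android.os.IServiceManager" (26 characters) costs 4 + 56 = 60 bytes.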
4. bio_put_string16_x(&msg, name);
The source is the same as above, so it is not repeated here.
- Memory layout at this point
5. bio_put_obj(&msg, ptr);
```c
void bio_put_obj(struct binder_io *bio, void *ptr)
{
    struct flat_binder_object *obj;

    // allocate space for a flat_binder_object, point obj at it, and record
    // the object's relative position in the offset array
    obj = bio_alloc_obj(bio);
    if (!obj)
        return;

    // fill in the flat_binder_object
    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_BINDER;
    obj->binder = (uintptr_t)ptr;
    obj->cookie = 0;
}

static struct flat_binder_object *bio_alloc_obj(struct binder_io *bio)
{
    struct flat_binder_object *obj;

    // allocate space for the flat_binder_object
    obj = bio_alloc(bio, sizeof(*obj));

    if (obj && bio->offs_avail) {
        // one of the 4 offset entries is used up
        bio->offs_avail--;
        // store the object's position relative to the start of the data area
        *bio->offs++ = ((char*) obj) - ((char*) bio->data0);
        return obj;
    }

    bio->flags |= BIO_F_OVERFLOW;
    return NULL;
}

// Binder.h
struct flat_binder_object {
    struct binder_object_header hdr;
    __u32 flags;

    /* 8 bytes of data. */
    union {
        binder_uintptr_t binder;  /* local object */
        __u32 handle;             /* remote object */
    };

    /* extra data associated with local object */
    binder_uintptr_t cookie;
};
```
- Memory layout at this point
6. binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE)
- msg is the data assembled above
- reply receives the data sent back in response
- target = 0 is the process the msg is destined for; 0 means service_manager
- SVC_MGR_ADD_SERVICE is an enum value naming the service_manager operation to invoke: the one that adds a service
```c
int binder_call(struct binder_state *bs,
                struct binder_io *msg, struct binder_io *reply,
                uint32_t target, uint32_t code)
{
    int res;
    struct binder_write_read bwr;
    struct {
        uint32_t cmd;
        struct binder_transaction_data txn;
    } __attribute__((packed)) writebuf;
    unsigned readbuf[32];

    if (msg->flags & BIO_F_OVERFLOW) {
        fprintf(stderr, "binder: txn buffer overflow\n");
        goto fail;
    }

    writebuf.cmd = BC_TRANSACTION;                    // command for the ioctl
    writebuf.txn.target.handle = target;              // which process the data goes to
    writebuf.txn.code = code;                         // which function of that process to call
    writebuf.txn.flags = 0;
    writebuf.txn.data_size = msg->data - msg->data0;  // size of the payload itself
    // size of the offset array, which points at the flat_binder_object
    // (the service function address stored by bio_put_obj(&msg, ptr))
    writebuf.txn.offsets_size = ((char*) msg->offs) - ((char*) msg->offs0);
    writebuf.txn.data.ptr.buffer = (uintptr_t)msg->data0;   // start of the payload
    writebuf.txn.data.ptr.offsets = (uintptr_t)msg->offs0;  // start of the offset array

    bwr.write_size = sizeof(writebuf);
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) &writebuf;

    hexdump(msg->data0, msg->data - msg->data0);

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        // hand the data to the driver with ioctl
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        if (res < 0) {
            fprintf(stderr, "binder: ioctl failed (%s)\n", strerror(errno));
            goto fail;
        }

        res = binder_parse(bs, reply, (uintptr_t) readbuf, bwr.read_consumed, 0);
        if (res == 0) return 0;
        if (res < 0) goto fail;
    }

fail:
    memset(reply, 0, sizeof(*reply));
    reply->flags |= BIO_F_IOERROR;
    return -1;
}

struct binder_write_read {
    binder_size_t write_size;      /* bytes to write */
    binder_size_t write_consumed;  /* bytes consumed by driver */
    binder_uintptr_t write_buffer;
    binder_size_t read_size;       /* bytes to read */
    binder_size_t read_consumed;   /* bytes consumed by driver */
    binder_uintptr_t read_buffer;
};

struct binder_transaction_data {
    /* The first two are only used for bcTRANSACTION and brTRANSACTION,
     * identifying the target and contents of the transaction. */
    union {
        /* target descriptor of command transaction */
        __u32 handle;
        /* target descriptor of return transaction */
        binder_uintptr_t ptr;
    } target;
    binder_uintptr_t cookie;  /* target object cookie */
    __u32 code;               /* transaction command */

    /* General information about the transaction. */
    __u32 flags;
    __kernel_pid_t sender_pid;
    __kernel_uid32_t sender_euid;
    binder_size_t data_size;     /* number of bytes of data */
    binder_size_t offsets_size;  /* number of bytes of offsets */

    /* If this transaction is inline, the data immediately
     * follows here; otherwise, it ends with a pointer to
     * the data buffer. */
    union {
        struct {
            /* transaction data */
            binder_uintptr_t buffer;
            /* offsets from buffer to flat_binder_object structs */
            binder_uintptr_t offsets;
        } ptr;
        __u8 buf[8];
    } data;
};
```
1.3 Looping forever waiting for client data
```c
int test_server_handler(struct binder_state *bs,
                        struct binder_transaction_data *txn,
                        struct binder_io *msg,
                        struct binder_io *reply)
{
    // declare a function pointer
    int (*handler)(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply);

    // recover it from txn->target.ptr
    handler = (int (*)(struct binder_state *bs,
                       struct binder_transaction_data *txn,
                       struct binder_io *msg,
                       struct binder_io *reply))txn->target.ptr;

    return handler(bs, txn, msg, reply);
}

binder_loop(bs, test_server_handler);
```
0. The CMDs used between processes
First, some conclusions about the CMDs exchanged between processes (the identifiers that say what kind of operation is in flight); we will verify them against the source later.
- When a process sends data, the CMD is a BC_XXX value.
- When a process receives data, the CMD is a BR_XXX value.
- Only BC_TRANSACTION, BR_TRANSACTION, BC_REPLY, and BR_REPLY involve two processes; every other cmd (BC_XXX, BR_XXX) is traffic between the app and the driver, used to change or report state.
1. Reading data sent by the client
```c
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    // write_size = 0: nothing to send here, this call only reads incoming data
    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    // write readbuf into the kernel driver (how exactly is analyzed later)
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        // read the client's data; it arrives in the region of kernel driver
        // memory the server mmapped earlier
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        // parse and handle the client's data; func is the server's handler
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
```
2. Parsing, handling, and replying to client data
- Parsing the client's data
```c
/*
 * ptr:  address of the buffer holding the data just read
 * size: number of bytes read
 * func: the server function that handles the data
 */
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        // the server is receiving from a client, so cmd is BR_TRANSACTION
        switch(cmd) {
        case BR_NOOP:
            break;
        case BR_TRANSACTION_COMPLETE:
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS:
#if TRACE
            fprintf(stderr,"  %p, %p\n", (void *)ptr, (void *)(ptr + sizeof(void *)));
#endif
            ptr += sizeof(struct binder_ptr_cookie);
            break;
        case BR_SPAWN_LOOPER: {
            // binder asks the service to create a new thread
            /* create new thread */
            //if (fork() == 0) {
            //}
            pthread_t thread;
            struct binder_thread_desc btd;

            btd.bs = bs;
            btd.func = func;

            pthread_create(&thread, NULL, binder_thread_routine, &btd);

            /* in new thread: ioctl(BC_ENTER_LOOPER), enter binder_looper */
            break;
        }
        case BR_TRANSACTION: {
            // the binder_transaction_data sent by the client
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                // build the binder_io used for the reply
                // (bio_init was analyzed earlier)
                bio_init(&reply, rdata, sizeof(rdata), 4);
                // wrap txn as a binder_io: msg is the client's binder_io data
                bio_init_from_txn(&msg, txn);
                // invoke the server handler, test_server_handler
                res = func(bs, txn, &msg, &reply);
                // once the handler is done, send reply back to the client
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        case BR_REPLY: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: reply too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (bio) {
                bio_init_from_txn(bio, txn);
                bio = 0;
            } else {
                /* todo FREE BUFFER */
            }
            ptr += sizeof(*txn);
            r = 0;
            break;
        }
        case BR_DEAD_BINDER: {
            struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr;
            ptr += sizeof(binder_uintptr_t);
            death->func(bs, death->ptr);
            break;
        }
        case BR_FAILED_REPLY:
            r = -1;
            break;
        case BR_DEAD_REPLY:
            r = -1;
            break;
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }

    return r;
}

void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
    bio->data = bio->data0 = (char *)(intptr_t)txn->data.ptr.buffer;
    bio->offs = bio->offs0 = (binder_size_t *)(intptr_t)txn->data.ptr.offsets;
    bio->data_avail = txn->data_size;
    bio->offs_avail = txn->offsets_size / sizeof(size_t);
    bio->flags = BIO_F_SHARED;
}
```
- Handling the data: dispatch to the server function named by the client's data
```c
int test_server_handler(struct binder_state *bs,
                        struct binder_transaction_data *txn,
                        struct binder_io *msg,
                        struct binder_io *reply)
{
    int (*handler)(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply);

    // recover the function pointer from txn->target.ptr
    handler = (int (*)(struct binder_state *bs,
                       struct binder_transaction_data *txn,
                       struct binder_io *msg,
                       struct binder_io *reply))txn->target.ptr;

    // invoke it; here it points at the server's hello_service_handler
    return handler(bs, txn, msg, reply);
}

int hello_service_handler(struct binder_state *bs,
                          struct binder_transaction_data *txn,
                          struct binder_io *msg,
                          struct binder_io *reply)
{
    /* txn->code tells us which function to call;
     * parameters, if any, are read from msg;
     * results, if any, are written into reply */
    /* sayhello
     * sayhello_to */
    uint16_t *s;
    char name[512];
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int i;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);

    // the client's data carries the code
    switch(txn->code) {
    case HELLO_SVR_CMD_SAYHELLO:
        // run the server's sayhello
        sayhello();
        // write 0 into reply
        bio_put_uint32(reply, 0); /* no exception */
        return 0;

    case HELLO_SVR_CMD_SAYHELLO_TO:
        /* pull the strings out of msg */
        s = bio_get_string16(msg, &len);  // "IHelloService"
        s = bio_get_string16(msg, &len);  // name
        if (s == NULL) {
            return -1;
        }
        for (i = 0; i < len; i++)
            name[i] = s[i];
        name[i] = '\0';

        /* run the server's sayhello_to */
        i = sayhello_to(name);

        /* put the result into reply */
        bio_put_uint32(reply, 0); /* no exception */
        bio_put_uint32(reply, i);
        break;

    default:
        fprintf(stderr, "unknown code %d\n", txn->code);
        return -1;
    }

    return 0;
}

void sayhello(void)
{
    static int cnt = 0;
    fprintf(stderr, "say hello : %d\n", ++cnt);
}

int sayhello_to(char *name)
{
    static int cnt = 0;
    fprintf(stderr, "say hello to %s : %d\n", name, ++cnt);
    return cnt;
}
```
- Replying: the server sends data back to the client
```c
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       binder_uintptr_t buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    // the data copied into the receiver's mapped kernel buffer has been
    // consumed, so it can be freed now
    data.buffer = buffer_to_free;
    // replies use the BC_REPLY cmd
    data.cmd_reply = BC_REPLY;
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offsets_size = 0;
        data.txn.data.ptr.buffer = (uintptr_t)&status;
        data.txn.data.ptr.offsets = 0;
    } else {
        // assemble the binder_transaction_data
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    // send it
    binder_write(bs, &data, sizeof(data));
}

int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    /*
     * How does the server know which client to reply to? It just received
     * that client's data, so the driver knows; the transaction_stack that
     * records this is covered in detail later.
     */
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    }
    return res;
}
```
2. test_client.c walkthrough
We again start from the entry point, main.
```c
int main(int argc, char **argv)
{
    int fd;
    struct binder_state *bs;
    uint32_t svcmgr = BINDER_SERVICE_MANAGER;
    uint32_t handle;
    int ret;

    if (argc < 2) {
        fprintf(stderr, "Usage:\n");
        fprintf(stderr, "%s <hello|goodbye>\n", argv[0]);
        fprintf(stderr, "%s <hello|goodbye> <name>\n", argv[0]);
        return -1;
    }

    // open the driver
    bs = binder_open(128*1024);
    if (!bs) {
        fprintf(stderr, "failed to open binder driver\n");
        return -1;
    }
    g_bs = bs;

    // ask service_manager for the handle of the hello service
    handle = svcmgr_lookup(bs, svcmgr, "hello");
    if (!handle) {
        fprintf(stderr, "failed to get hello service\n");
        return -1;
    }
    g_hello_handle = handle;
    fprintf(stderr, "Handle for hello service = %d\n", g_hello_handle);

    /* send data to the server */
    if (!strcmp(argv[1], "hello")) {
        if (argc == 2) {
            sayhello();
        } else if (argc == 3) {
            ret = sayhello_to(argv[2]);
            fprintf(stderr, "get ret of sayhello_to = %d\n", ret);
        }
    }

    binder_release(bs, handle);

    return 0;
}
```
This code confirms the client-side approach from part II:
- open the driver
- get the service from service_manager
- send data to the server and receive the return value
binder_open was analyzed above, so we skip it here.
1. Getting the service from service_manager
```c
handle = svcmgr_lookup(bs, svcmgr, "hello");
if (!handle) {
    fprintf(stderr, "failed to get hello service\n");
    return -1;
}
g_hello_handle = handle;
fprintf(stderr, "Handle for hello service = %d\n", g_hello_handle);
```
- handle = svcmgr_lookup(bs, svcmgr, "hello");
```c
/*
 * target: 0, the handle of the service_manager service
 * name:   "hello"
 * i.e. ask service_manager for the hello service
 */
uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{
    uint32_t handle;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    // the data built here matches what was built when registering the
    // service above, so the details are not repeated
    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);

    // send the data to service_manager, invoking its SVC_MGR_CHECK_SERVICE
    // operation to look the service up
    if (binder_call(bs, &msg, &reply, target, SVC_MGR_CHECK_SERVICE))
        return 0;

    // service_manager returns the handle of the hello service; later calls
    // use this handle to reach the hello service through service_manager
    handle = bio_get_ref(&reply);

    if (handle)
        binder_acquire(bs, handle);

    binder_done(bs, &msg, &reply);

    return handle;
}
```
binder_call was already analyzed when registering the service with service_manager, so it is not repeated here. The difference is on the parse side: when the reply comes back, the client's CMD is BR_REPLY.
The relevant case in binder_parse (the rest of the function is unchanged from the listing above):
```c
case BR_REPLY: {
    // the binder_transaction_data sent back by the other side
    struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
    if ((end - ptr) < sizeof(*txn)) {
        ALOGE("parse: reply too small!\n");
        return -1;
    }
    binder_dump_txn(txn);
    if (bio) {
        // unpack the binder_transaction_data into the caller's binder_io
        bio_init_from_txn(bio, txn);
        bio = 0;
    } else {
        /* todo FREE BUFFER */
    }
    ptr += sizeof(*txn);
    r = 0;
    break;
}
```
2. Sending data to the server and getting the return value
```c
if (!strcmp(argv[1], "hello"))
{
    if (argc == 2) {
        sayhello();
    } else if (argc == 3) {
        ret = sayhello_to(argv[2]);
        fprintf(stderr, "get ret of sayhello_to = %d\n", ret);
    }
}

void sayhello(void)
{
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    /* construct the binder_io */
    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, "IHelloService");

    /* put parameters (none for sayhello) */

    /* use binder_call to send data to the hello service's sayhello */
    if (binder_call(g_bs, &msg, &reply, g_hello_handle, HELLO_SVR_CMD_SAYHELLO))
        return;

    /* parse the return value out of reply */
    binder_done(g_bs, &msg, &reply);
}

int sayhello_to(char *name)
{
    unsigned iodata[512/4];
    struct binder_io msg, reply;
    int ret;
    int exception;

    /* construct the binder_io */
    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, "IHelloService");

    /* put parameters */
    bio_put_string16_x(&msg, name);

    /* use binder_call to send data to the hello service's sayhello_to */
    if (binder_call(g_bs, &msg, &reply, g_hello_handle, HELLO_SVR_CMD_SAYHELLO_TO))
        return 0;

    /* parse the return value out of reply */
    exception = bio_get_uint32(&reply);
    if (exception)
        ret = -1;
    else
        ret = bio_get_uint32(&reply);

    binder_done(g_bs, &msg, &reply);

    return ret;
}
```
The core code was all covered above, so it is not repeated here: the client simply builds a binder_io message and then uses binder_call to send it to the chosen function of the hello service.
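The discipline behind that pattern is that the reader must drain the flat buffer in exactly the order the writer filled it (strict-mode header first, then the interface name, then the parameters). The following is a deliberately simplified, hypothetical stand-in for binder_io (the real struct also tracks an offsets array for binder objects), just to illustrate the write-then-read-in-order idea:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical mini version of binder_io: a flat buffer that the writer
 * fills and the reader drains in the same order. */
struct mini_bio {
    uint8_t data[128];
    size_t  pos;       /* cursor shared by put (write) and get (read) */
};

static void mini_put_uint32(struct mini_bio *b, uint32_t v) {
    memcpy(b->data + b->pos, &v, sizeof v);
    b->pos += sizeof v;
}

static uint32_t mini_get_uint32(struct mini_bio *b) {
    uint32_t v;
    memcpy(&v, b->data + b->pos, sizeof v);
    b->pos += sizeof v;
    return v;
}
```

If the reader skips the header or reads fields out of order, every subsequent value is misinterpreted, which is exactly why svcmgr_handler starts by consuming the strict-mode header and interface name before looking at the payload.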
3. service_manager.c source code analysis
As mentioned above, we do not have to write service_manager ourselves; the system provides it, with the source at frameworks/native/cmds/servicemanager/service_manager.c. From the analysis above we know service_manager has two main capabilities:
- Registering services: as shown in the source analysis above, test_server registers its service with service_manager
- Looking up services: as shown in the source analysis above, test_client obtains the service from service_manager
Whether a server registers a service with service_manager or a client looks one up, both are cross-process calls: each must go through ioctl and the kernel before reaching service_manager. Because the kernel side involves too much material for one article, this post does not analyze the Binder driver code; that is left for the next article. This is why the source analysis above stopped at ioctl and did not follow registration or lookup into the service_manager source.
So in this section we only analyze service_manager's registration and lookup code, without worrying about how the server's or client's data reaches service_manager; that is covered in the next article on the kernel source.
As before, we start the analysis of service_manager from its entry point, the main function.
```c
int main(int argc, char **argv)
{
    struct binder_state *bs;

    bs = binder_open(128*1024);  // open the binder driver
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    if (binder_become_context_manager(bs)) {  // tell the driver this process is the service_manager
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = BINDER_SERVICE_MANAGER;
    binder_loop(bs, svcmgr_handler);  // loop forever waiting for requests

    return 0;
}
```
The code above confirms the service_manager logic described earlier:
- open the binder driver
- tell the driver that it is the service_manager
- loop forever waiting for requests
  3.1 read the data
  3.2 parse the data
  3.3 handle the data (register / look up a service)
The code that opens the driver was analyzed earlier, so it is not repeated here.
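The three-step loop above can be sketched without any driver at all. The following hypothetical simulation stands in a queue of commands for the ioctl read, and a dispatch function for binder_parse plus svcmgr_handler; everything here (names, enum values) is illustrative, not real Binder code:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative command codes, not the real SVC_MGR_* values. */
enum cmd { CMD_QUIT = 0, CMD_ADD_SERVICE = 1, CMD_CHECK_SERVICE = 2 };

struct stats { int adds; int checks; };

/* stands in for binder_parse + svcmgr_handler */
static void handle(enum cmd c, struct stats *st) {
    switch (c) {
    case CMD_ADD_SERVICE:   st->adds++;   break;  /* register a service */
    case CMD_CHECK_SERVICE: st->checks++; break;  /* look up a service */
    default: break;
    }
}

/* stands in for binder_loop: read, then parse/handle, forever */
static void fake_binder_loop(const enum cmd *queue, size_t n, struct stats *st) {
    for (size_t i = 0; i < n; i++) {
        if (queue[i] == CMD_QUIT)  /* the real loop only exits on ioctl/parse errors */
            break;
        handle(queue[i], st);
    }
}
```

The real loop never terminates in normal operation; the CMD_QUIT escape hatch exists only so this sketch can finish.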
1. Telling the binder driver that it is the service_manager
```c
int binder_become_context_manager(struct binder_state *bs)
{
    // handled by the kernel driver; analyzed in the next article
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
```
2. Looping forever waiting for requests
```c
// loop forever monitoring data sent by clients
binder_loop(bs, svcmgr_handler);  // svcmgr_handler: the handler function of the service_manager service

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;

    //ALOGI("target=%x code=%d pid=%d uid=%d\n",
    //  txn->target.handle, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target.handle != svcmgr_handle)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);  // what comes in is "android.os.IServiceManager"
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        // the interface name must be android.os.IServiceManager
        fprintf(stderr,"invalid id %s\n", str8(s, len));
        return -1;
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);  // get the service name
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);  // get the service's reference number (handle)
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        // add the service
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                    txn->sender_euid);
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    // after handling, build a reply and put 0 into it
    bio_put_uint32(reply, 0);
    return 0;
}
```
2.1 Reading the data
```c
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        // read the data sent by the client
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        // parse the client's data
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
```
2.2 Parsing and handling the data
```c
/*
 * func: svcmgr_handler, the service function implemented by service_manager
 */
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        // data a client sends to service_manager arrives with cmd type BR_TRANSACTION
        switch(cmd) {
        case BR_NOOP:
            break;
        case BR_TRANSACTION_COMPLETE:
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS:
#if TRACE
            fprintf(stderr,"  %p, %p\n", (void *)ptr, (void *)(ptr + sizeof(void *)));
#endif
            ptr += sizeof(struct binder_ptr_cookie);
            break;
        case BR_SPAWN_LOOPER: {
            // the binder driver asks the service to spawn a new thread
            /* create new thread */
            //if (fork() == 0) {
            //}
            pthread_t thread;
            struct binder_thread_desc btd;

            btd.bs = bs;
            btd.func = func;
            pthread_create(&thread, NULL, binder_thread_routine, &btd);
            /* in new thread: ioctl(BC_ENTER_LOOPER), enter binder_looper */
            break;
        }
        case BR_TRANSACTION: {
            // fetch the binder_transaction_data sent by the client
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                // initialize the reply
                bio_init(&reply, rdata, sizeof(rdata), 4);
                // convert the binder_transaction_data into a binder_io
                bio_init_from_txn(&msg, txn);
                // run the local handler function svcmgr_handler
                res = func(bs, txn, &msg, &reply);
                // send the result of the handling back to the client
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        case BR_REPLY: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: reply too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (bio) {
                bio_init_from_txn(bio, txn);
                bio = 0;
            } else {
                /* todo FREE BUFFER */
            }
            ptr += sizeof(*txn);
            r = 0;
            break;
        }
        case BR_DEAD_BINDER: {
            struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr;
            ptr += sizeof(binder_uintptr_t);
            death->func(bs, death->ptr);
            break;
        }
        case BR_FAILED_REPLY:
            r = -1;
            break;
        case BR_DEAD_REPLY:
            r = -1;
            break;
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }

    return r;
}
```

The service function implemented locally by service_manager:
```c
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;

    //ALOGI("target=%x code=%d pid=%d uid=%d\n",
    //  txn->target.handle, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target.handle != svcmgr_handle)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);  // what comes in is "android.os.IServiceManager"
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        // the interface name must be android.os.IServiceManager
        fprintf(stderr,"invalid id %s\n", str8(s, len));
        return -1;
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);  // get the service name
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);  // get the service's reference number (handle)
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        // add the service
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                    txn->sender_euid);
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    // after handling, build a reply and put 0 into it
    bio_put_uint32(reply, 0);
    return 0;
}
```
2.2.1 Registering a service
As the source analysis above showed, the code value sent when registering a service with service_manager is SVC_MGR_ADD_SERVICE.
- Add the information of the service being registered to the svclist linked list
```c
/*
 * msg and txn both hold the data sent by the client
 */
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    ......
    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);  // get the service name
        if (s == NULL) {
            return -1;
        }
        // get the service's reference number (handle)
        handle = bio_get_ref(msg);
        // the registration code never wrote this value, so allow_isolated is 0
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        // add the service
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;
    ......
}
```
- bio_get_ref(msg);
```c
uint32_t bio_get_ref(struct binder_io *bio)
{
    struct flat_binder_object *obj;

    // get the flat_binder_object, which carries the service the server registered
    obj = _bio_get_obj(bio);
    // when registering, test_server wrote a flat_binder_object
    // (as seen in the test_server analysis), so obj is not NULL
    if (!obj)
        return 0;

    if (obj->type == BINDER_TYPE_HANDLE)
        // return the handle
        return obj->handle;

    return 0;
}

static struct flat_binder_object *_bio_get_obj(struct binder_io *bio)
{
    size_t n;
    // this confirms what we saw in the test_server analysis of bio_put_obj:
    // the offsets in offs are relative positions
    size_t off = bio->data - bio->data0;

    /* TODO: be smarter about this? */
    for (n = 0; n < bio->offs_avail; n++) {
        if (bio->offs[n] == off)
            // an offs entry pointing at off means the data holds a
            // flat_binder_object here, so fetch it
            return bio_get(bio, sizeof(struct flat_binder_object));
    }

    bio->data_avail = 0;
    bio->flags |= BIO_F_OVERFLOW;
    return NULL;
}

static void *bio_get(struct binder_io *bio, size_t size)
{
    size = (size + 3) & (~3);

    if (bio->data_avail < size){
        bio->data_avail = 0;
        bio->flags |= BIO_F_OVERFLOW;
        return NULL;
    } else {
        void *ptr = bio->data;
        bio->data += size;
        bio->data_avail -= size;
        return ptr;
    }
}
```
- do_add_service(bs, s, len, handle, txn->sender_euid, allow_isolated, txn->sender_pid)
```c
/*
 * s: the service name, "hello"
 * len: length of the service name
 * allow_isolated: 0
 */
int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;

    //ALOGI("add_service('%s',%x,%s) uid=%d\n", str8(s, len), handle,
    //        allow_isolated ? "allow_isolated" : "!allow_isolated", uid);

    if (!handle || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(s, len, spid)) {
        ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
             str8(s, len), handle, uid);
        return -1;
    }

    // check whether svclist already holds a service named s
    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s, len), handle, uid);
            svcinfo_death(bs, si);
        }
        // if service s already exists, override the previous one
        si->handle = handle;
    } else {
        // no matching svcinfo in the list, so build one
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                 str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        // insert it at the head of the list
        si->next = svclist;
        svclist = si;
    }

    ALOGI("add_service('%s'), handle = %d\n", str8(s, len), handle);

    // send BC_ACQUIRE to the binder driver
    binder_acquire(bs, handle);
    // send BC_REQUEST_DEATH_NOTIFICATION to the binder driver
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}

struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&                                // same length
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {  // same contents
            // the service is already in svclist, so return its svcinfo
            return si;
        }
    }
    return NULL;
}

void binder_acquire(struct binder_state *bs, uint32_t target)
{
    uint32_t cmd[2];
    cmd[0] = BC_ACQUIRE;
    cmd[1] = target;
    binder_write(bs, cmd, sizeof(cmd));
}

void binder_link_to_death(struct binder_state *bs, uint32_t target, struct binder_death *death)
{
    struct {
        uint32_t cmd;
        struct binder_handle_cookie payload;
    } __attribute__((packed)) data;

    data.cmd = BC_REQUEST_DEATH_NOTIFICATION;
    data.payload.handle = target;
    data.payload.cookie = (uintptr_t) death;
    binder_write(bs, &data, sizeof(data));
}
```
2.2.2 Looking up a service
As the source analysis above showed, the code value a client sends when looking up a service from service_manager is SVC_MGR_CHECK_SERVICE.
```c
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    ......
    case SVC_MGR_CHECK_SERVICE:
        // get the service name
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        // look up the service
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;
    ......
}

uint32_t do_find_service(struct binder_state *bs, const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
    struct svcinfo *si;

    if (!svc_can_find(s, len, spid)) {
        ALOGE("find_service('%s') uid=%d - PERMISSION DENIED\n",
             str8(s, len), uid);
        return 0;
    }

    // fetch the svcinfo named s from svclist
    // (find_svc was analyzed above, so it is not repeated here)
    si = find_svc(s, len);
    //ALOGI("check_service('%s') handle = %x\n", str8(s, len), si ? si->handle : 0);

    if (si && si->handle) {
        if (!si->allow_isolated) {
            // If this service doesn't allow access from isolated processes,
            // then check the uid to see if it is isolated.
            uid_t appid = uid % AID_USER;
            if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
                return 0;
            }
        }
        // return the service's handle
        return si->handle;
    } else {
        return 0;
    }
}
```