Overview
Obtaining a service goes through the getService interface provided by ServiceManager, and the flow is largely the same as registering a service with addService. The difference is that for service registration the client passes an object to the server, while for service lookup the server returns data to the client.
This article therefore focuses on the differences, and on what link gets established that lets the two processes communicate with each other.
The walkthrough builds on the service-registration article: identical steps are skipped, and the emphasis is on what differs.
Main text
The client initiates getService
As before, the native-layer call path is used as the entry point.
/*static*/ const sp<IMediaPlayerService>
IMediaDeathNotifier::getMediaPlayerService()
{
Mutex::Autolock _l(sServiceLock);
if (sMediaPlayerService == 0) {
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;
do {
binder = sm->getService(String16("media.player"));
if (binder != 0) {
break;
}
ALOGW("Media player service not published, waiting...");
usleep(500000); // 0.5 s
} while (true);
...
}
return sMediaPlayerService;
}
defaultServiceManager() obtains the ServiceManager proxy, and getService is then called to request the service named "media.player".
getService
virtual sp<IBinder> getService(const String16& name) const
{
unsigned n;
// if the lookup fails, retry; at most 5 attempts
for (n = 0; n < 5; n++){
sp<IBinder> svc = checkService(name);
if (svc != NULL) return svc;
sleep(1);
}
return NULL;
}
checkService
virtual sp<IBinder> checkService( const String16& name) const
{
Parcel data, reply;
// an int32 followed by a string (the string is "android.os.IServiceManager")
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
// the service name, i.e. "media.player"
data.writeString16(name);
remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
return reply.readStrongBinder();
}
- Pack the data
- Actually issue the request
1 Packing the data
1.1 Parcel::writeInterfaceToken
In data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor()), getInterfaceDescriptor() returns "android.os.IServiceManager",
i.e. the call is data.writeInterfaceToken("android.os.IServiceManager").
status_t Parcel::writeInterfaceToken(const String16& interface)
{
// an int32
writeInt32(IPCThreadState::self()->getStrictModePolicy() |
STRICT_MODE_PENALTY_GATHER);
// the string ("android.os.IServiceManager")
return writeString16(interface);
}
- IPCThreadState::getStrictModePolicy() returns mStrictModePolicy, whose initial value is 0, so the writeInt32 call simplifies to writeInt32(STRICT_MODE_PENALTY_GATHER).
- writeString16(interface) is writeString16("android.os.IServiceManager").
What these two values are for: when ServiceManager receives the data it uses this header to check that the request is valid; these two values are that header.
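For reference, on the ServiceManager side this header is consumed and validated roughly as follows. This is the part elided from svcmgr_handler further below, paraphrased from the service_manager.c of this era rather than quoted verbatim, so treat it as a sketch (svcmgr_id holds the UTF-16 string "android.os.IServiceManager"):
    // consume the strict-mode policy word written by writeInterfaceToken()
    strict_policy = bio_get_uint32(msg);
    // consume the interface descriptor and compare it with svcmgr_id
    s = bio_get_string16(msg, &len);
    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr, "invalid id %s\n", str8(s, len));
        return -1;
    }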
1.2 data.writeString16(name)
data.writeString16(name) writes the name of the MediaPlayerService service into data; the argument name = "media.player" is written into the Parcel.
2 Starting the transaction
2.1 BpBinder::transact()
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
// code: CHECK_SERVICE_TRANSACTION
// initial value is 1, assigned when the BpBinder is constructed
if (mAlive) {
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
mAlive's initial value is 1, so IPCThreadState::self()->transact() is called.
2.2 IPCThreadState::transact():
BpBinder::transact() -> IPCThreadState::transact()
status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
status_t err = data.errorCheck();
flags |= TF_ACCEPT_FDS;
...
if (err == NO_ERROR) {
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
if (err != NO_ERROR) {
if (reply) reply->setError(err);
return (mLastError = err);
}
if ((flags & TF_ONE_WAY) == 0) {
...
if (reply) {
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
...
} else {
...
}
return err;
}
Function parameters:
- handle: the target handle, i.e. BpBinder's mHandle. This call goes through the ServiceManager proxy, so mHandle is ServiceManager's handle, which is 0; it was assigned when the BpBinder was created, as covered in the earlier article on obtaining ServiceManager.
- code: CHECK_SERVICE_TRANSACTION
- data: the Parcel object filled in by checkService
- reply: the Parcel object that will receive the data fed back by the binder driver
- flags: the default value 0
The data is then packed by writeTransactionData().
This call is not one-way (not asynchronous): once the data is packed, waitForResponse() is called to send it to the binder driver and wait for the driver's reply.
2.2.1 writeTransactionData()
BpBinder::transact() ->IPCThreadState::transact() -> IPCThreadState::writeTransactionData()
It reads the data previously packed into the Parcel and repacks it into a binder_transaction_data struct tr that the binder driver understands (its layout is sketched after the field notes below), then writes the BC_TRANSACTION command and the struct tr (whose code field is CHECK_SERVICE_TRANSACTION) into the Parcel mOut.
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
// cmd: BC_TRANSACTION code: CHECK_SERVICE_TRANSACTION
binder_transaction_data tr;
tr.target.ptr = 0;
tr.target.handle = handle;
tr.code = code; // code: CHECK_SERVICE_TRANSACTION
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
...
} else {
...
}
mOut.writeInt32(cmd); // cmd: BC_TRANSACTION
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}
- ipcDataSize(): returns mDataSize, the length of the content in mData; it also marks the end of the written data, i.e. where the next write would start
- ipcData(): returns mData, the start address of the Parcel's data buffer (which holds the flattened data, including any flat_binder_object entries)
- ipcObjectsCount(): returns mObjectsSize, the number of objects recorded in the Parcel; the caller multiplies it by sizeof(binder_size_t) to obtain offsets_size
- ipcObjects(): returns mObjects, the array holding the offsets of the objects written into the data buffer
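For reference, the driver-facing structure these fields are copied into is binder_transaction_data from the binder UAPI header. The layout below is abridged and reproduced from memory, so field types and ordering may differ slightly from your kernel headers:
struct binder_transaction_data {
    union {
        __u32 handle;              // target referenced by handle (here 0 = ServiceManager)
        binder_uintptr_t ptr;      // target referenced by local binder pointer
    } target;
    binder_uintptr_t cookie;       // target cookie
    __u32 code;                    // transaction code, here CHECK_SERVICE_TRANSACTION
    __u32 flags;
    pid_t sender_pid;              // filled in by the driver
    uid_t sender_euid;             // filled in by the driver
    binder_size_t data_size;       // size of the data buffer
    binder_size_t offsets_size;    // size of the object-offset array
    union {
        struct {
            binder_uintptr_t buffer;   // points at the Parcel's mData
            binder_uintptr_t offsets;  // points at the Parcel's mObjects
        } ptr;
        __u8 buf[8];
    } data;
};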
2.2.2 waitForResponse
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break;
...
-> talkWithDriver
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
...
binder_write_read bwr;
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
if (doReceive && needRead) {
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
...
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
...
#if defined(HAVE_ANDROID_OS)
// write to the driver
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
#else
err = INVALID_OPERATION;
#endif
if (mProcess->mDriverFD <= 0) {
err = -EBADF;
}
IF_LOG_COMMANDS() {
alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
}
} while (err == -EINTR);
...
The Parcel data is loaded into a binder_write_read struct bwr, which is the structure the driver understands:
bwr.write_size = outAvail; // size of the data in mOut, i.e. the data to transmit; greater than 0 here
bwr.write_buffer = (uintptr_t)mOut.data(); // start address of the data to transmit in mOut
bwr.write_consumed = 0;
bwr.read_size = mIn.dataCapacity(); // 256
bwr.read_buffer = (uintptr_t)mIn.data(); // mIn.mData, currently empty
bwr.read_consumed = 0;
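The binder_write_read struct itself (from the binder UAPI header, shown here for reference) is just three write fields and three read fields:
struct binder_write_read {
    binder_size_t write_size;       // bytes the caller asks the driver to consume
    binder_size_t write_consumed;   // bytes the driver actually consumed
    binder_uintptr_t write_buffer;  // user-space address of the command buffer (mOut)
    binder_size_t read_size;        // room available for the driver's replies
    binder_size_t read_consumed;    // bytes the driver wrote back
    binder_uintptr_t read_buffer;   // user-space address of the reply buffer (mIn)
};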
Into the driver (kernel): ioctl with BINDER_WRITE_READ
binder_ioctl
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
// the client's binder_proc
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret)
goto err_unlocked;
binder_lock(__func__);
// the client's binder_thread
thread = binder_get_thread(proc);
...
switch (cmd) {
case BINDER_WRITE_READ:
ret = binder_ioctl_write_read(filp, cmd, arg, thread);
if (ret)
goto err;
break;
...
}
binder_ioctl_write_read
static int binder_ioctl_write_read(struct file *filp,
unsigned int cmd, unsigned long arg,
struct binder_thread *thread)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
struct binder_write_read bwr;
...
if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
...
if (bwr.write_size > 0) {
ret = binder_thread_write(proc, thread,
bwr.write_buffer,
bwr.write_size,
&bwr.write_consumed);
...
}
if (bwr.read_size > 0) {
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
...
}
...
if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
out:
return ret;
}
binder_thread_write
static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
uint32_t cmd;
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
while (ptr < end && thread->return_error == BR_OK) {
if (get_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
...
switch (cmd) {
...
case BC_TRANSACTION:
case BC_REPLY: {
struct binder_transaction_data tr;
if (copy_from_user(&tr, ptr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);
binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
break;
}
...
}
*consumed = ptr - buffer;
}
return 0;
}
binder_transaction
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
struct binder_transaction *t;
struct binder_work *tcomplete;
binder_size_t *offp, *off_end;
binder_size_t off_min;
struct binder_proc *target_proc;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
struct list_head *target_list;
wait_queue_head_t *target_wait;
struct binder_transaction *in_reply_to = NULL;
struct binder_transaction_log_entry *e;
uint32_t return_error;
...
// --- Part 1: obtain the peer's information, including its binder node, binder_proc, etc. ---
if (reply) {
...
} else {
if (tr->target.handle) {
...
} else {
target_node = binder_context_mgr_node;
...
}
target_proc = target_node->proc;
...
}
if (target_thread) {
...
} else {
target_list = &target_proc->todo;
target_wait = &target_proc->wait;
}
// --- Part 2: create two work items and fill them in, one for the peer and one for ourselves ---
t = kzalloc(sizeof(*t), GFP_KERNEL);
...
binder_stats_created(BINDER_STAT_TRANSACTION);
tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
...
binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
...
// record the initiator of transaction t
if (!reply && !(tr->flags & TF_ONE_WAY))
t->from = thread;
else
t->from = NULL;
// fill transaction t with the relevant data
t->sender_euid = task_euid(proc->tsk);
t->to_proc = target_proc;
t->to_thread = target_thread;
t->code = tr->code;
t->flags = tr->flags;
t->priority = task_nice(current);
t->buffer = binder_alloc_buf(target_proc, tr->data_size,
tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
...
t->buffer->allow_user_free = 0;
t->buffer->debug_id = t->debug_id;
t->buffer->transaction = t;
t->buffer->target_node = target_node;
if (target_node)
binder_inc_node(target_node, 1, 0, NULL);
offp = (binder_size_t *)(t->buffer->data +
ALIGN(tr->data_size, sizeof(void *)));
// copy the user-space data into the kernel
if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
tr->data.ptr.buffer, tr->data_size)) {
...
}
if (copy_from_user(offp, (const void __user *)(uintptr_t)
tr->data.ptr.offsets, tr->offsets_size)) {
...
}
...
off_end = (void *)offp + tr->offsets_size;
off_min = 0;
// iterate over the objects in the data; getService carries no objects, so there is nothing to process here
for (; offp < off_end; offp++) {
...
}
if (reply) {
...
} else if (!(t->flags & TF_ONE_WAY)) {
BUG_ON(t->buffer->async_transaction != 0);
t->need_reply = 1;
t->from_parent = thread->transaction_stack;
// push the current transaction t onto the current thread's transaction stack
thread->transaction_stack = t;
} else {
...
}
// --- Part 3: the transaction is ready; put it on the right queue and wake up the peer ---
t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, target_list);
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
list_add_tail(&tcomplete->entry, &thread->todo);
// wake up the target process
if (target_wait)
wake_up_interruptible(target_wait);
return;
...
}
1 Obtain the peer's binder node. The request targets ServiceManager, whose binder node is the global variable binder_context_mgr_node; through it the peer's binder_proc and binder_thread can be obtained.
2 Create two work items, t and tcomplete. t is the work the peer has to handle (BINDER_WORK_TRANSACTION), carrying the client's request; tcomplete is the work this end has to handle (BINDER_WORK_TRANSACTION_COMPLETE), indicating that sending has finished. An abridged sketch of the structures behind them follows below.
3 wake_up_interruptible(target_wait) wakes up the peer.
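For orientation, the two structures behind these work items look roughly like this in the driver of this era (abridged and reconstructed from memory, so take it as a sketch rather than the exact source):
struct binder_work {
    struct list_head entry;              // linked into a thread's or proc's todo list
    enum {
        BINDER_WORK_TRANSACTION = 1,
        BINDER_WORK_TRANSACTION_COMPLETE,
        // ... other work types elided ...
    } type;
};
struct binder_transaction {
    struct binder_work work;             // embedded work item, type BINDER_WORK_TRANSACTION
    struct binder_thread *from;          // initiating thread (NULL for one-way and for replies)
    struct binder_transaction *from_parent;
    struct binder_proc *to_proc;         // target process
    struct binder_thread *to_thread;     // target thread, if one was chosen
    struct binder_transaction *to_parent;
    unsigned need_reply:1;
    struct binder_buffer *buffer;        // kernel buffer holding the copied Parcel data
    unsigned int code;                   // e.g. CHECK_SERVICE_TRANSACTION
    unsigned int flags;
    long priority;
    long saved_priority;
    kuid_t sender_euid;
};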
binder_thread_read
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
int ret = 0;
int wait_for_proc_work;
if (*consumed == 0) {
if (put_user(BR_NOOP, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
}
...
thread->looper |= BINDER_LOOPER_STATE_WAITING;
if (wait_for_proc_work)
proc->ready_threads++;
binder_unlock(__func__);
... // --->>> execution resumes here after wake-up
binder_lock(__func__);
if (wait_for_proc_work)
proc->ready_threads--;
thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
if (ret)
return ret;
while (1) {
uint32_t cmd;
struct binder_transaction_data tr;
struct binder_work *w;
struct binder_transaction *t = NULL;
// fetch the pending work item
if (!list_empty(&thread->todo)) {
w = list_first_entry(&thread->todo, struct binder_work,
entry);
} else if (!list_empty(&proc->todo) && wait_for_proc_work) {
w = list_first_entry(&proc->todo, struct binder_work,
entry);
} else {
...
}
// break out if there is not enough room left in the read buffer for another command plus tr
if (end - ptr < sizeof(tr) + 4)
break;
switch (w->type) {
...
// pass the command up to user space
case BINDER_WORK_TRANSACTION_COMPLETE: {
cmd = BR_TRANSACTION_COMPLETE;
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
binder_stat_br(proc, thread, cmd);
...
list_del(&w->entry);
kfree(w);
binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
} break;
...
}
// t is still NULL (BINDER_WORK_TRANSACTION_COMPLETE does not set it), so continue
if (!t)
continue;
...
}
done:
// bwr.read_consumed
*consumed = ptr - buffer;
...
return 0;
}
BR_TRANSACTION_COMPLETE is passed up to user space for handling.
Back in user space (user)
IPCThreadState::talkWithDriver()
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
...
binder_write_read bwr;
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
if (doReceive && needRead) {
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
...
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
...
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
// ---> returns here once the driver call completes
...
} while (err == -EINTR);
...
if (err >= NO_ERROR) {
// clear the data already written, meaning the driver has consumed everything we sent
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())
mOut.remove(0, bwr.write_consumed);
else
mOut.setDataSize(0);
}
// set mIn's size and position according to how much was read back
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
...
return NO_ERROR;
}
return err;
}
waitForResponse
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
while (1) {
// continues here after talkWithDriver() returns; the loop pops the returned commands BR_NOOP and BR_TRANSACTION_COMPLETE
if ((err=talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
cmd = (uint32_t)mIn.readInt32();
...
// neither BR_NOOP nor BR_TRANSACTION_COMPLETE involves any substantial work, so details are omitted
switch (cmd) {
case BR_TRANSACTION_COMPLETE:
if (!reply && !acquireResult) goto finish;
break;
...
}
}
...
return err;
}
The driver brought back two commands, BR_NOOP and BR_TRANSACTION_COMPLETE; the loop pops and handles them, and neither requires any substantial processing. After that the loop calls talkWithDriver() again, this time with write_size = 0 and read_size = mIn.dataCapacity(), so it re-enters the driver to read. The handling of BR_TRANSACTION_COMPLETE is the same as in the addService article.
Into the driver (kernel): waiting
waitForResponse -> talkWithDriver -> ioctl -> binder_ioctl -> binder_ioctl_write_read -> binder_thread_read
Because write_size = 0 and read_size = mIn.dataCapacity(), binder_thread_write is skipped and execution goes straight into binder_thread_read.
There is no more work to process at this point, so binder_thread_read goes to sleep.
The peer's flow (kernel)
Woken up
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
int ret = 0;
int wait_for_proc_work;
... // woken up here
binder_lock(__func__);
if (wait_for_proc_work)
proc->ready_threads--;
thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
...
while (1) {
uint32_t cmd;
struct binder_transaction_data tr;
struct binder_work *w;
struct binder_transaction *t = NULL;
// fetch the pending work item
if (!list_empty(&thread->todo)) {
w = list_first_entry(&thread->todo, struct binder_work,
entry);
} else if (!list_empty(&proc->todo) && wait_for_proc_work) {
...
} else {
...
}
if (end - ptr < sizeof(tr) + 4)
break;
switch (w->type) {
case BINDER_WORK_TRANSACTION: {
t = container_of(w, struct binder_transaction, work);
} break;
...
}
if (!t)
continue;
if (t->buffer->target_node) {
// when the client created the pending transaction earlier, it set t->buffer->target_node = target_node,
// which is ServiceManager's binder node
struct binder_node *target_node = t->buffer->target_node;
// ServiceManager's ptr is NULL
tr.target.ptr = target_node->ptr;
// ServiceManager's cookie is NULL
tr.cookie = target_node->cookie;
t->saved_priority = task_nice(current);
if (t->priority < target_node->min_priority &&
!(t->flags & TF_ONE_WAY))
binder_set_nice(t->priority);
else if (!(t->flags & TF_ONE_WAY) ||
t->saved_priority > target_node->min_priority)
binder_set_nice(target_node->min_priority);
cmd = BR_TRANSACTION;
} else {
...
}
tr.code = t->code;
tr.flags = t->flags;
tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
if (t->from) {
struct task_struct *sender = t->from->proc->tsk;
tr.sender_pid = task_tgid_nr_ns(sender,
task_active_pid_ns(current));
} else {
tr.sender_pid = 0;
}
// data size
tr.data_size = t->buffer->data_size;
// size of the object-offset array in the data (object count * sizeof(binder_size_t))
tr.offsets_size = t->buffer->offsets_size;
// the data
tr.data.ptr.buffer = (binder_uintptr_t)(
(uintptr_t)t->buffer->data +
proc->user_buffer_offset);
// the object-offset array within the data
tr.data.ptr.offsets = tr.data.ptr.buffer +
ALIGN(t->buffer->data_size,
sizeof(void *));
// write the cmd command into ptr, i.e. pass it to user space
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
// copy the tr data to user space
if (copy_to_user(ptr, &tr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);
...
// remove the handled work item from the list
list_del(&t->work.entry);
t->buffer->allow_user_free = 1;
// set up the reply bookkeeping
if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
// this transaction is handed to the ServiceManager daemon for processing
// after processing, ServiceManager still has to report the result back to the binder driver
// the bookkeeping for that reply is set up here
t->to_parent = thread->transaction_stack;
// to_thread: once ServiceManager replies, the reply is handed to this thread for processing
t->to_thread = thread;
// transaction_stack keeps the current transaction, so the reply can later be matched to it
thread->transaction_stack = t;
} else {
...
}
break;
}
done:
// update the value of bwr.read_consumed
*consumed = ptr - buffer;
...
return 0;
}
1 Handle the BINDER_WORK_TRANSACTION work item: obtain the binder_transaction object, pack its contents into a binder_transaction_data struct, and copy the BR_TRANSACTION command plus that binder_transaction_data to user space.
2 Update consumed, i.e. bwr.read_consumed.
When this finishes, execution leaves kernel space and returns to ServiceManager's user space.
Into ServiceManager's user space (user)
binder_loop
void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
uint32_t readbuf[32];
bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;
readbuf[0] = BC_ENTER_LOOPER;
binder_write(bs, readbuf, sizeof(uint32_t));
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (uintptr_t) readbuf;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
// ---> returns here once the driver call completes
...
res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
...
}
}
binder_parse
int binder_parse(struct binder_state *bs, struct binder_io *bio,
uintptr_t ptr, size_t size, binder_handler func)
{
int r = 1;
uintptr_t end = ptr + (uintptr_t) size;
while (ptr < end) {
uint32_t cmd = *(uint32_t *) ptr;
ptr += sizeof(uint32_t);
switch(cmd) {
...
case BR_TRANSACTION: {
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
...
if (func) {
unsigned rdata[256/4];
// the data passed up by the binder driver
struct binder_io msg;
// the data to be fed back to the client through the driver
struct binder_io reply;
int res;
// initialize reply
bio_init(&reply, rdata, sizeof(rdata), 4);
// initialize msg from the data passed up by the binder driver
bio_init_from_txn(&msg, txn);
// svcmgr_handler
res = func(bs, txn, &msg, &reply);
// send the reply data back to the binder driver
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
}
ptr += sizeof(*txn);
break;
}
...
}
}
return r;
}
Handling BR_TRANSACTION: the data passed up by the driver is wrapped into msg, a reply object is created to carry the data to be fed back, and svcmgr_handler is called to process the incoming data and assemble the reply.
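Both msg and reply are of type binder_io, ServiceManager's lightweight counterpart to Parcel. Its definition, taken from the servicemanager sources of this era and reproduced from memory (treat it as a sketch), is roughly:
struct binder_io {
    char *data;            // current read/write position in the data buffer
    binder_size_t *offs;   // current position in the object-offset array
    size_t data_avail;     // bytes still available in the data buffer
    size_t offs_avail;     // entries still available in the offset array
    char *data0;           // start of the data buffer
    binder_size_t *offs0;  // start of the offset array
    uint32_t flags;
    uint32_t unused;
};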
svcmgr_handler
int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
size_t len;
uint32_t handle;
uint32_t strict_policy;
int allow_isolated;
...
switch(txn->code) {
case SVC_MGR_GET_SERVICE:
case SVC_MGR_CHECK_SERVICE:
// the service name
s = bio_get_string16(msg, &len);
...
handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
if (!handle)
break;
bio_put_ref(reply, handle);
return 0;
...
}
bio_put_uint32(reply, 0);
return 0;
}
Read the service name, then call do_find_service to look the service up by name.
do_find_service
uint32_t do_find_service(struct binder_state *bs, const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
struct svcinfo *si = find_svc(s, len);
if (!si || !si->handle) {
return 0;
}
if (!si->allow_isolated) {
uid_t appid = uid % AID_USER;
if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
return 0;
}
}
if (!svc_can_find(s, len, spid)) {
return 0;
}
return si->handle;
}
struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
struct svcinfo *si;
for (si = svclist; si; si = si->next) {
if ((len == si->len) &&
!memcmp(s16, si->name, len * sizeof(uint16_t))) {
return si;
}
}
return NULL;
}
find_svc walks svclist to locate the svcinfo that describes the service with that name, and do_find_service returns its handle (the reference descriptor).
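svclist is a simple singly linked list with one svcinfo entry per registered service. The entry looks roughly like this (from the service_manager.c of this era, reproduced from memory as a sketch):
struct svcinfo {
    struct svcinfo *next;        // next entry in svclist
    uint32_t handle;             // ServiceManager's reference descriptor for the service
    struct binder_death death;   // death-notification bookkeeping
    int allow_isolated;          // may isolated processes look this service up?
    size_t len;                  // name length, in uint16_t units
    uint16_t name[0];            // UTF-16 service name, e.g. "media.player"
};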
bio_put_ref
void bio_put_ref(struct binder_io *bio, uint32_t handle)
{
struct flat_binder_object *obj;
if (handle)
obj = bio_alloc_obj(bio);
else
obj = bio_alloc(bio, sizeof(*obj));
if (!obj)
return;
obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
obj->type = BINDER_TYPE_HANDLE;
obj->handle = handle;
obj->cookie = 0;
}
Assemble a flat_binder_object: its type is set to BINDER_TYPE_HANDLE and its handle to the reference descriptor.
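flat_binder_object is the form in which a binder object travels inside transaction data; it is defined in the binder UAPI header roughly as follows (abridged):
struct flat_binder_object {
    __u32 type;                   // BINDER_TYPE_BINDER / BINDER_TYPE_HANDLE / ...
    __u32 flags;
    union {
        binder_uintptr_t binder;  // local object: pointer to its weak reference
        __u32 handle;             // remote object: reference descriptor
    };
    binder_uintptr_t cookie;      // extra data for local objects
};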
binder_send_reply
void binder_send_reply(struct binder_state *bs,
struct binder_io *reply,
binder_uintptr_t buffer_to_free,
int status)
{
struct {
uint32_t cmd_free;
binder_uintptr_t buffer;
uint32_t cmd_reply;
struct binder_transaction_data txn;
} __attribute__((packed)) data;
data.cmd_free = BC_FREE_BUFFER;
data.buffer = buffer_to_free;
data.cmd_reply = BC_REPLY;
data.txn.target.ptr = 0;
data.txn.cookie = 0;
data.txn.code = 0;
if (status) {
...
} else {
data.txn.flags = 0;
data.txn.data_size = reply->data - reply->data0;
data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
}
binder_write(bs, &data, sizeof(data));
}
The data is assembled into the struct data, which carries two commands, BC_FREE_BUFFER and BC_REPLY; binder_write is then called to send it to the driver.
int binder_write(struct binder_state *bs, void *data, size_t len)
{
struct binder_write_read bwr;
int res;
bwr.write_size = len;
bwr.write_consumed = 0;
bwr.write_buffer = (uintptr_t) data;
bwr.read_size = 0;
bwr.read_consumed = 0;
bwr.read_buffer = 0;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
if (res < 0) {
fprintf(stderr,"binder_write: ioctl failed (%s)n",
strerror(errno));
}
return res;
}
binder_write calls ioctl into the driver. Since write_size = len and read_size = 0, the driver only executes binder_thread_write.
Into the driver (kernel)
binder_thread_write
binder_write -> binder_ioctl -> binder_ioctl_write_read -> binder_thread_write
Commands: BC_FREE_BUFFER, BC_REPLY
static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
uint32_t cmd;
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
while (ptr < end && thread->return_error == BR_OK) {
if (get_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
...
switch (cmd) {
...
case BC_FREE_BUFFER: {
binder_uintptr_t data_ptr;
struct binder_buffer *buffer;
if (get_user(data_ptr, (binder_uintptr_t __user *)ptr))
return -EFAULT;
ptr += sizeof(binder_uintptr_t);
buffer = binder_buffer_lookup(proc, data_ptr);
...
if (buffer->transaction) {
buffer->transaction->buffer = NULL;
buffer->transaction = NULL;
}
if (buffer->async_transaction && buffer->target_node) {
BUG_ON(!buffer->target_node->has_async_transaction);
if (list_empty(&buffer->target_node->async_todo))
buffer->target_node->has_async_transaction = 0;
else
list_move_tail(buffer->target_node->async_todo.next, &thread->todo);
}
trace_binder_transaction_buffer_release(buffer);
binder_transaction_buffer_release(proc, buffer, NULL);
binder_free_buf(proc, buffer);
break;
}
case BC_TRANSACTION:
case BC_REPLY: {
struct binder_transaction_data tr;
// copy the data from user space
if (copy_from_user(&tr, ptr, sizeof(tr)))
return -EFAULT;
// advance the data pointer
ptr += sizeof(tr);
// process the data
binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
break;
}
...
}
*consumed = ptr - buffer;
}
return 0;
}
Looking at BC_REPLY: the user-space data is copied in, then binder_transaction is called to process it.
binder_transaction
binder_write -> binder_ioctl -> binder_ioctl_write_read -> binder_thread_write -> binder_transaction
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
struct binder_transaction *t;
struct binder_work *tcomplete;
binder_size_t *offp, *off_end;
binder_size_t off_min;
struct binder_proc *target_proc;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
struct list_head *target_list;
wait_queue_head_t *target_wait;
struct binder_transaction *in_reply_to = NULL;
struct binder_transaction_log_entry *e;
uint32_t return_error;
e = binder_transaction_log_add(&binder_transaction_log);
e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
e->from_proc = proc->pid;
e->from_thread = thread->pid;
e->target_handle = tr->target.handle;
e->data_size = tr->data_size;
e->offsets_size = tr->offsets_size;
// Part 1: obtain the peer's information
if (reply) {
// 1 Obtain the peer from the transaction stack.
// pop the transaction from the stack
in_reply_to = thread->transaction_stack;
if (in_reply_to == NULL) {
...
}
// restore the priority
binder_set_nice(in_reply_to->saved_priority);
if (in_reply_to->to_thread != thread) {
...
}
thread->transaction_stack = in_reply_to->to_parent;
// from the transaction, obtain the peer binder_thread, i.e. the original requester
target_thread = in_reply_to->from;
if (target_thread == NULL) {
...
}
if (target_thread->transaction_stack != in_reply_to) {
...
}
// obtain the peer's binder_proc
target_proc = target_thread->proc;
} else {
...
}
if (target_thread) {
e->to_thread = target_thread->pid;
target_list = &target_thread->todo;
target_wait = &target_thread->wait;
} else {
target_list = &target_proc->todo;
target_wait = &target_proc->wait;
}
e->to_proc = target_proc->pid;
// Part 2: create two work items, t and tcomplete, for the peer and for ourselves respectively.
t = kzalloc(sizeof(*t), GFP_KERNEL);
...
binder_stats_created(BINDER_STAT_TRANSACTION);
tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
...
binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);
t->debug_id = ++binder_last_id;
e->debug_id = t->debug_id;
...
if (!reply && !(tr->flags & TF_ONE_WAY))
t->from = thread;
else
t->from = NULL; // this is a reply, so this branch is taken
t->sender_euid = task_euid(proc->tsk);
t->to_proc = target_proc;
t->to_thread = target_thread;
t->code = tr->code;
t->flags = tr->flags;
t->priority = task_nice(current);
t->buffer = binder_alloc_buf(target_proc, tr->data_size,
tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
...
t->buffer->allow_user_free = 0;
t->buffer->debug_id = t->debug_id;
t->buffer->transaction = t;
t->buffer->target_node = target_node;
if (target_node)
binder_inc_node(target_node, 1, 0, NULL);
offp = (binder_size_t *)(t->buffer->data +
ALIGN(tr->data_size, sizeof(void *)));
if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
tr->data.ptr.buffer, tr->data_size)) {
...
}
if (copy_from_user(offp, (const void __user *)(uintptr_t)
tr->data.ptr.offsets, tr->offsets_size)) {
...
}
...
off_end = (void *)offp + tr->offsets_size;
off_min = 0;
for (; offp < off_end; offp++) {
struct flat_binder_object *fp;
...
fp = (struct flat_binder_object *)(t->buffer->data + *offp);
off_min = *offp + sizeof(struct flat_binder_object);
switch (fp->type) {
...
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
// look up the service reference by handle; it was added to ServiceManager's proc during addService
struct binder_ref *ref = binder_get_ref(proc, fp->handle);
...
if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
return_error = BR_FAILED_REPLY;
goto err_binder_get_ref_failed;
}
// ref->node->proc is the service being looked up (media), target_proc is the client, so they differ
if (ref->node->proc == target_proc) {
...
} else {
struct binder_ref *new_ref;
new_ref = binder_get_ref_for_node(target_proc, ref->node);
...
fp->handle = new_ref->desc;
binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
...
}
} break;
...
}
}
if (reply) {
binder_pop_transaction(target_thread, in_reply_to);
} else if (!(t->flags & TF_ONE_WAY)) {
...
} else {
...
}
// Part 3: the transaction is ready; put it on the right queue and wake up the peer.
t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, target_list);
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
list_add_tail(&tcomplete->entry, &thread->todo);
if (target_wait)
wake_up_interruptible(target_wait);
return;
...
}
Note first that this request is initiated by ServiceManager (SM), so the parameters proc and thread describe SM; the peer is the client process, which is why all the target_xxxx variables describe the client.
Two points deserve emphasis:
- Obtaining the peer's information: this is a reply that SM initiates after having received the client's request. The transaction the client sent is still saved on SM's transaction_stack (it was saved in the binder_thread_read code when the client woke SM up), and that transaction records the initiator's binder_thread, assigned on the requesting side in binder_transaction with t->from = thread. From that saved transaction, SM can therefore recover the peer's target_thread and target_proc.
- Further processing of the data taken from SM's user space, i.e. the BINDER_TYPE_HANDLE case: (1) binder_get_ref looks up, by the descriptor fp->handle obtained from SM's user space, SM's binder_ref to the requested service; (2) binder_get_ref_for_node then finds or creates, in the target proc (the client process), a reference to that same binder node together with a new descriptor, which is the target process's own reference to the service; (3) the newly generated descriptor is written back into fp->handle. A simplified sketch of binder_get_ref_for_node follows below.
- In short: the descriptor obtained in SM's user space is only good for locating the server's binder node; that node is then used to create the target (client) process's own reference to it.
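A minimal sketch of that descriptor translation, capturing the idea rather than the exact driver code; find_ref_by_node, next_free_desc and insert_ref are hypothetical helpers standing in for the rb-tree walks the real binder_get_ref_for_node performs:
static struct binder_ref *get_ref_for_node_sketch(struct binder_proc *target_proc,
                                                  struct binder_node *node)
{
    // does target_proc already hold a reference to this node?
    struct binder_ref *ref = find_ref_by_node(target_proc, node);   // hypothetical helper
    if (ref)
        return ref;
    // no: create a new reference owned by target_proc
    ref = kzalloc(sizeof(*ref), GFP_KERNEL);
    ref->proc = target_proc;
    ref->node = node;
    // descriptor 0 is reserved for the context manager (ServiceManager);
    // ordinary references get the smallest unused descriptor in this process
    ref->desc = (node == binder_context_mgr_node) ? 0 : next_free_desc(target_proc);  // hypothetical helper
    insert_ref(target_proc, ref);   // hypothetical helper: insert into refs_by_desc / refs_by_node
    return ref;
}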
Waking up the client
binder_thread_read(kernel)
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
int ret = 0;
int wait_for_proc_work;
if (*consumed == 0) {
if (put_user(BR_NOOP, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
}
... // woken up here
...
while (1) {
uint32_t cmd;
struct binder_transaction_data tr;
struct binder_work *w;
struct binder_transaction *t = NULL;
// fetch the pending work item
if (!list_empty(&thread->todo)) {
w = list_first_entry(&thread->todo, struct binder_work,
entry);
} else if (!list_empty(&proc->todo) && wait_for_proc_work) {
w = list_first_entry(&proc->todo, struct binder_work,
entry);
} else {
...
}
if (end - ptr < sizeof(tr) + 4)
break;
switch (w->type) {
case BINDER_WORK_TRANSACTION: {
t = container_of(w, struct binder_transaction, work);
} break;
....
}
if (!t)
continue;
// a transaction initiated as a reply does not set target_node, so the else branch is taken
if (t->buffer->target_node) {
...
} else {
tr.target.ptr = 0;
tr.cookie = 0;
cmd = BR_REPLY;
}
tr.code = t->code;
tr.flags = t->flags;
tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);
if (t->from) {
struct task_struct *sender = t->from->proc->tsk;
tr.sender_pid = task_tgid_nr_ns(sender,
task_active_pid_ns(current));
} else {
tr.sender_pid = 0;
}
// data size
tr.data_size = t->buffer->data_size;
// size of the object-offset array in the data (object count * sizeof(binder_size_t))
tr.offsets_size = t->buffer->offsets_size;
// this buffer is shared memory; adding the offset turns the address into a user-space virtual address, so user space can access it directly
tr.data.ptr.buffer = (binder_uintptr_t)(
(uintptr_t)t->buffer->data +
proc->user_buffer_offset);
tr.data.ptr.offsets = tr.data.ptr.buffer +
ALIGN(t->buffer->data_size,
sizeof(void *));
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
if (copy_to_user(ptr, &tr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);
trace_binder_transaction_received(t);
binder_stat_br(proc, thread, cmd);
...
list_del(&t->work.entry);
t->buffer->allow_user_free = 1;
if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
...
} else {
t->buffer->transaction = NULL;
kfree(t);
binder_stats_deleted(BINDER_STAT_TRANSACTION);
}
break;
}
done:
*consumed = ptr - buffer;
...
return 0;
}
1 Take out the pending work item; its type is BINDER_WORK_TRANSACTION, from which the binder_transaction t is obtained.
2 Move the data from t into tr (the struct used to cross into user space), with command BR_REPLY, and copy it to user space.
3 Update consumed to record how much data was produced.
Back in user space, two commands arrive: BR_NOOP and BR_REPLY. They are handled in turn, just as in the addService article; the interesting one here is BR_REPLY, because this time data is returned and has to be processed.
Processing the data returned by the peer (user)
waitForResponse
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break;
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;
cmd = (uint32_t)mIn.readInt32();
...
switch (cmd) {
...
case BR_REPLY:
{
binder_transaction_data tr;
err = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
if (err != NO_ERROR) goto finish;
if (reply) {
if ((tr.flags & TF_STATUS_CODE) == 0) {
reply->ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t),
freeBuffer, this);
} else {
...
}
} else {
...
}
}
goto finish;
...
}
}
...
return err;
}
ipcSetDataReference
void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
const binder_size_t* objects, size_t objectsCount, release_func relFunc, void* relCookie)
{
binder_size_t minOffset = 0;
freeDataNoInit();
mError = NO_ERROR;
mData = const_cast<uint8_t*>(data);
mDataSize = mDataCapacity = dataSize;
//ALOGI("setDataReference Setting data size of %p to %lu (pid=%d)", this, mDataSize, getpid());
mDataPos = 0;
ALOGV("setDataReference Setting data pos of %p to %zu", this, mDataPos);
mObjects = const_cast<binder_size_t*>(objects);
mObjectsSize = mObjectsCapacity = objectsCount;
mNextObjectHint = 0;
mOwner = relFunc;
mOwnerCookie = relCookie;
for (size_t i = 0; i < mObjectsSize; i++) {
binder_size_t offset = mObjects[i];
if (offset < minOffset) {
ALOGE("%s: bad object offset %" PRIu64 " < %" PRIu64 "n",
__func__, (uint64_t)offset, (uint64_t)minOffset);
mObjectsSize = 0;
break;
}
minOffset = offset + sizeof(flat_binder_object);
}
scanForFds();
}
The data is placed into the Parcel reply, and the call completes.
Back at the call that initiated getService
virtual sp<IBinder> checkService( const String16& name) const
{
Parcel data, reply;
// an int32 followed by a string (the string is "android.os.IServiceManager")
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
// the service name, i.e. "media.player"
data.writeString16(name);
remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
return reply.readStrongBinder();
}
As can be seen, once remote()->transact has finished, reply already holds the returned data; finally reply.readStrongBinder() is executed.
readStrongBinder
sp<IBinder> Parcel::readStrongBinder() const
{
sp<IBinder> val;
unflatten_binder(ProcessState::self(), *this, &val);
return val;
}
unflatten_binder
status_t unflatten_binder(const sp<ProcessState>& proc,
const Parcel& in, sp<IBinder>* out)
{
const flat_binder_object* flat = in.readObject(false);
if (flat) {
switch (flat->type) {
case BINDER_TYPE_BINDER:
*out = reinterpret_cast<IBinder*>(flat->cookie);
return finish_unflatten_binder(NULL, *flat, in);
// BINDER_TYPE_HANDLE: this branch is taken
case BINDER_TYPE_HANDLE:
*out = proc->getStrongProxyForHandle(flat->handle);
return finish_unflatten_binder(
static_cast<BpBinder*>(out->get()), *flat, in);
}
}
return BAD_TYPE;
}
getStrongProxyForHandle
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
// look the handle up in the vector mHandleToObject; if it is not there, insert and return an empty placeholder entry
handle_entry* e = lookupHandleLocked(handle);
if (e != NULL) {
IBinder* b = e->binder; // NULL here
if (b == NULL || !e->refs->attemptIncWeak(this)) {
if (handle == 0) { // not ServiceManager, handle is non-zero, so this branch is skipped
...
}
// create a BpBinder that records the handle
b = new BpBinder(handle);
// store the BpBinder and its weak refs into the handle_entry
e->binder = b;
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
result.force_set(b);
e->refs->decWeak(this);
}
}
return result;
}
The main action: create a BpBinder that records the handle value.
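For reference, lookupHandleLocked simply grows the mHandleToObject vector as needed and returns the slot for this handle; paraphrased from ProcessState.cpp (treat it as a sketch):
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N = mHandleToObject.size();
    if (N <= (size_t)handle) {
        // grow the vector with empty placeholder entries up to this handle
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle + 1 - N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}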
Back up in getMediaPlayerService
/*static*/const sp<IMediaPlayerService>
IMediaDeathNotifier::getMediaPlayerService()
{
Mutex::Autolock _l(sServiceLock);
if (sMediaPlayerService == 0) {
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;
do {
// bpbinder
binder = sm->getService(String16("media.player"));
if (binder != 0) {
break;
}
usleep(500000); // 0.5 s
} while (true);
if (sDeathNotifier == NULL) {
sDeathNotifier = new DeathNotifier();
}
binder->linkToDeath(sDeathNotifier);
sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
}
return sMediaPlayerService;
}
The interface_cast() template
interface_cast<IMediaPlayerService>(binder)
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
return INTERFACE::asInterface(obj);
}
// frameworks/av/media/libmedia/IMediaPlayerService.cpp
IMPLEMENT_META_INTERFACE(MediaPlayerService, "android.media.IMediaPlayerService");
The same template machinery is used as for ServiceManager; see 《Binder机制 – ServiceManager的获取》 for the details. The expanded code is given directly here:
android::sp<IMediaPlayerService> IMediaPlayerService::asInterface(const android::sp<android::IBinder>& obj)
{
android::sp<IMediaPlayerService> intr;
if (obj != NULL) {
intr = static_cast<IMediaPlayerService*>(
obj->queryLocalInterface(
IMediaPlayerService::descriptor).get());
if (intr == NULL) {
intr = new BpMediaPlayerService(obj);
}
}
return intr;
}
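From here on, sMediaPlayerService is a BpMediaPlayerService wrapping the BpBinder(handle) obtained above, so every call on it goes through BpBinder::transact() with that handle, following exactly the same pattern as checkService() earlier. As a rough illustration of the proxy's shape (paraphrased, not verbatim from IMediaPlayerService.cpp; the argument list is elided and the body of create() is simplified):
class BpMediaPlayerService : public BpInterface<IMediaPlayerService> {
public:
    BpMediaPlayerService(const sp<IBinder>& impl)
        : BpInterface<IMediaPlayerService>(impl) {}
    // simplified remote call: marshal arguments, transact through the handle, unmarshal the reply
    virtual sp<IMediaPlayer> create(/* ... */) {
        Parcel data, reply;
        data.writeInterfaceToken(IMediaPlayerService::getInterfaceDescriptor());
        // ... write the call's arguments ...
        remote()->transact(CREATE, data, &reply);   // remote() is the BpBinder(handle)
        return interface_cast<IMediaPlayer>(reply.readStrongBinder());
    }
};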