Preface
In our earlier articles on animation and rendering we answered how iOS animations are rendered and how visual effects work. We came away impressed by how inventive the system designers were when building those frameworks, and convinced that understanding a technology's underlying principles matters a great deal for anyone working in that area. So we decided to keep digging into iOS internals.
The previous article explored the following GCD APIs:
- Dispatch group (dispatch_group)
  - dispatch_group_create: creates a dispatch group
  - dispatch_group_async: submits a task to the group
  - dispatch_group_notify: notifies once all tasks in the group have finished
  - dispatch_group_wait: waits (up to a timeout) for the group's tasks to finish
  - dispatch_group_enter: marks a task as entering the group
  - dispatch_group_leave: marks a task as leaving the group
- Event source (dispatch_source)
  - dispatch_source_create: creates a source
  - dispatch_source_set_event_handler: sets the source's event handler
  - dispatch_source_merge_data: merges data into the source's event
  - dispatch_source_get_data: reads the source event's data
  - dispatch_resume: resumes a suspended source
  - dispatch_suspend: suspends a source
Building on that, this article continues exploring the low-level principles of GCD multithreading, this time focusing on thread safety and locks.
I. Safety Hazards in Multithreading
1. The hazard of multiple threads accessing the same memory
A resource may be shared by several threads, meaning several threads may access the same block of memory; when that happens, data corruption and data-safety problems arise easily.
2. The solution: thread synchronization
The solution is to use thread synchronization (synchronization means coordinating execution so that work happens in a predetermined order; the most common approach is a thread lock).
Thread synchronization options
iOS offers the following thread synchronization options:
- OSSpinLock
- os_unfair_lock
- pthread_mutex
- dispatch_semaphore
- dispatch_queue(DISPATCH_QUEUE_SERIAL)
- NSLock
- NSRecursiveLock
- NSCondition
- NSConditionLock
- @synchronized
3. Practical examples: selling tickets, saving and withdrawing money
Two problem cases: selling tickets, and saving/withdrawing money. The code is shown below.
@interface BaseDemo: NSObject
- (void)moneyTest;
- (void)ticketTest;
#pragma mark - Exposed for subclasses to override
- (void)__saveMoney;
- (void)__drawMoney;
- (void)__saleTicket;
@end
@interface BaseDemo()
@property (assign, nonatomic) int money;
@property (assign, nonatomic) int ticketsCount;
@end
@implementation BaseDemo
/**
 Save/withdraw money demo
 */
- (void)moneyTest
{
self.money = 100;
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_async(queue, ^{
for (int i = 0; i < 10; i++) {
[self __saveMoney];
}
});
dispatch_async(queue, ^{
for (int i = 0; i < 10; i++) {
[self __drawMoney];
}
});
}
/**
 Save money
 */
- (void)__saveMoney
{
int oldMoney = self.money;
usleep(200000); // pause ~0.2 s (the original sleep(.2) truncates to sleep(0))
oldMoney += 50;
self.money = oldMoney;
NSLog(@"Saved 50, %d yuan left - %@", oldMoney, [NSThread currentThread]);
}
/**
 Withdraw money
 */
- (void)__drawMoney
{
int oldMoney = self.money;
usleep(200000); // pause ~0.2 s (the original sleep(.2) truncates to sleep(0))
oldMoney -= 20;
self.money = oldMoney;
NSLog(@"Withdrew 20, %d yuan left - %@", oldMoney, [NSThread currentThread]);
}
/**
 Sell one ticket
 */
- (void)__saleTicket
{
int oldTicketsCount = self.ticketsCount;
usleep(200000); // pause ~0.2 s (the original sleep(.2) truncates to sleep(0))
oldTicketsCount--;
self.ticketsCount = oldTicketsCount;
NSLog(@"%d tickets left - %@", oldTicketsCount, [NSThread currentThread]);
}
/**
 Ticket-selling demo
 */
- (void)ticketTest
{
self.ticketsCount = 15;
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_async(queue, ^{
for (int i = 0; i < 5; i++) {
[self __saleTicket];
}
});
dispatch_async(queue, ^{
for (int i = 0; i < 5; i++) {
[self __saleTicket];
}
});
dispatch_async(queue, ^{
for (int i = 0; i < 5; i++) {
[self __saleTicket];
}
});
}
@end
@interface ViewController ()
@property (strong, nonatomic) BaseDemo *demo;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
BaseDemo *demo = [[BaseDemo alloc] init];
[demo ticketTest];
[demo moneyTest];
}
@end
II. An Introduction to Thread Locks
1. OSSpinLock
OSSpinLock is a "spin lock": a thread waiting for the lock busy-waits, continuously occupying the CPU.
Using OSSpinLock to fix the example problems above:
@interface OSSpinLockDemo: BaseDemo
@end
#import "OSSpinLockDemo.h"
#import <libkern/OSAtomic.h>
@interface OSSpinLockDemo()
@property (assign, nonatomic) OSSpinLock moneyLock;
// @property (assign, nonatomic) OSSpinLock ticketLock;
@end
@implementation OSSpinLockDemo
- (instancetype)init
{
if (self = [super init]) {
self.moneyLock = OS_SPINLOCK_INIT;
// self.ticketLock = OS_SPINLOCK_INIT;
}
return self;
}
- (void)__drawMoney
{
OSSpinLockLock(&_moneyLock);
[super __drawMoney];
OSSpinLockUnlock(&_moneyLock);
}
- (void)__saveMoney
{
OSSpinLockLock(&_moneyLock);
[super __saveMoney];
OSSpinLockUnlock(&_moneyLock);
}
- (void)__saleTicket
{
// A static variable works here too; no property needed
static OSSpinLock ticketLock = OS_SPINLOCK_INIT;
OSSpinLockLock(&ticketLock);
[super __saleTicket];
OSSpinLockUnlock(&ticketLock);
}
@end
1.1 A note on static
The ticketLock above can also be declared static and used as a local static variable.
#define OS_SPINLOCK_INIT 0
This only works because OS_SPINLOCK_INIT is simply 0: a static variable must be initialized with a value known at compile time, not with the return value of a function call.
// Initializing it with a function's return value like this is a compile error
static OSSpinLock ticketLock = [NSString stringWithFormat:@"haha"];
1.2 The problem with OSSpinLock
OSSpinLock is no longer considered safe: it can run into priority inversion.
Multithreading ultimately means the system scheduling back and forth between threads, and different threads may be assigned different priorities. If a low-priority thread takes the lock first and starts running its code, a high-priority thread that arrives later spins outside, waiting for the lock. Because it has higher priority, the CPU keeps scheduling the spinning thread and starves the low-priority one; the low-priority thread then cannot make progress, so it never unlocks, and the two threads end up waiting on each other. This priority-inversion deadlock is essentially why Apple deprecated OSSpinLock.
Workaround
Replace OSSpinLockLock with the non-blocking OSSpinLockTry: the guarded code only runs (and the lock is only taken) when the lock is not already held, which avoids spinning forever while another thread holds it.
// Using the ticket-selling method as the example; the other locked methods can be changed the same way
- (void)__saleTicket
{
if (OSSpinLockTry(&_ticketLock)) {
[super __saleTicket];
OSSpinLockUnlock(&_ticketLock);
}
}
1.3 Analyzing OSSpinLock in assembly
We can set a breakpoint to see what actually happens once the lock is taken.
Put a breakpoint on the locking call in the ticket-selling code, switch to the assembly view, and step through the calls.
In assembly, OSSpinLockLock internally calls _OSSpinLockLockSlow.
The key part: _OSSpinLockLockSlow performs a comparison, and after reaching the breakpoint it jumps back to 0x7fff5e73326f and executes the same code again.
So from the assembly-level execution logic we can see that OSSpinLock keeps looping over the same check, and execution only proceeds once the lock has been released.
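To make the busy-wait idea concrete, here is a toy spin lock built on a C11 atomic flag. This is only a conceptual sketch of the spinning behavior observed above, not Apple's implementation:
#import <stdatomic.h>

static atomic_flag _toySpin = ATOMIC_FLAG_INIT;

static void toy_spin_lock(void) {
    // Loop (burning CPU) until the flag was previously clear, i.e. the lock was free
    while (atomic_flag_test_and_set_explicit(&_toySpin, memory_order_acquire)) {
        // busy-wait: the thread never sleeps here
    }
}

static void toy_spin_unlock(void) {
    atomic_flag_clear_explicit(&_toySpin, memory_order_release);
}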
1.4 Lock level
The OSSpinLock spin lock is referred to here as a high-level lock (High-level lock), because a waiting thread keeps spinning in a loop instead of sleeping.
2. os_unfair_lock
Apple now provides os_unfair_lock as a replacement for the unsafe OSSpinLock; it is available starting from iOS 10.
Judging from the underlying calls, a thread waiting for an os_unfair_lock is put to sleep rather than busy-waiting.
The modified example code:
#import "BaseDemo.h"
@interface OSUnfairLockDemo: BaseDemo
@end
#import "OSUnfairLockDemo.h"
#import <os/lock.h>
@interface OSUnfairLockDemo()
@property (assign, nonatomic) os_unfair_lock moneyLock;
@property (assign, nonatomic) os_unfair_lock ticketLock;
@end
@implementation OSUnfairLockDemo
- (instancetype)init
{
if (self = [super init]) {
self.moneyLock = OS_UNFAIR_LOCK_INIT;
self.ticketLock = OS_UNFAIR_LOCK_INIT;
}
return self;
}
- (void)__saleTicket
{
os_unfair_lock_lock(&_ticketLock);
[super __saleTicket];
os_unfair_lock_unlock(&_ticketLock);
}
- (void)__saveMoney
{
os_unfair_lock_lock(&_moneyLock);
[super __saveMoney];
os_unfair_lock_unlock(&_moneyLock);
}
- (void)__drawMoney
{
os_unfair_lock_lock(&_moneyLock);
[super __drawMoney];
os_unfair_lock_unlock(&_moneyLock);
}
@end
If os_unfair_lock_unlock is never called, every thread ends up stuck in os_unfair_lock_lock, asleep and unable to continue executing code; this situation is a deadlock.
2.1 Analyzing it in assembly
Again, we set a breakpoint to see what happens after locking.
os_unfair_lock_lock is called first, which then calls os_unfair_lock_lock_slow, and os_unfair_lock_lock_slow in turn executes __ulock_wait.
The key part: once execution reaches the syscall, it drops out of the breakpoint and stops executing code, i.e. the thread goes to sleep.
So from the assembly-level execution logic we can see that once an os_unfair_lock is already held, a thread that tries to lock it goes straight to sleep and only continues after the lock is released and the thread is woken up. In that sense os_unfair_lock is a mutex.
The syscall here can be understood as a system-level call that puts the thread to sleep: the thread is parked and stops executing code until it is woken up.
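Like OSSpinLock, os_unfair_lock also has a non-blocking variant, os_unfair_lock_trylock. A sketch mirroring the earlier OSSpinLockTry example (with the same caveat that skipping the work when the lock is busy changes the demo's behavior):
- (void)__saleTicket
{
    // Returns true only if the lock was acquired; the thread never sleeps waiting for it
    if (os_unfair_lock_trylock(&_ticketLock)) {
        [super __saleTicket];
        os_unfair_lock_unlock(&_ticketLock);
    }
}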
2.2 Lock level
If we open os_unfair_lock's header, lock.h, the comments describe os_unfair_lock as a low-level lock (Low-level lock): a thread that finds the lock already held goes to sleep instead of spinning.
3. pthread_mutex
3.1 Mutex
A mutex is a "mutual-exclusion lock": threads waiting for it are put to sleep.
Usage:
@interface MutexDemo: BaseDemo
@end
#import "MutexDemo.h"
#import <pthread.h>
@interface MutexDemo()
@property (assign, nonatomic) pthread_mutex_t ticketMutex;
@property (assign, nonatomic) pthread_mutex_t moneyMutex;
@end
@implementation MutexDemo
- (void)__initMutex:(pthread_mutex_t *)mutex
{
// Static initialization
// (only valid when the value is assigned where the variable is defined)
// pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
// // Initialize the attributes
// pthread_mutexattr_t attr;
// pthread_mutexattr_init(&attr);
// pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_DEFAULT);
// // Initialize the lock
// pthread_mutex_init(mutex, &attr);
// Passing NULL for the attributes defaults to PTHREAD_MUTEX_DEFAULT
pthread_mutex_init(mutex, NULL);
}
- (instancetype)init
{
if (self = [super init]) {
[self __initMutex:&_ticketMutex];
[self __initMutex:&_moneyMutex];
}
return self;
}
- (void)__saleTicket
{
pthread_mutex_lock(&_ticketMutex);
[super __saleTicket];
pthread_mutex_unlock(&_ticketMutex);
}
- (void)__saveMoney
{
pthread_mutex_lock(&_moneyMutex);
[super __saveMoney];
pthread_mutex_unlock(&_moneyMutex);
}
- (void)__drawMoney
{
pthread_mutex_lock(&_moneyMutex);
[super __drawMoney];
pthread_mutex_unlock(&_moneyMutex);
}
- (void)dealloc
{
// Called when the object is deallocated
pthread_mutex_destroy(&_moneyMutex);
pthread_mutex_destroy(&_ticketMutex);
}
On Apple platforms, pthread_mutex_t is an opaque struct type (struct _opaque_pthread_mutex_t); the pthread APIs take a pointer to it, i.e. a pthread_mutex_t *.
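pthread_mutex likewise offers a non-blocking variant, pthread_mutex_trylock, which returns 0 only when the lock was acquired. A minimal sketch in the same style as the earlier try-lock examples:
- (void)__saleTicket
{
    // 0 means the lock was acquired without blocking
    if (pthread_mutex_trylock(&_ticketMutex) == 0) {
        [super __saleTicket];
        pthread_mutex_unlock(&_ticketMutex);
    }
}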
3.2 Recursive lock
When the attribute type is set to PTHREAD_MUTEX_RECURSIVE, the mutex can be used as a recursive lock.
A recursive lock allows the same thread to lock the same lock repeatedly; different threads still exclude each other, so the recursion only applies within a single thread.
- (void)__initMutex:(pthread_mutex_t *)mutex
{
// Initialize the attributes
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
// Initialize the lock
pthread_mutex_init(mutex, &attr);
}
- (void)otherTest
{
pthread_mutex_lock(&_mutex);
NSLog(@"%s", __func__);
static int count = 0;
if (count < 10) {
count++;
[self otherTest];
}
pthread_mutex_unlock(&_mutex);
}
3.3 Locking based on a condition
We can use a condition to coordinate when threads lock and unlock relative to each other, as in the following example:
@interface MutexDemo()
@property (assign, nonatomic) pthread_mutex_t mutex;
@property (assign, nonatomic) pthread_cond_t cond;
@property (strong, nonatomic) NSMutableArray *data;
@end
@implementation MutexDemo
- (instancetype)init
{
if (self = [super init]) {
// Initialize the mutex
pthread_mutex_init(&_mutex, NULL);
// Initialize the condition
pthread_cond_init(&_cond, NULL);
self.data = [NSMutableArray array];
}
return self;
}
- (void)otherTest
{
[[[NSThread alloc] initWithTarget:self selector:@selector(__remove) object:nil] start];
[[[NSThread alloc] initWithTarget:self selector:@selector(__add) object:nil] start];
}
// Thread 1
// Removes an element from the array
- (void)__remove
{
pthread_mutex_lock(&_mutex);
NSLog(@"__remove - begin");
// If the array is empty, wait on the condition until woken up
// While waiting, the mutex is released so other threads can run; it is re-acquired on wakeup
if (self.data.count == 0) {
pthread_cond_wait(&_cond, &_mutex);
}
[self.data removeLastObject];
NSLog(@"Removed an element");
pthread_mutex_unlock(&_mutex);
}
// Thread 2
// Adds an element to the array
- (void)__add
{
pthread_mutex_lock(&_mutex);
sleep(1);
[self.data addObject:@"Test"];
NSLog(@"Added an element");
// Once an element has been added, signal the condition so the waiting remover can re-acquire the mutex and continue
// Signal (notifies one waiter)
pthread_cond_signal(&_cond);
// Broadcast (notifies all waiters)
// pthread_cond_broadcast(&_cond);
pthread_mutex_unlock(&_mutex);
}
- (void)dealloc
{
pthread_mutex_destroy(&_mutex);
pthread_cond_destroy(&_cond);
}
@end
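One refinement worth noting (not part of the original demo): POSIX allows pthread_cond_wait to return spuriously, so production code normally re-checks the predicate in a while loop rather than an if:
// Re-check the condition after every wakeup to guard against spurious wakeups
while (self.data.count == 0) {
    pthread_cond_wait(&_cond, &_mutex);
}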
3.4 Analyzing it in assembly
Again, we set a breakpoint to see what happens after locking.
pthread_mutex_lock runs first, which calls pthread_mutex_firstfit_lock_slow, then pthread_mutex_firstfit_lock_wait, and finally __psynch_mutexwait.
The key part: inside __psynch_mutexwait, once execution reaches the syscall it drops out of the breakpoint and stops executing code, i.e. the thread goes to sleep.
So pthread_mutex behaves like os_unfair_lock: a thread that has to wait for the lock is put to sleep.
3.5 Lock level
Like os_unfair_lock, pthread_mutex is a low-level lock (Low-level lock).
4. NSLock
NSLock is a wrapper around an ordinary (non-recursive) mutex.
NSLock conforms to the <NSLocking> protocol, which declares the following two methods:
@protocol NSLocking
- (void)lock;
- (void)unlock;
@end
Other commonly used methods:
// Attempts to acquire the lock without blocking; returns NO if it is already held
- (BOOL)tryLock;
// Waits until the given date to acquire the lock; returns NO if the lock still cannot be acquired by then
- (BOOL)lockBeforeDate:(NSDate *)limit;
The demo below shows the basic usage:
@interface NSLockDemo: BaseDemo
@end
@interface NSLockDemo()
@property (strong, nonatomic) NSLock *ticketLock;
@property (strong, nonatomic) NSLock *moneyLock;
@end
@implementation NSLockDemo
- (instancetype)init
{
if (self = [super init]) {
self.ticketLock = [[NSLock alloc] init];
self.moneyLock = [[NSLock alloc] init];
}
return self;
}
- (void)__saleTicket
{
[self.ticketLock lock];
[super __saleTicket];
[self.ticketLock unlock];
}
- (void)__saveMoney
{
[self.moneyLock lock];
[super __saveMoney];
[self.moneyLock unlock];
}
- (void)__drawMoney
{
[self.moneyLock lock];
[super __drawMoney];
[self.moneyLock unlock];
}
@end
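The demo above only uses lock/unlock; here is a hedged sketch of the non-blocking APIs mentioned earlier, applied to the same ticket method (the 1-second timeout is arbitrary):
- (void)__saleTicket
{
    // Wait at most 1 second for the lock; returns NO if it could not be acquired in time
    if ([self.ticketLock lockBeforeDate:[NSDate dateWithTimeIntervalSinceNow:1.0]]) {
        [super __saleTicket];
        [self.ticketLock unlock];
    } else {
        NSLog(@"failed to acquire the ticket lock within 1 second");
    }
}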
4.1 Looking at the underlying implementation
NSLock is not open source, so we can use GNUstep Base to get a sense of the concrete implementation.
In NSLock.m, the +initialize method sets up the pthread_mutex attributes used for the different lock types, which confirms that NSLock is an object-oriented wrapper around pthread_mutex.
@implementation NSLock
+ (id) allocWithZone: (NSZone*)z
{
if (self == baseLockClass && YES == traceLocks)
{
return class_createInstance(tracedLockClass, 0);
}
return class_createInstance(self, 0);
}
+ (void) initialize
{
static BOOL beenHere = NO;
if (beenHere == NO)
{
beenHere = YES;
/* Initialise attributes for the different types of mutex.
* We do it once, since attributes can be shared between multiple
* mutexes.
* If we had a pthread_mutexattr_t instance for each mutex, we would
* either have to store it as an ivar of our NSLock (or similar), or
* we would potentially leak instances as we couldn't destroy them
* when destroying the NSLock. I don't know if any implementation
* of pthreads actually allocates memory when you call the
* pthread_mutexattr_init function, but they are allowed to do so
* (and deallocate the memory in pthread_mutexattr_destroy).
*/
pthread_mutexattr_init(&attr_normal);
pthread_mutexattr_settype(&attr_normal, PTHREAD_MUTEX_NORMAL);
pthread_mutexattr_init(&attr_reporting);
pthread_mutexattr_settype(&attr_reporting, PTHREAD_MUTEX_ERRORCHECK);
pthread_mutexattr_init(&attr_recursive);
pthread_mutexattr_settype(&attr_recursive, PTHREAD_MUTEX_RECURSIVE);
/* To emulate OSX behavior, we need to be able both to detect deadlocks
* (so we can log them), and also hang the thread when one occurs.
* the simple way to do that is to set up a locked mutex we can
* force a deadlock on.
*/
pthread_mutex_init(&deadlock, &attr_normal);
pthread_mutex_lock(&deadlock);
baseConditionClass = [NSCondition class];
baseConditionLockClass = [NSConditionLock class];
baseLockClass = [NSLock class];
baseRecursiveLockClass = [NSRecursiveLock class];
tracedConditionClass = [GSTracedCondition class];
tracedConditionLockClass = [GSTracedConditionLock class];
tracedLockClass = [GSTracedLock class];
tracedRecursiveLockClass = [GSTracedRecursiveLock class];
untracedConditionClass = [GSUntracedCondition class];
untracedConditionLockClass = [GSUntracedConditionLock class];
untracedLockClass = [GSUntracedLock class];
untracedRecursiveLockClass = [GSUntracedRecursiveLock class];
}
}
5. NSRecursiveLock
NSRecursiveLock wraps a recursive mutex; its API is essentially the same as NSLock's.
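A minimal sketch mirroring the earlier pthread_mutex recursive example; the recursiveLock property here is assumed to be an NSRecursiveLock created in init:
- (void)otherTest
{
    [self.recursiveLock lock];
    NSLog(@"%s", __func__);

    static int count = 0;
    if (count < 10) {
        count++;
        // Re-entering on the same thread takes the same lock again without deadlocking
        [self otherTest];
    }

    [self.recursiveLock unlock];
}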
6. NSCondition
NSCondition wraps a mutex together with a condition variable (cond).
Usage:
@interface NSConditionDemo()
@property (strong, nonatomic) NSCondition *condition;
@property (strong, nonatomic) NSMutableArray *data;
@end
@implementation NSConditionDemo
- (instancetype)init
{
if (self = [super init]) {
self.condition = [[NSCondition alloc] init];
self.data = [NSMutableArray array];
}
return self;
}
- (void)otherTest
{
[[[NSThread alloc] initWithTarget:self selector:@selector(__remove) object:nil] start];
[[[NSThread alloc] initWithTarget:self selector:@selector(__add) object:nil] start];
}
// Thread 1
// Removes an element from the array
- (void)__remove
{
[self.condition lock];
NSLog(@"__remove - begin");
if (self.data.count == 0) {
// Wait
[self.condition wait];
}
[self.data removeLastObject];
NSLog(@"Removed an element");
[self.condition unlock];
}
// Thread 2
// Adds an element to the array
- (void)__add
{
[self.condition lock];
sleep(1);
[self.data addObject:@"Test"];
NSLog(@"Added an element");
// Signal (wakes one waiter)
[self.condition signal];
// Broadcast (wakes all waiters)
// [self.condition broadcast];
[self.condition unlock];
}
@end
6.1 Looking at the underlying implementation
NSCondition also conforms to the NSLocking protocol, which tells us it wraps locking logic internally.
@interface NSCondition : NSObject <NSLocking> {
@private
void *_priv;
}
- (void)wait;
- (BOOL)waitUntilDate:(NSDate *)limit;
- (void)signal;
- (void)broadcast;
@property (nullable, copy) NSString *name API_AVAILABLE(macos(10.5), ios(2.0), watchos(2.0), tvos(9.0));
@end
In GNUstep Base we can also see that its init method wraps a pthread_mutex_t together with a pthread_cond_t:
@implementation NSCondition
+ (id) allocWithZone: (NSZone*)z
{
if (self == baseConditionClass && YES == traceLocks)
{
return class_createInstance(tracedConditionClass, 0);
}
return class_createInstance(self, 0);
}
+ (void) initialize
{
[NSLock class]; // Ensure mutex attributes are set up.
}
- (id) init
{
if (nil != (self = [super init]))
{
if (0 != pthread_cond_init(&_condition, NULL))
{
DESTROY(self);
}
else if (0 != pthread_mutex_init(&_mutex, &attr_reporting))
{
pthread_cond_destroy(&_condition);
DESTROY(self);
}
}
return self;
}
6.2 NSConditionLock
NSConditionLock builds on NSCondition and lets you attach a concrete condition value.
By choosing condition values you can make threads depend on each other and control their execution order, as in the following example:
@interface NSConditionLockDemo()
@property (strong, nonatomic) NSConditionLock *conditionLock;
@end
@implementation NSConditionLockDemo
- (instancetype)init
{
// A condition value can be set at creation time
// If none is set, the default is 0
if (self = [super init]) {
self.conditionLock = [[NSConditionLock alloc] initWithCondition:1];
}
return self;
}
- (void)otherTest
{
[[[NSThread alloc] initWithTarget:self selector:@selector(__one) object:nil] start];
[[[NSThread alloc] initWithTarget:self selector:@selector(__two) object:nil] start];
[[[NSThread alloc] initWithTarget:self selector:@selector(__three) object:nil] start];
}
- (void)__one
{
// No condition required: it locks as soon as the lock is free
[self.conditionLock lock];
NSLog(@"__one");
sleep(1);
[self.conditionLock unlockWithCondition:2];
}
- (void)__two
{
// Locks only when the current condition value matches
[self.conditionLock lockWhenCondition:2];
NSLog(@"__two");
sleep(1);
[self.conditionLock unlockWithCondition:3];
}
- (void)__three
{
[self.conditionLock lockWhenCondition:3];
NSLog(@"__three");
[self.conditionLock unlock];
}
@end
// The logs appear in order: __one, __two, __three
7. dispatch_queue_t
We can also achieve thread synchronization simply by using a GCD serial queue; for details, refer to the example code in the earlier GCD articles.
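A minimal sketch of that approach (the SerialQueueDemo class and the ticketQueue name are illustrative; it is assumed to subclass BaseDemo like the other demos):
@interface SerialQueueDemo : BaseDemo
@end

@interface SerialQueueDemo()
@property (strong, nonatomic) dispatch_queue_t ticketQueue;
@end

@implementation SerialQueueDemo
- (instancetype)init
{
    if (self = [super init]) {
        // A serial queue runs one block at a time, which serializes access to the tickets
        self.ticketQueue = dispatch_queue_create("ticketQueue", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)__saleTicket
{
    dispatch_sync(self.ticketQueue, ^{
        [super __saleTicket];
    });
}
@end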
8. dispatch_semaphore
A semaphore (dispatch_semaphore) carries an initial value that can be used to cap the maximum number of threads accessing a resource concurrently.
Example code:
@interface SemaphoreDemo()
@property (strong, nonatomic) dispatch_semaphore_t semaphore;
@property (strong, nonatomic) dispatch_semaphore_t ticketSemaphore;
@property (strong, nonatomic) dispatch_semaphore_t moneySemaphore;
@end
@implementation SemaphoreDemo
- (instancetype)init
{
if (self = [super init]) {
// Initialize the semaphores
// Allow at most 5 threads at a time: 5 threads may access the shared resource concurrently; any additional thread has to wait
self.semaphore = dispatch_semaphore_create(5);
// Allow at most 1 thread at a time
self.ticketSemaphore = dispatch_semaphore_create(1);
self.moneySemaphore = dispatch_semaphore_create(1);
}
return self;
}
- (void)__drawMoney
{
dispatch_semaphore_wait(self.moneySemaphore, DISPATCH_TIME_FOREVER);
[super __drawMoney];
dispatch_semaphore_signal(self.moneySemaphore);
}
- (void)__saveMoney
{
dispatch_semaphore_wait(self.moneySemaphore, DISPATCH_TIME_FOREVER);
[super __saveMoney];
dispatch_semaphore_signal(self.moneySemaphore);
}
- (void)__saleTicket
{
// If the semaphore's value is > 0, decrement it by 1 and continue executing
// If the value is <= 0, the current thread sleeps and waits (until the value becomes > 0 again)
dispatch_semaphore_wait(self.ticketSemaphore, DISPATCH_TIME_FOREVER);
[super __saleTicket];
// Increment the semaphore's value by 1
dispatch_semaphore_signal(self.ticketSemaphore);
}
@end
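The semaphore property created with an initial value of 5 is not actually used by the overrides above; here is a hedged sketch of how it could cap concurrency (the otherTest method is illustrative):
- (void)otherTest
{
    for (int i = 0; i < 20; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            // Only 5 blocks can pass the wait at any one time, because the semaphore starts at 5
            dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);
            NSLog(@"task %d - %@", i, [NSThread currentThread]);
            sleep(2);
            dispatch_semaphore_signal(self.semaphore);
        });
    }
}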
9. @synchronized
@synchronized is a wrapper around a recursive mutex.
Example code:
@interface SynchronizedDemo: BaseDemo
@end
@implementation SynchronizedDemo
- (void)__drawMoney
{
// @synchronized only provides mutual exclusion when the same object is used as the lock token
@synchronized([self class]) {
[super __drawMoney];
}
}
- (void)__saveMoney
{
@synchronized([self class]) { // objc_sync_enter
[super __saveMoney];
} // objc_sync_exit
}
- (void)__saleTicket
{
static NSObject *lock;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
lock = [[NSObject alloc] init];
});
@synchronized(lock) {
[super __saleTicket];
}
}
@end
9.1 Source-code analysis
Switching to assembly while the program runs shows that @synchronized ultimately calls objc_sync_enter.
We can analyze the corresponding implementation in objc-sync.mm from the objc4 source:
int objc_sync_enter(id obj)
{
int result = OBJC_SYNC_SUCCESS;
if (obj) {
SyncData* data = id2data(obj, ACQUIRE);
ASSERT(data);
data->mutex.lock();
} else {
// @synchronized(nil) does nothing
if (DebugNilSync) {
_objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
}
objc_sync_nil();
}
return result;
}
We can see that the obj passed in is used to look up the corresponding SyncData:
typedef struct alignas(CacheLineSize) SyncData {
struct SyncData* nextData;
DisguisedPtr<objc_object> object;
int32_t threadCount; // number of THREADS using this block
recursive_mutex_t mutex;
} SyncData;
Looking at the real type behind SyncData's recursive_mutex_t member, we find a recursive lock inside:
using recursive_mutex_t = recursive_mutex_tt<LOCKDEBUG>;
class recursive_mutex_tt : nocopy_t {
os_unfair_recursive_lock mLock; // the recursive lock
public:
constexpr recursive_mutex_tt() : mLock(OS_UNFAIR_RECURSIVE_LOCK_INIT) {
lockdebug_remember_recursive_mutex(this);
}
constexpr recursive_mutex_tt(__unused const fork_unsafe_lock_t unsafe)
: mLock(OS_UNFAIR_RECURSIVE_LOCK_INIT)
{ }
void lock()
{
lockdebug_recursive_mutex_lock(this);
os_unfair_recursive_lock_lock(&mLock);
}
void unlock()
{
lockdebug_recursive_mutex_unlock(this);
os_unfair_recursive_lock_unlock(&mLock);
}
void forceReset()
{
lockdebug_recursive_mutex_unlock(this);
bzero(&mLock, sizeof(mLock));
mLock = os_unfair_recursive_lock OS_UNFAIR_RECURSIVE_LOCK_INIT;
}
bool tryLock()
{
if (os_unfair_recursive_lock_trylock(&mLock)) {
lockdebug_recursive_mutex_lock(this);
return true;
}
return false;
}
bool tryUnlock()
{
if (os_unfair_recursive_lock_tryunlock4objc(&mLock)) {
lockdebug_recursive_mutex_unlock(this);
return true;
}
return false;
}
void assertLocked() {
lockdebug_recursive_mutex_assert_locked(this);
}
void assertUnlocked() {
lockdebug_recursive_mutex_assert_unlocked(this);
}
};
Next, let's look at id2data, the function that fetches the SyncData: it looks the data up for obj via LIST_FOR_OBJ:
static SyncData* id2data(id object, enum usage why)
{
spinlock_t *lockp = &LOCK_FOR_OBJ(object);
SyncData **listp = &LIST_FOR_OBJ(object);
SyncData* result = NULL;
#if SUPPORT_DIRECT_THREAD_KEYS
// Check per-thread single-entry fast cache for matching object
bool fastCacheOccupied = NO;
SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
if (data) {
fastCacheOccupied = YES;
if (data->object == object) {
// Found a match in fast cache.
uintptr_t lockCount;
result = data;
lockCount = (uintptr_t)tls_get_direct(SYNC_COUNT_DIRECT_KEY);
if (result->threadCount <= 0 || lockCount <= 0) {
_objc_fatal("id2data fastcache is buggy");
}
switch(why) {
case ACQUIRE: {
lockCount++;
tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
break;
}
case RELEASE:
lockCount--;
tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
if (lockCount == 0) {
// remove from fast cache
tls_set_direct(SYNC_DATA_DIRECT_KEY, NULL);
// atomic because may collide with concurrent ACQUIRE
OSAtomicDecrement32Barrier(&result->threadCount);
}
break;
case CHECK:
// do nothing
break;
}
return result;
}
}
#endif
// Check per-thread cache of already-owned locks for matching object
SyncCache *cache = fetch_cache(NO);
if (cache) {
unsigned int i;
for (i = 0; i < cache->used; i++) {
SyncCacheItem *item = &cache->list[i];
if (item->data->object != object) continue;
// Found a match.
result = item->data;
if (result->threadCount <= 0 || item->lockCount <= 0) {
_objc_fatal("id2data cache is buggy");
}
switch(why) {
case ACQUIRE:
item->lockCount++;
break;
case RELEASE:
item->lockCount--;
if (item->lockCount == 0) {
// remove from per-thread cache
cache->list[i] = cache->list[--cache->used];
// atomic because may collide with concurrent ACQUIRE
OSAtomicDecrement32Barrier(&result->threadCount);
}
break;
case CHECK:
// do nothing
break;
}
return result;
}
}
// Thread cache didn't find anything.
// Walk in-use list looking for matching object
// Spinlock prevents multiple threads from creating multiple
// locks for the same new object.
// We could keep the nodes in some hash table if we find that there are
// more than 20 or so distinct locks active, but we don't do that now.
lockp->lock();
{
SyncData* p;
SyncData* firstUnused = NULL;
for (p = *listp; p != NULL; p = p->nextData) {
if ( p->object == object ) {
result = p;
// atomic because may collide with concurrent RELEASE
OSAtomicIncrement32Barrier(&result->threadCount);
goto done;
}
if ( (firstUnused == NULL) && (p->threadCount == 0) )
firstUnused = p;
}
// no SyncData currently associated with object
if ( (why == RELEASE) || (why == CHECK) )
goto done;
// an unused one was found, use it
if ( firstUnused != NULL ) {
result = firstUnused;
result->object = (objc_object *)object;
result->threadCount = 1;
goto done;
}
}
// Allocate a new SyncData and add to list.
// XXX allocating memory with a global lock held is bad practice,
// might be worth releasing the lock, allocating, and searching again.
// But since we never free these guys we won't be stuck in allocation very often.
posix_memalign((void **)&result, alignof(SyncData), sizeof(SyncData));
result->object = (objc_object *)object;
result->threadCount = 1;
new (&result->mutex) recursive_mutex_t(fork_unsafe_lock);
result->nextData = *listp;
*listp = result;
done:
lockp->unlock();
if (result) {
// Only new ACQUIRE should get here.
// All RELEASE and CHECK and recursive ACQUIRE are
// handled by the per-thread caches above.
if (why == RELEASE) {
// Probably some thread is incorrectly exiting
// while the object is held by another thread.
return nil;
}
if (why != ACQUIRE) _objc_fatal("id2data is buggy");
if (result->object != object) _objc_fatal("id2data is buggy");
#if SUPPORT_DIRECT_THREAD_KEYS
if (!fastCacheOccupied) {
// Save in fast thread cache
tls_set_direct(SYNC_DATA_DIRECT_KEY, result);
tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)1);
} else
#endif
{
// Save in thread cache
if (!cache) cache = fetch_cache(YES);
cache->list[cache->used].data = result;
cache->list[cache->used].lockCount = 1;
cache->used++;
}
}
return result;
}
LIST_FOR_OBJ is backed by a hash map: the incoming obj is used as the key, and the corresponding lock data is the value.
#define LIST_FOR_OBJ(obj) sDataLists[obj].data // hash map
static StripedMap<SyncList> sDataLists;
// The map uses the incoming object as the key and the corresponding lock as the value
This source analysis also confirms that the lock @synchronized uses internally is a recursive lock.
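Because the underlying lock is recursive, nesting @synchronized on the same object from the same thread does not deadlock; a small sketch to illustrate:
- (void)otherTest
{
    @synchronized([self class]) {
        NSLog(@"outer");
        @synchronized([self class]) {
            // Same thread, same object: the recursive lock is simply taken again
            NSLog(@"inner - no deadlock");
        }
    }
}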
III. Comparing the Locks
1. Performance ranking
The locks below are ranked from highest to lowest performance:
- os_unfair_lock
- OSSpinLock
- dispatch_semaphore
- pthread_mutex
- dispatch_queue(DISPATCH_QUEUE_SERIAL)
- NSLock
- NSCondition
- pthread_mutex(recursive)
- NSRecursiveLock
- NSConditionLock
- @synchronized
The recommended picks:
- dispatch_semaphore
- pthread_mutex
2. Mutexes vs. spin locks
2.1 When a spin lock is the better fit
- Threads are expected to wait for the lock only very briefly
- The locked code (the critical section) is called frequently, but contention is rare
- CPU resources are not under pressure
- The machine has a multi-core processor
2.2 When a mutex is the better fit
- Threads are expected to wait for the lock for a relatively long time
- Single-core processors (to minimize CPU consumption)
- The critical section performs I/O (I/O work ties up CPU resources)
- The critical-section code is complex or loops heavily
- Contention for the critical section is very intense
Articles in This Series
1. Prerequisites
- 01 - Exploring iOS Internals | Overview
- 02 - Exploring iOS Internals | The LLVM Compiler Project (Clang, SwiftC, the optimizer, LLVM)
- 03 - Exploring iOS Internals | LLDB
- 04 - Exploring iOS Internals | ARM64 Assembly
2. Exploring iOS Internals Through Objective-C
- 05 - Exploring iOS Internals | The Essence of OC
- 06 - Exploring iOS Internals | The Essence of OC Objects
- 07 - Exploring iOS Internals | Kinds of OC Objects (instance, class, metaclass), the isa Pointer, superclass, Method Calls, the Underlying Nature of Class
- 08 - Exploring iOS Internals | The Underlying Structure of Category, How Classes and Categories Are Loaded at App Launch, load and initialize, Associated Objects
- 09 - Exploring iOS Internals | KVO
- 10 - Exploring iOS Internals | KVC
- 11 - Exploring iOS Internals | The Essence of Blocks (data type and memory layout, variable capture, kinds of blocks, memory management, block qualifiers, retain cycles)
- 12 - Exploring iOS Internals | Runtime 1 (isa in detail, the structure of class, the method cache cache_t)
- 13 - Exploring iOS Internals | Runtime 2 (message sending and forwarding, dynamic method resolution, the essence of super)
- 14 - Exploring iOS Internals | Runtime 3 (practical applications of the runtime)
- 15 - Exploring iOS Internals | RunLoop (the two kinds of RunLoopMode; Source0, Source1, Timer, and Observer in a RunLoopMode)
- 16 - Exploring iOS Internals | Applications of RunLoop
- 17 - Exploring iOS Internals | The Underlying Principles of Multithreading (GCD source analysis 1: the main queue, serial and concurrent queues, the global concurrent queues)
- 18 - Exploring iOS Internals | Multithreading (GCD source analysis 1: dispatch_get_global_queue and dispatch_(a)sync, singletons, thread deadlocks)
- 19 - Exploring iOS Internals | Multithreading (GCD source analysis 2: the barrier functions dispatch_barrier_(a)sync, the semaphore dispatch_semaphore)
- 20 - Exploring iOS Internals | Multithreading (GCD source analysis 3: the dispatch group dispatch_group, the event source dispatch_source)
- 21 - Exploring iOS Internals | Multithreading (thread locks: spin locks, mutexes, recursive locks)
- 22 - Exploring iOS Internals | Multithreading (atomic, GCD timers, NSTimer, CADisplayLink)
- 23 - Exploring iOS Internals | Memory Management (Mach-O files, Tagged Pointer, object memory management, copy, reference counting, weak pointers, autorelease)
3. Exploring iOS Internals Through Swift
Articles covering Swift fundamentals and their underlying principles, including functions, enums, optionals, structs, classes, closures, properties, methods, Swift polymorphism, String, Array, Dictionary, reference counting, and metadata:
- Swift 5 Core Syntax 1: Basics
- Swift 5 Core Syntax 2: Object-Oriented Syntax 1
- Swift 5 Core Syntax 2: Object-Oriented Syntax 2
- Swift 5 Core Syntax 3: Other Common Syntax
- Swift 5 in Practice: Common Techniques
Other Internals Topics
1. General computing principles
- 01 - Computer Fundamentals | How Computer Graphics Rendering Works
- 02 - Computer Fundamentals | Mobile Screen Imaging and Stuttering
2. iOS topics
- 01 - iOS Internals | iOS Rendering Frameworks and Layer Rendering
- 02 - iOS Internals | How iOS Animations Are Rendered
- 03 - iOS Internals | iOS Off-Screen Rendering
- 04 - iOS Internals | Stuttering Caused by CPU/GPU Load: Causes and Solutions
3. Web-app topics
- 01 - Rendering in the Web and RN-like Front-End Stacks
4. Cross-platform development topics
- 01 - How Flutter Renders Pages
5. Interim summary: Native, WebApp, and cross-platform approaches compared
- 01 - Performance Comparison of Native, WebApp, and Cross-Platform Development
6. Android and HarmonyOS page rendering
- 01 - How Android Renders Pages
- 02 - How HarmonyOS Renders Pages (to be written)
7. Mini-program page rendering
- 01 - How Mini-Program Frameworks Render Pages