Understanding Core Data, Part 2: Thread Safety and Synchronization in iOS 5 and Later

I've seen a few videos/threads that say it's possible to create 'child' MOCs -- MOCs that use other MOCs as their persistent stores. Useful, for example, when you're threading your application and want a single master MOC that can save/roll back the changes the child threads create. (From what I understand, a MOC and its managed objects MUST all be used on the same thread.)

The question is, how do I create a child MOC? I can't track down the WWDC videos I was watching that introduced them, and everything I've seen talks about how to use them ONCE they're made. I can easily alloc a new MOC, but how do I set its persistent store to be another MOC? The class reference doesn't show any methods that do that!

2 Answers

Create a new MOC whose synchronization is entirely under your control. This is the same as calling init, and the same behavior as before parent/child relationships existed:

NSManagedObjectContext *moc = [[NSManagedObjectContext alloc]
       initWithConcurrencyType:NSConfinementConcurrencyType];

You parent a MOC to another MOC by setting its property:

moc.parentContext = someOtherMocThatIsNowMyParent;

Here, the child chooses the parent. I'm sure my kids wish they were NSManagedObjectContexts. A parent context must be of either NSPrivateQueueConcurrencyType or NSMainQueueConcurrencyType.

You can create a MOC that is "bound" to a private queue:

NSManagedObjectContext *moc = [[NSManagedObjectContext alloc]
       initWithConcurrencyType:NSPrivateQueueConcurrencyType];

which means you should only access it via the performBlock or performBlockAndWait API. You can call those methods from any thread as they will ensure proper serialization of the code in the block. For example...

NSManagedObjectContext *moc = [[NSManagedObjectContext alloc]
       initWithConcurrencyType:NSPrivateQueueConcurrencyType];
moc.parentContext = someOtherMocThatIsNowMyParent;
[moc performBlock:^{
    // All code running in this block will be automatically serialized
    // with respect to all other performBlock or performBlockAndWait
    // calls for this same MOC.
    // Access "moc" to your heart's content inside these blocks of code.
}];

The difference between performBlock and performBlockAndWait is that performBlock takes a block of code and schedules it with Grand Central Dispatch to be executed asynchronously at some point in the future, on some unspecified thread. The method call returns immediately.

performBlockAndWait will do some magic synchronization with all the other performBlock calls; when all the blocks submitted before this one are done, this block will execute. The calling thread blocks until the call has completed.
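For example, the difference is easiest to see side by side. A minimal sketch (the "Person" entity name here is an assumption for illustration, not part of the original model):

// Asynchronous: this call returns immediately; the block runs later
// on the context's private queue.
[moc performBlock:^{
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Person"];
    NSError *error = nil;
    NSArray *results = [moc executeFetchRequest:request error:&error];
    NSLog(@"Fetched %lu objects", (unsigned long)results.count);
}];
// Code here may well run BEFORE the block above does.

// Synchronous: the calling thread blocks until the block has run.
__block NSUInteger count = 0;
[moc performBlockAndWait:^{
    NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Person"];
    count = [moc countForFetchRequest:request error:NULL];
}];
// "count" is guaranteed to be set by the time execution reaches here.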

You can also create a MOC that is "bound" to the main thread, just as with private concurrency:

NSManagedObjectContext *moc = [[NSManagedObjectContext alloc]
       initWithConcurrencyType:NSMainQueueConcurrencyType];
moc.parentContext = someOtherMocThatIsNowMyParent;
[moc performBlock:^{
    // All code running in this block will be automatically serialized
    // with respect to all other performBlock or performBlockAndWait
    // calls for this same MOC.  Furthermore, it will be serialized with
    // respect to the main thread as well, so this code will run in the
    // main thread -- which is important if you want to do UI work or other
    // stuff that requires the main thread.
}];

which means you should only access it directly if you know you are on the main thread, or via the performBlock or performBlockAndWait API.

Now, you can use the "main concurrency" MOC either via the performBlock methods, or directly if you know you are already running in the main thread.
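Putting it all together: a common arrangement, and the one the question describes, is a main-queue "master" MOC attached to the persistent store coordinator, with private-queue children doing the background work. A minimal sketch, assuming an existing NSPersistentStoreCoordinator named psc and a "Person" entity with a "name" attribute (both are assumptions for illustration):

// "Master" context: bound to the main queue, attached to the store.
NSManagedObjectContext *mainMOC = [[NSManagedObjectContext alloc]
       initWithConcurrencyType:NSMainQueueConcurrencyType];
mainMOC.persistentStoreCoordinator = psc; // assumed to exist already

// Child context: does its work on a private background queue.
NSManagedObjectContext *childMOC = [[NSManagedObjectContext alloc]
       initWithConcurrencyType:NSPrivateQueueConcurrencyType];
childMOC.parentContext = mainMOC;

[childMOC performBlock:^{
    // Heavy lifting happens off the main thread.
    NSManagedObject *person =
        [NSEntityDescription insertNewObjectForEntityForName:@"Person"
                                      inManagedObjectContext:childMOC];
    [person setValue:@"Ron" forKey:@"name"];

    // Saving the child pushes its changes UP to mainMOC -- in memory only.
    NSError *error = nil;
    if (![childMOC save:&error]) { NSLog(@"Child save failed: %@", error); }

    // Nothing reaches disk until a context attached to the persistent
    // store coordinator saves, so save the parent on its own queue too.
    [mainMOC performBlock:^{
        NSError *parentError = nil;
        if (![mainMOC save:&parentError]) { NSLog(@"Parent save failed: %@", parentError); }
    }];
}];

Note that saving a child pushes its changes only one level up the chain; each ancestor must save in turn before anything is written to the store.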

Thank you for such an (incredibly) thorough answer. –   RonLugge  Sep 4 '12 at 22:15
 
Hello, I know this is really old, but my question isn't big enough to warrant a new one: should I call save on the context inside the performBlock or outside? Thank you! –   ItsASecret  Oct 12 '13 at 16:15
 
Inside performBlock –   Jody Hagins  Oct 16 '13 at 1:57
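To make that concrete, a minimal sketch of saving inside the block (using the private-queue moc from the answer above):

[moc performBlock:^{
    // ... insert, update, or delete managed objects here ...
    NSError *error = nil;
    if (![moc save:&error]) {
        NSLog(@"Save failed: %@", error);
    }
}];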

Initialize the child MOC, then:

[_childMOC performBlockAndWait:^{
    [_childMOC setParentContext:parentMOC];
}];
 
So... I can ACCESS MOCs across threads, I'm just not allowed to operate (insert, delete, save, rollback) across them? –   RonLugge  Sep 4 '12 at 21:46
You can call performBlock and performBlockAndWait from other threads. Not much else. –   Jody Hagins  Sep 4 '12 at 22:11 
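In other words, performBlock and performBlockAndWait are the only safe way to touch a queue-bound MOC from an arbitrary thread. A minimal sketch (the "Person" fetch is again just an illustrative assumption):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Legal from any thread: performBlock serializes access for us.
    [moc performBlock:^{
        NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Person"];
        NSError *error = nil;
        NSArray *results = [moc executeFetchRequest:request error:&error];
        // Use "results" only inside this block.
    }];

    // NOT legal from this thread: touching moc or its objects directly,
    // e.g. [moc save:NULL]; -- that would violate queue confinement.
});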