Atomic Access in Threads

 !Atomic Access (an atomic action cannot stop in the middle; no side effects of an atomic action are visible until the action is complete):
         !Reads and writes are atomic for reference variables and for most primitive variables (all types except long and double), and for all variables declared volatile (including long and double).
         !Atomic actions cannot be interleaved, so they can be used without fear of thread interference. However, this does not eliminate all need to synchronize atomic actions, because memory consistency errors are still possible.
           !Using volatile variables reduces the risk of memory consistency errors, because any write to a volatile variable establishes a happens-before relationship with subsequent reads of that same variable. This means that changes to a volatile variable are always visible to other threads. What's more, it also means that when a thread reads a volatile variable, it sees not just the latest change to the volatile, but also the side effects of the code that led up to the change.
           !Using simple atomic variable access is more efficient than accessing these variables through synchronized code, but requires more care by the programmer to avoid memory consistency errors. Whether the extra effort is worthwhile depends on the size and complexity of the application.
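The happens-before guarantee described above can be sketched in Java. This is a minimal illustration (class and field names are mine, not from the source): the volatile write to ready publishes the earlier plain write to payload.

```
public class VolatileVisibility {
    static int payload = 0;                 // plain, non-volatile field
    static volatile boolean ready = false;  // volatile guard

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            payload = 42;  // side effect before the volatile write
            ready = true;  // volatile write: happens-before any read that sees true
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }            // spin until the volatile read sees true
            System.out.println(payload);  // guaranteed to print 42, not 0
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
    }
}
```

Without the volatile keyword on ready, the reader could legally spin forever or print a stale 0.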
!Liveness (a concurrent application's ability to execute in a timely manner is known as its liveness)
  most common kinds of liveness problems:
   !!deadlock: describes a situation where two or more threads are blocked forever, each waiting for the other (like two overly polite friends, each bound by a strict rule of courtesy to wait for the other to finish bowing first).
   !!starvation and livelock:
      !!!Starvation: describes a situation where a thread is unable to gain regular access to shared resources and so is unable to make progress, because "greedy" threads monopolize those resources.
      !!!Livelock: threads are not blocked, but are too busy responding to each other to make progress (like two people meeting in a corridor, each repeatedly stepping left and then right to let the other pass).
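Deadlock arises when two threads acquire the same pair of locks in opposite orders, forming a circular wait. One standard remedy can be sketched as follows (class and field names are illustrative): both threads take the locks in one agreed global order, so the cycle cannot form.

```
import java.util.concurrent.locks.ReentrantLock;

public class LockOrdering {
    static final ReentrantLock first = new ReentrantLock();
    static final ReentrantLock second = new ReentrantLock();
    static int progress = 0;

    static void doWork() {
        // Both threads acquire in the same order (first, then second).
        // If one thread took them as (second, first), each thread could end
        // up holding one lock while waiting forever for the other: deadlock.
        first.lock();
        try {
            second.lock();
            try {
                progress++;  // safely guarded by both locks
            } finally {
                second.unlock();
            }
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(LockOrdering::doWork);
        Thread b = new Thread(LockOrdering::doWork);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(progress);  // both threads completed: prints 2
    }
}
```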
!Guarded Blocks (threads often have to coordinate their actions):
  !!the most common coordination idiom is the guarded block: the block begins by polling a condition that must be true before it can proceed. Rather than spinning on the condition (which wastes CPU), the waiting thread invokes Object.wait inside a loop, and another thread invokes Object.notifyAll after changing the condition.
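A minimal guarded-block sketch in Java (the class name is illustrative): the consumer waits inside a loop until the guard condition holds, and the producer changes the condition and wakes waiters with notifyAll.

```
public class GuardedBox {
    private Integer value = null;  // guard condition: value != null

    public synchronized void put(int v) {
        value = v;
        notifyAll();  // wake any thread blocked in take()
    }

    public synchronized int take() throws InterruptedException {
        while (value == null) {  // always re-check the guard after waking
            wait();              // releases the lock while waiting
        }
        int v = value;
        value = null;
        return v;
    }

    public static void main(String[] args) throws InterruptedException {
        GuardedBox box = new GuardedBox();
        Thread producer = new Thread(() -> box.put(7));
        producer.start();
        System.out.println(box.take());  // blocks until put(7) runs: prints 7
        producer.join();
    }
}
```

The loop around wait() matters: a thread can wake spuriously, or another consumer may have consumed the value first, so the guard must be re-tested before proceeding.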
!Immutable Objects (an object is immutable if its state cannot change after it is constructed; relying on them is a sound strategy for creating simple, reliable code.)
 !!particularly useful in concurrent applications: they cannot be corrupted by thread interference or observed in an inconsistent state. The cost of creating new objects in place of updating existing ones is often offset by:
  !!!decreased overhead due to garbage collection.
  !!!the elimination of code needed to protect mutable objects from corruption.
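A minimal immutable-class sketch (the class name is illustrative): the class is final, all fields are private and final, and there are no setters, so an instance can be shared freely between threads without any synchronization.

```
public final class ImmutablePoint {
    private final int x;
    private final int y;

    public ImmutablePoint(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Mutators" return a new instance instead of changing this one.
    public ImmutablePoint translate(int dx, int dy) {
        return new ImmutablePoint(x + dx, y + dy);
    }
}
```

Because translate never modifies the receiver, a thread holding a reference to an ImmutablePoint can never observe it in a half-updated state.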
!High Level Concurrency Objects
 !!Lock Objects: Synchronized code relies on a simple kind of reentrant lock. This kind of lock is easy to use, but has many limitations. More sophisticated locking idioms are supported by the java.util.concurrent.locks package. We won't examine this package in detail, but instead will focus on its most basic interface, Lock. (The biggest advantage of Lock objects over implicit locks is their ability to back out of an attempt to acquire a lock.)
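The "back out" ability can be sketched with tryLock (class and method names are illustrative): unlike entering a synchronized block, tryLock returns immediately instead of blocking, so a thread that cannot get both locks can release what it holds and retry later, sidestepping deadlock.

```
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {
    // Attempt to take both locks; on failure, release anything held and back out.
    public static boolean acquireBoth(Lock a, Lock b) {
        if (!a.tryLock()) {
            return false;   // could not get the first lock: back out
        }
        if (!b.tryLock()) {
            a.unlock();     // got a but not b: release a and back out
            return false;
        }
        return true;        // caller now holds both locks
    }

    public static void main(String[] args) {
        Lock a = new ReentrantLock();
        Lock b = new ReentrantLock();
        if (acquireBoth(a, b)) {
            System.out.println("acquired both");  // uncontended case: succeeds
            b.unlock();
            a.unlock();
        }
    }
}
```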
 !!Executors:(In all of the previous examples, there's a close connection between the task being done by a new thread, as defined by its Runnable object, and the thread itself, as defined by a Thread object. This works well for small applications, but in large-scale applications, it makes sense to separate thread management and creation from the rest of the application. Objects that encapsulate these functions are known as executors. The following subsections describe executors in detail.)
  !!!Executor Interfaces:
    !!!!Executor
    !!!!ExecutorService
    !!!!ScheduledExecutorService
  !!!Thread Pools: most of the executor implementations in java.util.concurrent use thread pools, which consist of worker threads. Reusing worker threads minimizes the overhead of thread creation; a common kind is the fixed thread pool (created via Executors.newFixedThreadPool).
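The separation described above can be sketched as follows: the task is just a lambda handed to an ExecutorService, which manages its worker threads itself (pool size and task are illustrative).

```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);  // 2 worker threads
        // submit returns a Future; get() blocks until the result is ready.
        Future<Integer> sum = pool.submit(() -> 1 + 2 + 3);
        System.out.println(sum.get());  // prints 6
        pool.shutdown();  // accept no new tasks; workers exit after queued work
    }
}
```

Note that the application never constructs a Thread directly: thread creation and reuse are entirely the executor's concern.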
!Atomic Variables: the java.util.concurrent.atomic package defines classes (AtomicInteger, AtomicLong, AtomicReference, ...) that support lock-free, thread-safe operations on single variables, such as atomic increment and compare-and-set.
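A minimal sketch with AtomicInteger (the class name below is illustrative): incrementAndGet is a single atomic read-modify-write, so concurrent increments are never lost, unlike a plain count++ on an int field.

```
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    static final AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                count.incrementAndGet();  // atomic; no synchronized needed
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(count.get());  // always prints 2000
    }
}
```

With a plain int field, the two unsynchronized read-increment-write sequences could interleave and the final value could be less than 2000.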
     
atomic_thread_fence is a function in C++ that establishes memory-ordering constraints in a multi-threaded environment. It is used to prevent unwanted reordering of memory operations by the compiler or the CPU, which could otherwise lead to data races or other synchronization issues. The commonly used memory order parameters are:

- memory_order_acquire: an acquire fence; loads before the fence cannot be reordered with loads and stores after it.
- memory_order_release: a release fence; loads and stores before the fence cannot be reordered with stores after it.
- memory_order_acq_rel: combines the effects of memory_order_acquire and memory_order_release.
- memory_order_seq_cst: an acquire and release fence that additionally participates in the single total order of all sequentially consistent operations.

Here is an example of a use case for atomic_thread_fence:

```
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> x{0};
std::atomic<int> y{0};

void write_x_then_y() {
    x.store(1, std::memory_order_relaxed);
    // Release fence: the store to x above cannot be reordered past the
    // store to y below.
    std::atomic_thread_fence(std::memory_order_release);
    y.store(1, std::memory_order_relaxed);
}

void read_y_then_x() {
    // Spin until the store to y becomes visible.
    while (y.load(std::memory_order_relaxed) != 1) { }
    // Acquire fence: pairs with the release fence in the writer, so the
    // earlier store to x is now guaranteed to be visible as well.
    std::atomic_thread_fence(std::memory_order_acquire);
    assert(x.load(std::memory_order_relaxed) == 1);  // never fires
}

int main() {
    std::thread t1(write_x_then_y);
    std::thread t2(read_y_then_x);
    t1.join();
    t2.join();
    return 0;
}
```

In this example, t1 writes x and then y with relaxed stores, while t2 reads them in the opposite order. Without the fences, t2 could observe y == 1 while still seeing x == 0, because relaxed operations on different variables give no ordering guarantees. The release fence in the writer, paired with the acquire fence in the reader, ensures that once t2 has seen y == 1, the earlier store to x is visible too.