Concurrent Programming, Part 5

How the synchronized keyword works

Review

Differences between fair and unfair locks (both the acquire and tryAcquire methods differ)

  • Difference 1: the unfair lock first tries a bare CAS to grab the lock; only if that fails does it call the unfair version of acquire(), whereas the fair lock calls the fair version of acquire() straight away

  • Once the unfair lock's initial CAS has failed, both versions end up executing tryAcquire, where:

    • If c != 0 (the lock is already held by someone), the subsequent logic is the same for both (the reentrancy check)

    • Difference 2: if c == 0 (the lock is free), the fair version executes one extra method, hasQueuedPredecessors()

      • The fair lock uses hasQueuedPredecessors() to check whether any thread is queued; if someone is, it gives up the fast path and parks (the check behaves slightly differently depending on whether the queue has been initialized yet)

        a. If the current thread is the first waiter in the queue, it spins, retrying the acquisition

        b. Otherwise it enqueues itself and parks

      • The unfair lock does not examine the queue and performs no enqueue here; it goes straight for the CAS

    • If acquisition still fails at this point, execution enters acquireQueued(addWaiter(Node.EXCLUSIVE), arg), and from there fair and unfair locks behave identically (see the sketch below)
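  • A condensed Java sketch of the two differences (paraphrasing JDK 8's ReentrantLock.FairSync and NonfairSync; the real JDK splits this across two Sync subclasses, so treat this as an illustration rather than the actual source):

    import java.util.concurrent.locks.AbstractQueuedSynchronizer;

    class FairVsNonfairSketch {

        static class Sync extends AbstractQueuedSynchronizer {
            final boolean fair;
            Sync(boolean fair) { this.fair = fair; }

            void lock() {
                // Difference 1: the unfair lock barges with a bare CAS first.
                if (!fair && compareAndSetState(0, 1))
                    setExclusiveOwnerThread(Thread.currentThread());
                else
                    acquire(1); // both versions fall into AQS acquire -> tryAcquire
            }

            @Override
            protected boolean tryAcquire(int acquires) {
                Thread current = Thread.currentThread();
                int c = getState();
                if (c == 0) {                            // the lock is free
                    // Difference 2: only the fair lock checks the queue first.
                    if ((!fair || !hasQueuedPredecessors())
                            && compareAndSetState(0, acquires)) {
                        setExclusiveOwnerThread(current);
                        return true;
                    }
                } else if (current == getExclusiveOwnerThread()) {
                    setState(c + acquires);              // reentry: same for both
                    return true;
                }
                return false; // caller proceeds to acquireQueued(addWaiter(...), arg)
            }
        }
    }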

synchronized is an unfair lock

  • When several threads contend for a resource, the printed order of lock acquisition may come out reversed. That still looks like an order -- does that make it a fair lock?
  • No: synchronized is an unfair lock. Why?
  • In fact both fair and unfair locks can print in any order, because the CPU schedules threads in time slices. The earlier test code printed in order only because it executed Thread.sleep(2000), which serialized the threads. So when is a fair lock actually needed?
    • When blocked threads must wake up in a definite order, i.e. the second thread's correct execution depends on the first thread's result (a small demo follows).
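  • A minimal demo of that use case (illustrative only, not from the original notes): with new ReentrantLock(true), blocked threads are granted the lock in FIFO order, which matters when thread N+1 depends on thread N's result.

    import java.util.concurrent.locks.ReentrantLock;

    public class FairWakeupDemo {
        public static void main(String[] args) throws InterruptedException {
            ReentrantLock lock = new ReentrantLock(true); // fair
            for (int i = 0; i < 5; i++) {
                final int id = i;
                new Thread(() -> {
                    lock.lock();
                    try {
                        System.out.println("thread " + id + " got the lock");
                    } finally {
                        lock.unlock();
                    }
                }).start();
                Thread.sleep(100); // serialize startup so the wait queue forms in order
            }
        }
    }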

The synchronized primitives

  • First run the Java program to produce a class file, then use the disassembly command javap -c Test1 > test.txt to turn the class file into JVM-level "assembly" instructions; this assembly is not machine assembly, only something the JVM itself understands

  • synchronized is compiled into a monitorenter instruction paired with a monitorexit instruction

  • When an exception is thrown the lock is released too: a monitorexit is executed on the exception path (see the example below)
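  • For example (an illustrative class, not from the original notes), compiling the code below and running javap -c Test1 shows one monitorenter and two monitorexit instructions, the second on the exception path (exact offsets vary by compiler):

    public class Test1 {
        private static final Object lock = new Object();

        public static void doWork() {
            synchronized (lock) {
                System.out.println("in critical section");
            }
            // javap -c Test1 prints, roughly:
            //   getstatic     // push the lock object
            //   dup
            //   astore_1
            //   monitorenter  // acquire the monitor
            //   ...println...
            //   aload_1
            //   monitorexit   // normal release
            //   goto <after>
            //   aload_1
            //   monitorexit   // exception path: release before athrow
            //   athrow
        }
    }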

  • monitorenter is implemented in bytecodeInterpreter.cpp

    • CASE(_monitorenter): {
          // (the case first locates a free BasicObjectLock* -- the Lock Record
          // slot 'entry' -- in the current frame; the body is walked through
          // under "lock record" below)

BasicObjectLock

  • When the interpreter executes synchronized, a Lock Record (BasicObjectLock) in the thread's private stack is used; once lightweight-locked, the object's mark word stores a 62-bit pointer to that Lock Record (see the layout sketch below)
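  • The 64-bit mark word states referenced throughout this walkthrough (a simplified sketch of the HotSpot layout; unused padding bits are omitted):

    // unlocked:     | identity hash : 31             | age : 4 | 0 | 01 |
    // biased:       | thread id : 54     | epoch : 2 | age : 4 | 1 | 01 |
    // lightweight:  | pointer to Lock Record : 62              |    00 |
    // heavyweight:  | pointer to ObjectMonitor : 62            |    10 |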

lock record

  • A Lock Record lives in the thread's private stack and contains:

    • _displaced_header: saves a copy of the object's mark word, while the mark word itself is overwritten with a pointer to this Lock Record

    • obj: a reference to the lock object

    • Excerpt from bytecodeInterpreter.cpp:

    // Associate the lock object (lockee) with this Lock Record's obj field
    entry->set_obj(lockee);
    int success = false;
    uintptr_t epoch_mask_in_place = (uintptr_t)markOopDesc::epoch_mask_in_place;
    // mark word
    markOop mark = lockee->mark();
    intptr_t hash = (intptr_t) markOopDesc::no_hash;
    
      // Is the mark word in biased mode (bias pattern set)?
      if (mark->has_bias_pattern()) {
          uintptr_t thread_ident;
          uintptr_t anticipated_bias_locking_value;
          thread_ident = (uintptr_t)istate->thread();
          // Combine the klass prototype header with the current thread id and
          // XOR against the mark word (masking out the age bits). A result of
          // 0 means the object is already biased towards this thread -- e.g.
          // the same thread re-entering, where its id matches the one stored
          // in the header. All of this is extremely cheap bit arithmetic.
          anticipated_bias_locking_value =
              (((uintptr_t)lockee->klass()->prototype_header() | thread_ident) ^ (uintptr_t)mark) &
              ~((uintptr_t) markOopDesc::age_mask_in_place);
      
          // 1. Is the object already biased towards the current thread?
          if  (anticipated_bias_locking_value == 0) {
              // already biased towards this thread, nothing to do
              if (PrintBiasedLockingStatistics) {
                  (* BiasedLocking::biased_lock_entry_count_addr())++;
              }
              // Biased towards self: mark the acquisition as successful
              success = true;
          }
          // 2. Biasing has been disabled for this klass -- try to revoke this
          // object's bias (reset the header to the prototype)
          else if ((anticipated_bias_locking_value & markOopDesc::biased_lock_mask_in_place) != 0) {
              // try revoke bias
              markOop header = lockee->klass()->prototype_header();
              if (hash != markOopDesc::no_hash) {
                  header = header->copy_set_hash(hash);
              }
              if (Atomic::cmpxchg_ptr(header, lockee->mark_addr(), mark) == mark) {
                  if (PrintBiasedLockingStatistics)
                      (*BiasedLocking::revoked_lock_entry_count_addr())++;
              }
          }
          // 3. The epoch bits act as a bias timestamp; the object's epoch is
          // stale, so try to rebias towards the current thread
          else if ((anticipated_bias_locking_value & epoch_mask_in_place) !=0) {
              // try rebias
              markOop new_header = (markOop) ( (intptr_t) lockee->klass()->prototype_header() | thread_ident);
              if (hash != markOopDesc::no_hash) {
                  new_header = new_header->copy_set_hash(hash);
              }
              if (Atomic::cmpxchg_ptr((void*)new_header, lockee->mark_addr(), mark) == mark) {
                  if (PrintBiasedLockingStatistics)
                      (* BiasedLocking::rebiased_lock_entry_count_addr())++;
              }
              else {
                  CALL_VM(InterpreterRuntime::monitorenter(THREAD, entry), handle_exception);
              }
              success = true;
          }
          // 4. Anonymously biased: first acquisition, no thread id in the
          // mark word yet -- try to bias the object towards this thread
          else {
              // try to bias towards thread in case object is anonymously biased
              markOop header = (markOop) ((uintptr_t) mark & ((uintptr_t)markOopDesc::biased_lock_mask_in_place |
                                                              (uintptr_t)markOopDesc::age_mask_in_place |
                                                              epoch_mask_in_place));
              if (hash != markOopDesc::no_hash) {
                  header = header->copy_set_hash(hash);
              }
              markOop new_header = (markOop) ((uintptr_t) header | thread_ident);
              // debugging hint
              DEBUG_ONLY(entry->lock()->set_displaced_header((markOop) (uintptr_t) 0xdeaddead);)
                  if (Atomic::cmpxchg_ptr((void*)new_header, lockee->mark_addr(), header) == header) {
                      if (PrintBiasedLockingStatistics)
                          (* BiasedLocking::anonymously_biased_lock_entry_count_addr())++;
                  }
              else {
                  CALL_VM(InterpreterRuntime::monitorenter(THREAD, entry), handle_exception);
              }
              success = true;
          }
      }
      
      
      // 'success' records whether the biased fast path acquired the lock.
      // If the bias belongs to some other thread, the object cannot be biased
      // here, so fall into traditional lightweight locking.
      if (!success) {
          // First build an unlocked version of the mark word
          // (low bits 0......001: unbiased, unlocked)
          markOop displaced = lockee->mark()->set_unlocked();
          entry->lock()->set_displaced_header(displaced);
          bool call_vm = UseHeavyMonitors;
          
          
          // UseHeavyMonitors is false by default, so call_vm is normally
          // false and the CAS below decides everything.
          // The VM uses a CAS to swing the lock object's mark word from the
          // expected unlocked value to a pointer to the Lock Record; if the
          // CAS succeeds, this thread owns the lightweight lock.
          //   entry:               pointer to the Lock Record
          //   lockee->mark_addr(): address of the lock object's mark word
          //   displaced:           the expected, unlocked mark word
          
          // Because this is a CAS, it succeeds only while the object header
          // still equals the unlocked value built above -- i.e. nobody holds
          // the lightweight lock (any previous owner must have released it).
          // On success cmpxchg returns displaced, the != test is false, and
          // the if-body is skipped. On failure another thread has already
          // lightweight-locked the object: cmpxchg returns something other
          // than displaced and we enter the if-body to take the slow path.
          
          // On success the Lock Record keeps the displaced (...001) header
          // while the mark word becomes the Lock Record pointer with tag
          // bits 00; both effects come from the single CAS.
          
          // Note: a Lock Record is private to each thread's stack -- it is
          // not a shared object!
          
          // The lightweight-lock scenario is threads locking alternately with
          // no actual contention; under real contention we fall into
          // InterpreterRuntime::monitorenter.
          if (call_vm || Atomic::cmpxchg_ptr(entry, lockee->mark_addr(), displaced) != displaced) {
              // Is it simple recursive case?
              // Lightweight-lock reentry: the object is already locked (which
              // is also why the CAS above failed) and the mark word points
              // into our own stack. A fresh Lock Record is pushed with a NULL
              // displaced header; the count of such records tracks the
              // reentry depth.
              if (!call_vm && THREAD->is_lock_owned((address) displaced->clear_lock_bits())) {
                  entry->lock()->set_displaced_header(NULL);
              }
              // Otherwise take the further locking path -- may inflate into a
              // heavyweight lock
              else {
                  CALL_VM(InterpreterRuntime::monitorenter(THREAD, entry), handle_exception);
              }
          }
      }
      // Advance the PC: execute the next bytecode
      UPDATE_PC_AND_TOS_AND_CONTINUE(1, -1);
      } else {
          istate->set_msg(more_monitors);
          UPDATE_PC_AND_RETURN(0); // Re-execute
      }
      

Why unlocking a lightweight lock costs more: the exit path must restore the object-header information

  • CASE(_monitorexit): {
        oop lockee = STACK_OBJECT(-1);
        CHECK_NULL(lockee);
        // derefing's lockee ought to provoke implicit null check
        // find our monitor slot
        BasicObjectLock* limit = istate->monitor_base();
        BasicObjectLock* most_recent = (BasicObjectLock*) istate->stack_base();
        while (most_recent != limit ) {
            if ((most_recent)->obj() == lockee) {
                BasicLock* lock = most_recent->lock();
                markOop header = lock->displaced_header();
                // Clear the Lock Record's obj reference; for a biased lock
                // nothing more is needed
                most_recent->set_obj(NULL);
                // If the lock is not biased, the displaced mark word saved in
                // the Lock Record must be restored into the object header
                if (!lockee->mark()->has_bias_pattern()) {
                    bool call_vm = UseHeavyMonitors;
                    // If it isn't recursive we either must swap old header or call the runtime
                    if (header != NULL || call_vm) {
                        if (call_vm || Atomic::cmpxchg_ptr(header, lockee->mark_addr(), lock) != lock) {
                            // restore object for the slow case
                            most_recent->set_obj(lockee);
                            CALL_VM(InterpreterRuntime::monitorexit(THREAD, most_recent), handle_exception);
                        }
                    }
                }
                UPDATE_PC_AND_TOS_AND_CONTINUE(1, -1);
            }
            most_recent++;
        }
        // Need to throw illegal monitor state exception
        CALL_VM(InterpreterRuntime::throw_illegal_monitor_state_exception(THREAD), handle_exception);
        ShouldNotReachHere();
    }
    

Lightweight-lock acquisition flow

  • monitorenter in InterpreterRuntime.cpp

  • //%note monitor_1
    IRT_ENTRY_NO_ASYNC(void, InterpreterRuntime::monitorenter(JavaThread* thread, BasicObjectLock* elem))
    #ifdef ASSERT
      thread->last_frame().interpreter_frame_verify_monitor(elem);
    #endif
      if (PrintBiasedLockingStatistics) {
        Atomic::inc(BiasedLocking::slow_path_entry_count_addr());
      }
      Handle h_obj(thread, elem->obj());
      assert(Universe::heap()->is_in_reserved_or_null(h_obj()),
             "must be NULL or an object");
      // Inflation does not happen right away.
      // Check whether biased-locking mode is enabled: even here the bias may
      // first need to be revoked before falling back further.
      if (UseBiasedLocking) {
        // Retry fast entry if bias is revoked to avoid unnecessary inflation
        ObjectSynchronizer::fast_enter(h_obj, elem->lock(), true, CHECK);
      } 
      // Biased locking not enabled
      else {
        ObjectSynchronizer::slow_enter(h_obj, elem->lock(), CHECK);
      }
      assert(Universe::heap()->is_in_reserved_or_null(elem->obj()),
             "must be NULL or an object");
    #ifdef ASSERT
      thread->last_frame().interpreter_frame_verify_monitor(elem);
    #endif
    IRT_END
    

Bias revocation

  • To experiment, remove the JVM's biased-locking startup delay; by default no biased locks are handed out during the first 4 seconds.
  • Why does the bias delay exist? Revoking a bias is expensive, and during startup the JVM itself takes many locks that are certain to be contended, so biasing them would only cause pointless revocations. Biasing is therefore disabled for the first 4 seconds, by which time the JVM is assumed to have finished starting up.

Bias delay (4 seconds)

  • The JVM itself uses synchronized heavily during startup, where biasing cannot pay off and revocation is costly, so biased locking is effectively off by default for the first 4 seconds; once the JVM has surely finished starting, whether to use biased locks is up to the user.

* Disable the delay so biased locking starts immediately (many projects turn the bias delay off; demonstrated below):
* -XX:BiasedLockingStartupDelay=0
* Disable biased locking:
* -XX:-UseBiasedLocking
* Enable biased locking:
* -XX:+UseBiasedLocking
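  • A quick way to observe these flags (an illustrative sketch; assumes the org.openjdk.jol:jol-core dependency and JDK 8, run with -XX:BiasedLockingStartupDelay=0) is to print the object header with JOL:

    import org.openjdk.jol.info.ClassLayout;

    public class MarkWordDemo {
        public static void main(String[] args) {
            Object lock = new Object();
            // With the delay disabled, a fresh object is anonymously biased:
            // bias pattern 101 set, but no thread id yet.
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
            synchronized (lock) {
                // Now biased towards the main thread: thread id filled in.
                System.out.println(ClassLayout.parseInstance(lock).toPrintable());
            }
        }
    }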

The slow_enter method

  • void ObjectSynchronizer::slow_enter(Handle obj, BasicLock* lock, TRAPS) {
      markOop mark = obj->mark();
      assert(!mark->has_bias_pattern(), "should not see bias pattern here");
    
      // Is the mark word neutral (unlocked)?
      if (mark->is_neutral()) {
        // Anticipate successful CAS -- the ST of the displaced mark must
        // be visible <= the ST performed by the CAS.
        // Save the current mark word into the Lock Record, then retry the
        // lightweight CAS
        lock->set_displaced_header(mark);
        if (mark == (markOop) Atomic::cmpxchg_ptr(lock, obj()->mark_addr(), mark)) {
          TEVENT (slow_enter: release stacklock) ;
          return ;
        }
        // Fall through to inflate() ...
      } else
      if (mark->has_locker() && THREAD->is_lock_owned((address)mark->locker())) {
        assert(lock != mark->locker(), "must not re-lock the same lock");
        assert(lock != (BasicLock*)obj->mark(), "don't relock with same BasicLock");
        lock->set_displaced_header(NULL);
        return;
      }
    
    #if 0
      // The following optimization isn't particularly useful.
      if (mark->has_monitor() && mark->monitor()->is_entered(THREAD)) {
        lock->set_displaced_header (NULL) ;
        return ;
      }
    #endif
    
      // The object header will never be displaced to this lock,
      // so it does not matter what the value is, except that it
      // must be non-zero to avoid looking like a re-entrant lock,
      // and must not look locked either.
      lock->set_displaced_header(markOopDesc::unused_mark());
      // Inflate into a heavyweight monitor,
      // then call its enter() method
      ObjectSynchronizer::inflate(THREAD, obj())->enter(THREAD);
    }
    

objectMonitor.cpp: the enter() method

  • void ATTR ObjectMonitor::enter(TRAPS) {
      // The following code is ordered to check the most common cases first
      // and to reduce RTS->RTO cache line upgrades on SPARC and IA32 processors.
      Thread * const Self = THREAD ;
      void * cur ;
      
      // If no thread currently owns the monitor, grab it directly with CAS -- unfair
      cur = Atomic::cmpxchg_ptr (Self, &_owner, NULL) ;
      // cur == NULL: we now own the monitor
      if (cur == NULL) {
         // Either ASSERT _recursions == 0 or explicitly set _recursions = 0.
         assert (_recursions == 0   , "invariant") ;
         // _owner is the thread holding the monitor, the analogue of
         // exclusiveOwnerThread in ReentrantLock; the mark word now points at
         // this ObjectMonitor, which maintains its own wait queue.
         // Taking the lock here without consulting the queue is exactly the
         // unfair fast path of ReentrantLock's lock().
         assert (_owner      == Self, "invariant") ;
         // CONSIDER: set or assert OwnerIsThread == 1
         return ;
      }
    
      // Reentrant acquisition by the owner
      if (cur == Self) {
         // TODO-FIXME: check for integer overflow!  BUGID 6557169.
         _recursions ++ ;
         return ;
      }
    
      if (Self->is_lock_owned ((address)cur)) {
        assert (_recursions == 0, "internal state error");
        _recursions = 1 ;
        // Commute owner from a thread-specific on-stack BasicLockObject address to
        // a full-fledged "Thread *".
        _owner = Self ;
        OwnerIsThread = 1 ;
        return ;
      }
    
      // We've encountered genuine contention.
      assert (Self->_Stalled == 0, "invariant") ;
      Self->_Stalled = intptr_t(this) ;
    
      // Try one round of spinning *before* enqueueing Self
      // and before going through the awkward and expensive state
      // transitions.  The following spin is strictly optional ...
      // Note that if we acquire the monitor from an initial spin
      // we forgo posting JVMTI events and firing DTRACE probes.
        
      // TrySpin attempts adaptive spinning,
      // so even the heavyweight path of synchronized spins before blocking
      if (Knob_SpinEarly && TrySpin (Self) > 0) {
         assert (_owner == Self      , "invariant") ;
         assert (_recursions == 0    , "invariant") ;
         assert (((oop)(object()))->mark() == markOopDesc::encode(this), "invariant") ;
         Self->_Stalled = 0 ;
         return ;
      }
    
      assert (_owner != Self          , "invariant") ;
      assert (_succ  != Self          , "invariant") ;
      assert (Self->is_Java_thread()  , "invariant") ;
      JavaThread * jt = (JavaThread *) Self ;
      assert (!SafepointSynchronize::is_at_safepoint(), "invariant") ;
      assert (jt->thread_state() != _thread_blocked   , "invariant") ;
      assert (this->object() != NULL  , "invariant") ;
      assert (_count >= 0, "invariant") ;
    
      // Prevent deflation at STW-time.  See deflate_idle_monitors() and is_busy().
      // Ensure the object-monitor relationship remains stable while there's contention.
      Atomic::inc_ptr(&_count);
    
      JFR_ONLY(JfrConditionalFlushWithStacktrace<EventJavaMonitorEnter> flush(jt);)
      EventJavaMonitorEnter event;
      if (event.should_commit()) {
        event.set_monitorClass(((oop)this->object())->klass());
        event.set_address((uintptr_t)(this->object_addr()));
      }
    
      { // Change java thread status to indicate blocked on monitor enter.
        JavaThreadBlockedOnMonitorEnterState jtbmes(jt, this);
    
        Self->set_current_pending_monitor(this);
    
        DTRACE_MONITOR_PROBE(contended__enter, this, object(), jt);
        if (JvmtiExport::should_post_monitor_contended_enter()) {
          JvmtiExport::post_monitor_contended_enter(jt, this);
    
          // The current thread does not yet own the monitor and does not
          // yet appear on any queues that would get it made the successor.
          // This means that the JVMTI_EVENT_MONITOR_CONTENDED_ENTER event
          // handler cannot accidentally consume an unpark() meant for the
          // ParkEvent associated with this ObjectMonitor.
        }
    
        OSThreadContendState osts(Self->osthread());
        ThreadBlockInVM tbivm(jt);
    
        // TODO-FIXME: change the following for(;;) loop to straight-line code.
        // Loop (spin) until the monitor is actually acquired
        for (;;) {
          jt->set_suspend_equivalent();
          // cleared by handle_special_suspend_equivalent_condition()
          // or java_suspend_self()
         
          // EnterI is where the OS mutex enters the picture
          EnterI (THREAD) ;
    
          if (!ExitSuspendEquivalent(jt)) break ;
    
          //
          // We have acquired the contended monitor, but while we were
          // waiting another thread suspended us. We don't want to enter
          // the monitor while suspended because that would surprise the
          // thread that suspended us.
          //
              _recursions = 0 ;
          _succ = NULL ;
          exit (false, Self) ;
    
          jt->java_suspend_self();
        }
        Self->set_current_pending_monitor(NULL);
    
        // We cleared the pending monitor info since we've just gotten past
        // the enter-check-for-suspend dance and we now own the monitor free
        // and clear, i.e., it is no longer pending. The ThreadBlockInVM
        // destructor can go to a safepoint at the end of this block. If we
        // do a thread dump during that safepoint, then this thread will show
        // as having "-locked" the monitor, but the OS and java.lang.Thread
        // states will still report that the thread is blocked trying to
        // acquire it.
      }
    
      Atomic::dec_ptr(&_count);
      assert (_count >= 0, "invariant") ;
      Self->_Stalled = 0 ;
    
      // Must either set _recursions = 0 or ASSERT _recursions == 0.
      assert (_recursions == 0     , "invariant") ;
      assert (_owner == Self       , "invariant") ;
      assert (_succ  != Self       , "invariant") ;
      assert (((oop)(object()))->mark() == markOopDesc::encode(this), "invariant") ;
    
      // The thread -- now the owner -- is back in vm mode.
      // Report the glorious news via TI,DTrace and jvmstat.
      // The probe effect is non-trivial.  All the reportage occurs
      // while we hold the monitor, increasing the length of the critical
      // section.  Amdahl's parallel speedup law comes vividly into play.
      //
      // Another option might be to aggregate the events (thread local or
      // per-monitor aggregation) and defer reporting until a more opportune
      // time -- such as next time some thread encounters contention but has
      // yet to acquire the lock.  While spinning that thread could
      // spinning we could increment JVMStat counters, etc.
    
      DTRACE_MONITOR_PROBE(contended__entered, this, object(), jt);
      if (JvmtiExport::should_post_monitor_contended_entered()) {
        JvmtiExport::post_monitor_contended_entered(jt, this);
    
        // The current thread already owns the monitor and is not going to
        // call park() for the remainder of the monitor enter protocol. So
        // it doesn't matter if the JVMTI_EVENT_MONITOR_CONTENDED_ENTERED
        // event handler consumed an unpark() issued by the thread that
        // just exited the monitor.
      }
    
      if (event.should_commit()) {
        event.set_previousOwner((uintptr_t)_previous_owner_tid);
        event.commit();
      }
    
      if (ObjectMonitor::_sync_ContendedLockAttempts != NULL) {
         ObjectMonitor::_sync_ContendedLockAttempts->inc() ;
      }
    }
    
    • EnterI (THREAD)
      void ATTR ObjectMonitor::EnterI (TRAPS) {
          Thread * Self = THREAD ;
          assert (Self->is_Java_thread(), "invariant") ;
          assert (((JavaThread *) Self)->thread_state() == _thread_blocked   , "invariant") ;
      
          // Try the lock - TATAS
          if (TryLock (Self) > 0) {
              assert (_succ != Self              , "invariant") ;
              assert (_owner == Self             , "invariant") ;
              assert (_Responsible != Self       , "invariant") ;
              return ;
          }
      
          DeferredInitialize () ;
      
          // We try one round of spinning *before* enqueueing Self.
          //
          // If the _owner is ready but OFFPROC we could use a YieldTo()
          // operation to donate the remainder of this thread's quantum
          // to the owner.  This has subtle but beneficial affinity
          // effects.
      
          if (TrySpin (Self) > 0) {
              assert (_owner == Self        , "invariant") ;
              assert (_succ != Self         , "invariant") ;
              assert (_Responsible != Self  , "invariant") ;
              return ;
          }
      
          // The Spin failed -- Enqueue and park the thread ...
          assert (_succ  != Self            , "invariant") ;
          assert (_owner != Self            , "invariant") ;
          assert (_Responsible != Self      , "invariant") ;
      
          // Enqueue "Self" on ObjectMonitor's _cxq.
          //
          // Node acts as a proxy for Self.
          // As an aside, if were to ever rewrite the synchronization code mostly
          // in Java, WaitNodes, ObjectMonitors, and Events would become 1st-class
          // Java objects.  This would avoid awkward lifecycle and liveness issues,
          // as well as eliminate a subset of ABA issues.
          // TODO: eliminate ObjectWaiter and enqueue either Threads or Events.
          //
          // Wrap the current thread in an ObjectWaiter node
          ObjectWaiter node(Self) ;
          Self->_ParkEvent->reset() ;
          // Prepare the node for enqueueing
          node._prev   = (ObjectWaiter *) 0xBAD ;
          node.TState  = ObjectWaiter::TS_CXQ ;
      
          // Push "Self" onto the front of the _cxq.
          // Once on cxq/EntryList, Self stays on-queue until it acquires the lock.
          // Note that spinning tends to reduce the rate at which threads
          // enqueue and dequeue on EntryList|cxq.
          ObjectWaiter * nxt ;
          // Loop: CAS the node onto the head of _cxq
          for (;;) {
              node._next = nxt = _cxq ;
              if (Atomic::cmpxchg_ptr (&node, &_cxq, nxt) == nxt) break ;
      
              // Interference - the CAS failed because _cxq changed.  Just retry.
              // As an optional optimization we retry the lock.
              // Opportunistically retry the lock while the CAS keeps failing
              if (TryLock (Self) > 0) {
                  assert (_succ != Self         , "invariant") ;
                  assert (_owner == Self        , "invariant") ;
                  assert (_Responsible != Self  , "invariant") ;
                  return ;
              }
          }
      
          // Check for cxq|EntryList edge transition to non-null.  This indicates
          // the onset of contention.  While contention persists exiting threads
          // will use a ST:MEMBAR:LD 1-1 exit protocol.  When contention abates exit
          // operations revert to the faster 1-0 mode.  This enter operation may interleave
          // (race) a concurrent 1-0 exit operation, resulting in stranding, so we
          // arrange for one of the contending thread to use a timed park() operations
          // to detect and recover from the race.  (Stranding is form of progress failure
          // where the monitor is unlocked but all the contending threads remain parked).
          // That is, at least one of the contended threads will periodically poll _owner.
          // One of the contending threads will become the designated "Responsible" thread.
          // The Responsible thread uses a timed park instead of a normal indefinite park
          // operation -- it periodically wakes and checks for and recovers from potential
          // strandings admitted by 1-0 exit operations.   We need at most one Responsible
          // thread per-monitor at any given moment.  Only threads on cxq|EntryList may
          // be responsible for a monitor.
          //
          // Currently, one of the contended threads takes on the added role of "Responsible".
          // A viable alternative would be to use a dedicated "stranding checker" thread
          // that periodically iterated over all the threads (or active monitors) and unparked
          // successors where there was risk of stranding.  This would help eliminate the
          // timer scalability issues we see on some platforms as we'd only have one thread
          // -- the checker -- parked on a timer.
      
          if ((SyncFlags & 16) == 0 && nxt == NULL && _EntryList == NULL) {
              // Try to assume the role of responsible thread for the monitor.
              // CONSIDER:  ST vs CAS vs { if (Responsible==null) Responsible=Self }
              Atomic::cmpxchg_ptr (Self, &_Responsible, NULL) ;
          }
      
          // The lock have been released while this thread was occupied queueing
          // itself onto _cxq.  To close the race and avoid "stranding" and
          // progress-liveness failure we must resample-retry _owner before parking.
          // Note the Dekker/Lamport duality: ST cxq; MEMBAR; LD Owner.
          // In this case the ST-MEMBAR is accomplished with CAS().
          //
          // TODO: Defer all thread state transitions until park-time.
          // Since state transitions are heavy and inefficient we'd like
          // to defer the state transitions until absolutely necessary,
          // and in doing so avoid some transitions ...
      
          TEVENT (Inflated enter - Contention) ;
          int nWakeups = 0 ;
          int RecheckInterval = 1 ;
      
          for (;;) {
      		
            // Try the lock once more before parking
              if (TryLock (Self) > 0) break ;
              assert (_owner != Self, "invariant") ;
      
              if ((SyncFlags & 2) && _Responsible == NULL) {
                 Atomic::cmpxchg_ptr (Self, &_Responsible, NULL) ;
              }
      
              // park self
              if (_Responsible == Self || (SyncFlags & 1)) {
                  TEVENT (Inflated enter - park TIMED) ;
                  Self->_ParkEvent->park ((jlong) RecheckInterval) ;
                  // Increase the RecheckInterval, but clamp the value.
                  RecheckInterval *= 8 ;
                  if (RecheckInterval > 1000) RecheckInterval = 1000 ;
              } else {
                  TEVENT (Inflated enter - park UNTIMED) ;
                // Could not get the lock: block on the ParkEvent
                  Self->_ParkEvent->park() ;
              }
      
              if (TryLock(Self) > 0) break ;
      
              // The lock is still contested.
              // Keep a tally of the # of futile wakeups.
              // Note that the counter is not protected by a lock or updated by atomics.
              // That is by design - we trade "lossy" counters which are exposed to
              // races during updates for a lower probe effect.
              TEVENT (Inflated enter - Futile wakeup) ;
              if (ObjectMonitor::_sync_FutileWakeups != NULL) {
                 ObjectMonitor::_sync_FutileWakeups->inc() ;
              }
              ++ nWakeups ;
      
              // Assuming this is not a spurious wakeup we'll normally find _succ == Self.
              // We can defer clearing _succ until after the spin completes
              // TrySpin() must tolerate being called with _succ == Self.
              // Try yet another round of adaptive spinning.
              if ((Knob_SpinAfterFutile & 1) && TrySpin (Self) > 0) break ;
      
              // We can find that we were unpark()ed and redesignated _succ while
              // we were spinning.  That's harmless.  If we iterate and call park(),
              // park() will consume the event and return immediately and we'll
              // just spin again.  This pattern can repeat, leaving _succ to simply
              // spin on a CPU.  Enable Knob_ResetEvent to clear pending unparks().
              // Alternately, we can sample fired() here, and if set, forgo spinning
              // in the next iteration.
      
              if ((Knob_ResetEvent & 1) && Self->_ParkEvent->fired()) {
                 Self->_ParkEvent->reset() ;
                 OrderAccess::fence() ;
              }
              if (_succ == Self) _succ = NULL ;
      
              // Invariant: after clearing _succ a thread *must* retry _owner before parking.
              OrderAccess::fence() ;
          }
      
          // Egress :
          // Self has acquired the lock -- Unlink Self from the cxq or EntryList.
          // Normally we'll find Self on the EntryList .
          // From the perspective of the lock owner (this thread), the
          // EntryList is stable and cxq is prepend-only.
          // The head of cxq is volatile but the interior is stable.
          // In addition, Self.TState is stable.
      
          assert (_owner == Self      , "invariant") ;
          assert (object() != NULL    , "invariant") ;
          // I'd like to write:
          //   guarantee (((oop)(object()))->mark() == markOopDesc::encode(this), "invariant") ;
          // but as we're at a safepoint that's not safe.
      
          UnlinkAfterAcquire (Self, &node) ;
          if (_succ == Self) _succ = NULL ;
      
          assert (_succ != Self, "invariant") ;
          if (_Responsible == Self) {
              _Responsible = NULL ;
              OrderAccess::fence(); // Dekker pivot-point
      
              // We may leave threads on cxq|EntryList without a designated
              // "Responsible" thread.  This is benign.  When this thread subsequently
              // exits the monitor it can "see" such preexisting "old" threads --
              // threads that arrived on the cxq|EntryList before the fence, above --
              // by LDing cxq|EntryList.  Newly arrived threads -- that is, threads
              // that arrive on cxq after the ST:MEMBAR, above -- will set Responsible
              // non-null and elect a new "Responsible" timer thread.
              //
              // This thread executes:
              //    ST Responsible=null; MEMBAR    (in enter epilog - here)
              //    LD cxq|EntryList               (in subsequent exit)
              //
              // Entering threads in the slow/contended path execute:
              //    ST cxq=nonnull; MEMBAR; LD Responsible (in enter prolog)
              //    The (ST cxq; MEMBAR) is accomplished with CAS().
              //
              // The MEMBAR, above, prevents the LD of cxq|EntryList in the subsequent
              // exit operation from floating above the ST Responsible=null.
          }
      
          // We've acquired ownership with CAS().
          // CAS is serializing -- it has MEMBAR/FENCE-equivalent semantics.
          // But since the CAS() this thread may have also stored into _succ,
          // EntryList, cxq or Responsible.  These meta-data updates must be
          // visible __before this thread subsequently drops the lock.
          // Consider what could occur if we didn't enforce this constraint --
          // STs to monitor meta-data and user-data could reorder with (become
          // visible after) the ST in exit that drops ownership of the lock.
          // Some other thread could then acquire the lock, but observe inconsistent
          // or old monitor meta-data and heap data.  That violates the JMM.
          // To that end, the 1-0 exit() operation must have at least STST|LDST
          // "release" barrier semantics.  Specifically, there must be at least a
          // STST|LDST barrier in exit() before the ST of null into _owner that drops
          // the lock.   The barrier ensures that changes to monitor meta-data and data
          // protected by the lock will be visible before we release the lock, and
          // therefore before some other thread (CPU) has a chance to acquire the lock.
          // See also: http://gee.cs.oswego.edu/dl/jmm/cookbook.html.
          //
          // Critically, any prior STs to _succ or EntryList must be visible before
          // the ST of null into _owner in the *subsequent* (following) corresponding
          // monitorexit.  Recall too, that in 1-0 mode monitorexit does not necessarily
          // execute a serializing instruction.
      
          if (SyncFlags & 8) {
             OrderAccess::fence() ;
          }
          return ;
      }
      
    • _ParkEvent->park() -- the ParkEvent type, declared in park.hpp:
      ParkEvent() : PlatformEvent() {
          AssociatedWith = NULL ;
          FreeNext       = NULL ;
          ListNext       = NULL ;
          ListPrev       = NULL ;
          OnList         = 0 ;
          TState         = 0 ;
          Notified       = 0 ;
          IsWaiting      = 0 ;
      }
      
    • PlatformEvent -- os_linux.cpp (JDK 8), os_posix.cpp (JDK 12)
      void os::PlatformEvent::park() {       // AKA "down()"
        // Invariant: Only the thread associated with the Event/PlatformEvent
        // may call park().
        // TODO: assert that _Assoc != NULL or _Assoc == Self
        int v ;
        for (;;) {
            v = _Event ;
            if (Atomic::cmpxchg (v-1, &_Event, v) == v) break ;
        }
        guarantee (v >= 0, "invariant") ;
        if (v == 0) {
           // Do this the hard way by blocking ...
       // Here is the mutex: an OS-level pthread mutex
           int status = pthread_mutex_lock(_mutex);
           assert_status(status == 0, status, "mutex_lock");
           guarantee (_nParked == 0, "invariant") ;
           ++ _nParked ;
           while (_Event < 0) {
              status = pthread_cond_wait(_cond, _mutex);
              // for some reason, under 2.7 lwp_cond_wait() may return ETIME ...
              // Treat this the same as if the wait was interrupted
              if (status == ETIME) { status = EINTR; }
              assert_status(status == 0 || status == EINTR, status, "cond_wait");
           }
           -- _nParked ;
      
          _Event = 0 ;
           status = pthread_mutex_unlock(_mutex);
           assert_status(status == 0, status, "mutex_unlock");
          // Paranoia to ensure our locked and lock-free paths interact
          // correctly with each other.
          OrderAccess::fence();
        }
        guarantee (_Event >= 0, "invariant") ;
      }
      

      park() ends up in pthread_mutex_lock, i.e. an OS-level mutex.

LockSupport.park() as used by ReentrantLock

  • Under the hood it calls Parker::park in os_linux.cpp (a minimal Java-level usage sketch follows, then the native code)
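  • A minimal Java-level usage sketch of that call path (illustrative only):

    import java.util.concurrent.locks.LockSupport;

    public class ParkDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(() -> {
                System.out.println("parking...");
                LockSupport.park();    // blocks in Parker::park via Unsafe.park
                System.out.println("unparked");
            });
            t.start();
            Thread.sleep(500);
            LockSupport.unpark(t);     // grants the permit, waking t
        }
    }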

  • void Parker::park(bool isAbsolute, jlong time) {
      // Ideally we'd do something useful while spinning, such
      // as calling unpackTime().
    
      // Optional fast-path check:
      // Return immediately if a permit is available.
      // We depend on Atomic::xchg() having full barrier semantics
      // since we are doing a lock-free update to _counter.
      if (Atomic::xchg(0, &_counter) > 0) return;
    
      Thread* thread = Thread::current();
      assert(thread->is_Java_thread(), "Must be JavaThread");
      JavaThread *jt = (JavaThread *)thread;
    
      // Optional optimization -- avoid state transitions if there's an interrupt pending.
      // Check interrupt before trying to wait
      if (Thread::is_interrupted(thread, false)) {
        return;
      }
    
      // Next, demultiplex/decode time arguments
      timespec absTime;
      if (time < 0 || (isAbsolute && time == 0) ) { // don't wait at all
        return;
      }
      if (time > 0) {
        unpackTime(&absTime, isAbsolute, time);
      }
    
    
      // Enter safepoint region
      // Beware of deadlocks such as 6317397.
      // The per-thread Parker:: mutex is a classic leaf-lock.
      // In particular a thread must never block on the Threads_lock while
      // holding the Parker:: mutex.  If safepoints are pending both the
      // the ThreadBlockInVM() CTOR and DTOR may grab Threads_lock.
      ThreadBlockInVM tbivm(jt);
    
      // Don't wait if cannot get lock since interference arises from
      // unblocking.  Also. check interrupt before trying wait
      if (Thread::is_interrupted(thread, false) || pthread_mutex_trylock(_mutex) != 0) {
        return;
      }
    
      int status ;
      if (_counter > 0)  { // no wait needed
        _counter = 0;
        status = pthread_mutex_unlock(_mutex);
        assert (status == 0, "invariant") ;
        // Paranoia to ensure our locked and lock-free paths interact
        // correctly with each other and Java-level accesses.
        OrderAccess::fence();
        return;
      }
    
    #ifdef ASSERT
      // Don't catch signals while blocked; let the running threads have the signals.
      // (This allows a debugger to break into the running thread.)
      sigset_t oldsigs;
      sigset_t* allowdebug_blocked = os::Linux::allowdebug_blocked_signals();
      pthread_sigmask(SIG_BLOCK, allowdebug_blocked, &oldsigs);
    #endif
    
      OSThreadWaitState osts(thread->osthread(), false /* not Object.wait() */);
      jt->set_suspend_equivalent();
      // cleared by handle_special_suspend_equivalent_condition() or java_suspend_self()
    
      assert(_cur_index == -1, "invariant");
      if (time == 0) {
        _cur_index = REL_INDEX; // arbitrary choice when not timed
        status = pthread_cond_wait (&_cond[_cur_index], _mutex) ;
      } else {
        _cur_index = isAbsolute ? ABS_INDEX : REL_INDEX;
        status = os::Linux::safe_cond_timedwait (&_cond[_cur_index], _mutex, &absTime) ;
        if (status != 0 && WorkAroundNPTLTimedWaitHang) {
          pthread_cond_destroy (&_cond[_cur_index]) ;
          pthread_cond_init    (&_cond[_cur_index], isAbsolute ? NULL : os::Linux::condAttr());
        }
      }
      _cur_index = -1;
      assert_status(status == 0 || status == EINTR ||
                    status == ETIME || status == ETIMEDOUT,
                    status, "cond_timedwait");
    
    #ifdef ASSERT
      pthread_sigmask(SIG_SETMASK, &oldsigs, NULL);
    #endif
    
      _counter = 0 ;
      status = pthread_mutex_unlock(_mutex) ;
      assert_status(status == 0, status, "invariant") ;
      // Paranoia to ensure our locked and lock-free paths interact
      // correctly with each other and Java-level accesses.
      OrderAccess::fence();
    
      // If externally suspended while waiting, re-suspend
      if (jt->handle_special_suspend_equivalent_condition()) {
        jt->java_suspend_self();
      }
    }
    

Lightweight locking code: all the way down to the heavyweight lock and the mutex

InterpreterRuntime.cpp (the lightweight-lock slow path)

  • Inflation to a heavyweight monitor does not happen immediately: the code first checks whether biased locking is still enabled, so that any remaining bias can be revoked first

  • slow_enter

    • First check whether the object is unlocked (is_neutral)

    • inflate: inflate into a heavyweight monitor, then call its enter() method (objectMonitor.cpp)

      • Check whether the owning thread (_owner) is null

      • If it is null, grab the lock with CAS right away and point _owner at the current thread (the analogue of exclusiveOwnerThread in ReentrantLock); the monitor maintains a queue, and a newcomer may take the lock before any queued thread has been woken, which is the unfair behaviour

      • TrySpin (adaptive spinning)

      • EnterI(THREAD), which ties the monitor to the OS mutex

        • Self->_ParkEvent->park() (ParkEvent extends PlatformEvent; os_linux.cpp in JDK 8, os_posix.cpp in JDK 12)
          • park() calls pthread_mutex_lock
          • LockSupport.park() in the JDK goes through the per-thread Parker in the same way; both paths land on a pthread mutex, a heavyweight lock that can incur a switch into kernel mode
        • ParkEvent is platform-specific