Java Concurrency in Practice Reading Notes (1) - Thread Safety

 

 

 

Chapter 2. Thread Safety

 

Whether an object needs to be thread-safe depends on whether it will be accessed from multiple threads. This is a property of how the object is used in a program, not what it does.


Whenever more than one thread accesses a given state variable, and one of them might write to it, they all must coordinate their access to it using synchronization. The primary mechanism for synchronization in Java is the synchronized keyword, which provides exclusive locking, but the term "synchronization" also includes the use of volatile variables, explicit locks, and atomic variables.



If multiple threads access the same mutable state variable without appropriate synchronization, your program is broken. There are three ways to fix it:

  • Don't share the state variable across threads;

  • Make the state variable immutable; or

  • Use synchronization whenever accessing the state variable.

 


When designing thread-safe classes, good object-oriented techniques (encapsulation, immutability, and clear specification of invariants) are your best friends.


It is always a good practice first to make your code right, and then make it fast.


2.1. What is Thread Safety?

A class is thread-safe if it behaves correctly when accessed from multiple threads, regardless of the scheduling or interleaving of the execution of those threads by the runtime environment, and with no additional synchronization or other coordination on the part of the calling code.

 

A class is stateless if it has no fields and references no fields from other classes.

 

Stateless objects are always thread-safe, because the actions of a thread accessing a stateless object cannot affect the correctness of operations in other threads.

 

Listing 2.1. A Stateless Servlet.

 

@ThreadSafe
public class StatelessFactorizer implements Servlet {
    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = factor(i);
        encodeIntoResponse(resp, factors);
    }
}

 


 

2.2. Atomicity

 

While the increment operation ++count may look like a single action because of its compact syntax, it is not atomic, which means that it does not execute as a single, indivisible operation. Instead, it is a shorthand for a sequence of three discrete operations: fetch the current value, add one to it, and write the new value back.
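
To see the consequence concretely, here is a minimal sketch (not one of the book's listings; the class name and iteration counts are made up) in which two threads perform unsynchronized ++count increments and updates are lost:

public class UnsafeCounterDemo {
    // Shared, mutable, and unguarded: this is the hazard.
    private static long count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 100_000; i++)
                ++count;                 // not atomic: read, add one, write back
        };
        Thread a = new Thread(increment);
        Thread b = new Thread(increment);
        a.start(); b.start();
        a.join();  b.join();
        // Expected 200000, but on most runs a smaller number is printed,
        // because interleaved read-modify-write sequences lose updates.
        System.out.println("count = " + count);
    }
}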

 

 

A race condition occurs when the correctness of a computation depends on the relative timing or interleaving of multiple threads by the runtime; in other words, when getting the right answer relies on lucky timing. [4] The most common type of race condition is check-then-act, where a potentially stale observation is used to make a decision on what to do next.


The term race condition is often confused with the related term data race, which arises when synchronization is not used to coordinate all access to a shared nonfinal field. You risk a data race whenever a thread writes a variable that might next be read by another thread or reads a variable that might have last been written by another thread if both threads do not use synchronization; code with data races has no useful defined semantics under the Java Memory Model. Not all race conditions are data races, and not all data races are race conditions, but they both can cause concurrent programs to fail in unpredictable ways.
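
As an illustration of a data race, here is a made-up sketch (in the spirit of the NoVisibility example from Chapter 3, not a listing from this chapter): one thread writes two unguarded, non-volatile fields while another thread reads them with no synchronization, so under the Java Memory Model the reader may loop forever or print a stale value.

public class DataRaceDemo {
    // Neither field is volatile or guarded by a lock, so reads and writes race.
    private static boolean ready;
    private static int answer;

    public static void main(String[] args) {
        new Thread(() -> {
            while (!ready)               // may never observe the writer's update
                Thread.yield();
            System.out.println(answer);  // may legally print 0 instead of 42
        }).start();

        answer = 42;
        ready = true;
    }
}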


 

Listing 2.3. Race Condition in Lazy Initialization. Don't Do this.

 

@NotThreadSafe
public class LazyInitRace {
    private ExpensiveObject instance = null;

    public ExpensiveObject getInstance() {
        if (instance == null)
            instance = new ExpensiveObject();
        return instance;
    }
}

 

 

 

LazyInitRace has race conditions that can undermine its correctness. Say that threads A and B execute getInstance at the same time. A sees that instance is null, and instantiates a new ExpensiveObject. B also checks if instance is null. Whether instance is null at this point depends unpredictably on timing, including the vagaries of scheduling and how long A takes to instantiate the ExpensiveObject and set the instance field. If instance is null when B examines it, the two callers to getInstance may receive two different results, even though getInstance is always supposed to return the same instance.

 

 

Operations A and B are atomic with respect to each other if, from the perspective of a thread executing A, when another thread executes B, either all of B has executed or none of it has. An atomic operation is one that is atomic with respect to all operations, including itself, that operate on the same state.

 

 

2.2.3. Compound Actions


 We refer collectively to check-then-act and read-modify-write sequences as compound actions: sequences of operations that must be executed atomically in order to remain thread-safe.

 

 

Listing 2.4. Servlet that Counts Requests Using AtomicLong.

 

 

@ThreadSafe
public class CountingFactorizer implements Servlet {
    private final AtomicLong count = new AtomicLong(0);

    public long getCount() { return count.get(); }

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = factor(i);
        count.incrementAndGet();
        encodeIntoResponse(resp, factors);
    }
}

 

 

 

 

The java.util.concurrent.atomic package contains atomic variable classes for effecting atomic state transitions on numbers and object references. By replacing the long counter with an AtomicLong, we ensure that all actions that access the counter state are atomic. 

 

Where practical, use existing thread-safe objects, like AtomicLong, to manage your class's state. It is simpler to reason about the possible states and state transitions for existing thread-safe objects than it is for arbitrary state variables, and this makes it easier to maintain and verify thread safety.

 

 

 

2.3. Locking

 

Listing 2.5. Servlet that Attempts to Cache its Last Result without Adequate Atomicity. Don't Do this.

 

 

 


@NotThreadSafe
public class UnsafeCachingFactorizer implements Servlet {
     private final AtomicReference<BigInteger> lastNumber
         = new AtomicReference<BigInteger>();
     private final AtomicReference<BigInteger[]>  lastFactors
         = new AtomicReference<BigInteger[]>();

     public void service(ServletRequest req, ServletResponse resp) {
         BigInteger i = extractFromRequest(req);
         if (i.equals(lastNumber.get()))
             encodeIntoResponse(resp,  lastFactors.get() );
         else {
             BigInteger[] factors = factor(i);
             lastNumber.set(i);
             lastFactors.set(factors);
             encodeIntoResponse(resp, factors);
         }
     }
}

 

 

The definition of thread safety requires that invariants be preserved regardless of timing or interleaving of operations in multiple threads.


To preserve state consistency, update related state variables in a single atomic operation.


 

2.3.1. Intrinsic Locks

 

A synchronized block has two parts: a reference to an object that will serve as the lock, and a block of code to be guarded by that lock. A synchronized method is a shorthand for a synchronized block that spans an entire method body, and whose lock is the object on which the method is being invoked.
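
A minimal sketch of the two forms (a hypothetical Counter class, not a listing from the book): the synchronized method implicitly locks on this, while the equivalent synchronized block names the lock object explicitly.

public class Counter {
    private long value;

    // Synchronized method: the lock is the Counter instance ("this").
    public synchronized void incrementAsMethod() {
        ++value;
    }

    // Equivalent synchronized block: the lock object is spelled out.
    public void incrementAsBlock() {
        synchronized (this) {
            ++value;
        }
    }
}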

 

Intrinsic locks in Java act as mutexes (or mutual exclusion locks), which means that at most one thread may own the lock. When thread A attempts to acquire a lock held by thread B, A must wait, or block, until B releases it. If B never releases the lock, A waits forever.

 

In the context of concurrency, atomicity means the same thing as it does in transactional applications: that a group of statements appear to execute as a single, indivisible unit.

 

 

Listing 2.6. Servlet that Caches Last Result, But with Unacceptably Poor Concurrency. Don't Do this.

 

 

@ThreadSafe
public class SynchronizedFactorizer implements Servlet {
    @GuardedBy("this") private BigInteger lastNumber;
    @GuardedBy("this") private BigInteger[] lastFactors;

    public synchronized void service(ServletRequest req,
                                     ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        if (i.equals(lastNumber))
            encodeIntoResponse(resp, lastFactors);
        else {
            BigInteger[] factors = factor(i);
            lastNumber = i;
            lastFactors = factors;
            encodeIntoResponse(resp, factors);
        }
    }
}

 

 

 

Figure 2.1. Poor Concurrency of SynchronizedFactorizer.

 

 

The machinery of synchronization makes it easy to restore thread safety to the factoring servlet. Listing 2.6 makes the service method synchronized, so only one thread may enter service at a time. SynchronizedFactorizer is now thread-safe; however, this approach is fairly extreme, since it inhibits multiple clients from using the factoring servlet simultaneously at all, resulting in unacceptably poor responsiveness. This problem, which is a performance problem, not a thread safety problem, is addressed in Section 2.5.

 

 

2.3.2. Reentrancy

Reentrancy is implemented by associating with each lock an acquisition count and an owning thread. When the count is zero, the lock is considered unheld. When a thread acquires a previously unheld lock, the JVM records the owner and sets the acquisition count to one. If that same thread acquires the lock again, the count is incremented, and when the owning thread exits the synchronized block, the count is decremented. When the count reaches zero, the lock is released.
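
The effect of this per-thread acquisition count can be seen in a small sketch (a hypothetical example, not a listing from the book): a synchronized method calls another synchronized method on the same object, re-acquiring a lock the thread already holds without blocking.

public class ReentrancyDemo {
    public synchronized void outer() {
        // The lock on 'this' is acquired here (count goes from 0 to 1).
        inner();                         // same thread, same lock: count goes to 2
    }

    public synchronized void inner() {
        System.out.println("re-acquired the lock we already hold");
    }   // count drops back to 1 on exit; outer() then releases the lock fully

    public static void main(String[] args) {
        new ReentrancyDemo().outer();    // completes without deadlock
    }
}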

 

 

 

 

Listing 2.7. Code that would Deadlock if Intrinsic Locks were Not Reentrant.

 

 

public class Widget {
    public synchronized void doSomething() {
        ...
    }
}

public class LoggingWidget extends Widget {
    public synchronized void doSomething() {
        System.out.println(toString() + ": calling doSomething");
        super.doSomething();
    }
}

 

 

 

Without reentrant locks, the very natural-looking code in Listing 2.7, in which a subclass overrides a synchronized method and then calls the superclass method, would deadlock. Because the doSomething methods in Widget and LoggingWidget are both synchronized, each tries to acquire the lock on the Widget before proceeding. But if intrinsic locks were not reentrant, the call to super.doSomething would never be able to acquire the lock because it would be considered already held, and the thread would permanently stall waiting for a lock it can never acquire. Reentrancy saves us from deadlock in situations like this.


 

 

2.4. Guarding State with Locks

 

 Serializing access to an object has nothing to do with object serialization (turning an object into a byte stream); serializing access means that threads take turns accessing the object exclusively, rather than doing so concurrently.


For each mutable state variable that may be accessed by more than one thread, all accesses to that variable must be performed with the same lock held. In this case, we say that the variable is guarded by that lock.

 

Every shared, mutable variable should be guarded by exactly one lock. Make it clear to maintainers which lock that is.

 

A common locking convention is to encapsulate all mutable state within an object and to protect it from concurrent access by synchronizing any code path that accesses mutable state using the object's intrinsic lock. This pattern is used by many thread-safe classes, such as Vector and other synchronized collection classes.

 

 Code auditing tools like FindBugs can identify when a variable is frequently but not always accessed with a lock held, which may indicate a bug.

 

For every invariant that involves more than one variable, all the variables involved in that invariant must be guarded by the same lock.
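
For example, a two-variable invariant such as lower <= upper must be checked and updated with both variables guarded by the same lock; a minimal sketch (a hypothetical class, not a listing from the book) might look like this:

@ThreadSafe
public class BoundedRange {
    @GuardedBy("this") private int lower = 0;
    @GuardedBy("this") private int upper = 10;

    public synchronized void setLower(int value) {
        if (value > upper)
            throw new IllegalArgumentException("lower cannot exceed upper");
        lower = value;
    }

    public synchronized void setUpper(int value) {
        if (value < lower)
            throw new IllegalArgumentException("upper cannot be below lower");
        upper = value;
    }

    public synchronized boolean contains(int value) {
        return value >= lower && value <= upper;
    }
}

Because both fields are read and written only while holding the intrinsic lock on this, the check and the update happen atomically and the invariant can never be observed in a broken state.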

 

 

2.5. Liveness and Performance

Listing 2.8. Servlet that Caches its Last Request and Result.

 

 

@ThreadSafe
public class CachedFactorizer implements Servlet {
    @GuardedBy("this") private BigInteger lastNumber;
    @GuardedBy("this") private BigInteger[] lastFactors;
    @GuardedBy("this") private long hits;
    @GuardedBy("this") private long cacheHits;

    public synchronized long getHits() { return hits; }
    public synchronized double getCacheHitRatio() {
        return (double) cacheHits / (double) hits;
    }

    public void service(ServletRequest req, ServletResponse resp) {
        BigInteger i = extractFromRequest(req);
        BigInteger[] factors = null;
        synchronized (this) {
            ++hits;
            if (i.equals(lastNumber)) {
                ++cacheHits;
                factors = lastFactors.clone();
            }
        }
        if (factors == null) {
            factors = factor(i);
            synchronized (this)  {
                lastNumber = i;
                lastFactors = factors.clone();
            }
        }
        encodeIntoResponse(resp, factors);
    }
}

 

 

The restructuring of CachedFactorizer provides a balance between simplicity (synchronizing the entire method) and concurrency (synchronizing the shortest possible code paths). Acquiring and releasing a lock has some overhead, so it is undesirable to break down synchronized blocks too far (such as factoring ++hits into its own synchronized block), even if this would not compromise atomicity. CachedFactorizer holds the lock when accessing state variables and for the duration of compound actions, but releases it before executing the potentially long-running factorization operation. This preserves thread safety without unduly affecting concurrency; the code paths in each of the synchronized blocks are "short enough".

 


There is frequently a tension between simplicity and performance. When implementing a synchronization policy, resist the temptation to prematurely sacrifice simplicity (potentially compromising safety) for the sake of performance.


Avoid holding locks during lengthy computations or operations at risk of not completing quickly, such as network or console I/O.
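
One common way to follow this advice is to copy the needed state while holding the lock and to perform the slow I/O only after the lock has been released, as in this hypothetical sketch (not a listing from the book):

public class StatusReporter {
    @GuardedBy("this") private long requests;
    @GuardedBy("this") private long errors;

    public synchronized void recordRequest(boolean failed) {
        ++requests;
        if (failed)
            ++errors;
    }

    public void report() {
        long requestsSnapshot, errorsSnapshot;
        synchronized (this) {            // hold the lock only long enough to copy state
            requestsSnapshot = requests;
            errorsSnapshot = errors;
        }
        // Console I/O runs with the lock released, so other threads are not blocked by it.
        System.out.println("requests=" + requestsSnapshot + " errors=" + errorsSnapshot);
    }
}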


 

 
