class Future
{
    private volatile boolean ready;
    private Object data;

    public Object get()
    {
        if (!ready) return null;   // unsynchronized read of the volatile flag
        return data;
    }

    public synchronized void setOnce(Object o)
    {
        if (ready) throw new IllegalStateException("already set");
        data = o;                  // write the payload first...
        ready = true;              // ...then publish it via the volatile flag
    }
}
It said: "if a thread reads ready, there is a happens-before edge from the write of ready to the read of ready that guarantees visibility of data."
What I know from my learning:
volatile ensures that every read/write goes to main memory instead of staying only in a cache or register;
volatile restricts reordering: in setOnce(), data = o can only be scheduled after if (ready) throw ... and before ready = true; this guarantees that if get() sees ready == true, data must be o.
My confusion is:
Is it possible that thread 1 is in setOnce() and has reached the point after data = o; but before ready = true;, while at the same time thread 2 enters get(), reads ready as false, and returns null? Then thread 1 continues with ready = true.
In this scenario, thread 2 didn't see the new data even though data had already been assigned its new value in thread 1.
get() isn't synchronized, which means the synchronized lock cannot protect setOnce(): a thread calling get() needn't acquire the lock to access ready and data, so threads are not guaranteed to see the latest value of data. By this I mean the lock only guarantees visibility between synchronized blocks. Even while one thread is running the synchronized setOnce(), another thread can still enter get() and access ready and data without blocking, and may see old values of these variables.
In get(), if ready == true, must data be o? That is, is this thread guaranteed to see data? data is not volatile and get() is not synchronized, so could this thread see an old value from the cache?
Thanks!
Solution
volatile ensures that every read/write goes to main memory instead of staying only in a cache or register;
Nope. It just ensures it's visible to other threads. On modern hardware, that doesn't require accessing memory. (Which is a good thing; main memory is slow.)
volatile restricts reordering: in setOnce(), data = o can only be scheduled after if (ready) throw ... and before ready = true; this guarantees that if get() sees ready == true, data must be o.
That's correct.
Is it possible that thread 1 is in setOnce() and has reached the point after data = o; but before ready = true;, while at the same time thread 2 enters get(), reads ready as false, and returns null? Then thread 1 continues with ready = true. In this scenario, thread 2 didn't see the new data even though data had already been assigned its new value in thread 1.
Yes, but if that's a problem, then you shouldn't be using code like this. Presumably, the API for this code would be that get is guaranteed to see the result if called after setOnce returns. Obviously, you can't guarantee that get will see the result before we have finished producing it.
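For illustration, here is a minimal usage sketch of that contract (the demo class, the spin loop, and the "hello" value are my own assumptions, not part of the original question): a reader that races the writer may legitimately get null, but once setOnce has returned, every subsequent get sees the value.

class FutureDemo
{
    public static void main(String[] args) throws InterruptedException
    {
        Future f = new Future();

        Thread consumer = new Thread(() -> {
            Object value;
            // Spin until the value is published; a null return just means
            // "not published yet", not "the value will never become visible".
            while ((value = f.get()) == null) {
                Thread.onSpinWait();
            }
            System.out.println("saw: " + value);
        });

        consumer.start();
        f.setOnce("hello");   // once this returns, any later get() is guaranteed to see "hello"
        consumer.join();
    }
}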
get() isn't synchronized, which means the synchronized lock cannot protect setOnce(): a thread calling get() needn't acquire the lock to access ready and data, so threads are not guaranteed to see the latest value of data. By this I mean the lock only guarantees visibility between synchronized blocks. Even while one thread is running the synchronized setOnce(), another thread can still enter get() and access ready and data without blocking, and may see old values of these variables.
No. And if this were true, synchronization would be almost impossible to use. For example, a common pattern is to create an object, then acquire the lock on a collection and add the object to the collection. This wouldn't work if acquiring the lock on the collection didn't guarantee that the writes involved in the creation of the object were visible.
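A minimal sketch of that pattern (the Registry class, field, and method names are mine, not from the answer): the writes made while constructing the object are plain, unsynchronized writes, yet any thread that later locks the collection and finds the object in it is guaranteed to see them.

import java.util.ArrayList;
import java.util.List;

class Registry
{
    private final List<int[]> items = new ArrayList<>();

    public void publish()
    {
        int[] point = { 1, 2, 3 };      // ordinary, unsynchronized writes

        synchronized (items) {          // releasing this lock happens-before
            items.add(point);           // a later acquire by another thread,
        }                               // which therefore also sees the writes above
    }

    public int[] takeFirst()
    {
        synchronized (items) {
            return items.isEmpty() ? null : items.get(0);
        }
    }
}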
In get(), if ready == true, must data be o? That is, is this thread guaranteed to see data? data is not volatile and get() is not synchronized, so could this thread see an old value from the cache?
Java's volatile is defined such that a thread which sees a write to a volatile variable is also guaranteed to see every other memory write that the writing thread made before that volatile write. This is not true in other languages (such as C or C++). This may make Java's volatiles more expensive on some platforms, but fortunately not on typical platforms.
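In other words, the volatile write of ready "publishes" every write the same thread made before it. A stripped-down sketch of that rule (the class and field names here are my own, chosen for illustration):

class Publish
{
    private int a;                  // plain, non-volatile field
    private volatile boolean flag;  // volatile publication flag

    void writer()
    {
        a = 42;                     // ordinary write
        flag = true;                // volatile write: publishes the write to a as well
    }

    void reader()
    {
        if (flag) {                 // if this volatile read observes the write above...
            int r = a;              // ...r is guaranteed to be 42, even though a is not volatile
        }
    }
}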
Also, please don't talk about "in the cache". This has nothing to do with caches; that is a common misunderstanding. It has to do with visibility, not caching. Modern caches are kept fully coherent by the hardware (punch "MESI protocol" into your favorite search engine to learn more) and don't require anything special to ensure visibility.