A few caveats about lazy val in Scala

Lazy Vals in Scala: A Look Under the Hood

02/24/16 by Markus Hauck


Scala allows the special keyword lazy in front of val in order to change the val to one that is lazily initialized. While lazy initialization seems tempting at first, the concrete implementation of lazy vals in scalac has some subtle issues. This article takes a look under the hood and explains some of the pitfalls: we see how lazy initialization is implemented, as well as scenarios where a lazy val can crash your program, inhibit parallelism, or show other unexpected behavior.

 

Introduction

This post was originally inspired by the talk Hands-on Dotty (slides) by Dmitry Petrashko, given at Scala World 2015. Dmitry gives a wonderful talk about Dotty and explains some of the lazy val pitfalls as currently present in Scala and how their implementation in Dotty differs. This post is a discussion of lazy vals in general followed by some of the examples shown in Dmitry Petrashko’s talk, as well as some further notes and insights.

How lazy works

The main characteristic of a lazy val is that the bound expression is not evaluated immediately, but once, on the first access [1]. When the initial access happens, the expression is evaluated and the result bound to the identifier of the lazy val. On subsequent accesses, no further evaluation occurs: instead the stored result is returned immediately.

Given the characteristic above, using the lazy modifier seems like an innocent thing to do: when we are defining a val, why not also add lazy as a speculative “optimization”? In a moment we will see why this is typically not a good idea, but before we dive into this, let’s recall the semantics of a lazy val first.

When we assign an expression to a lazy val like this:

lazy val two: Int = 1 + 1

we expect that the expression 1 + 1 is bound to two, but the expression is not yet evaluated. On the first (and only the first) access of two from somewhere else, the stored expression 1 + 1 is evaluated and the result (2 in this case) is returned. On subsequent accesses of two no evaluation happens: the stored result of the evaluation was cached and is returned instead.
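
To make this “evaluate once” behavior visible, here is a minimal sketch you can try in the REPL. The Demo wrapper and the println are not part of the original example; they only exist to make the evaluation observable:

object Demo {
  lazy val two: Int = {
    println("evaluating")   // side effect just to show when evaluation happens
    1 + 1
  }
}

// scala> Demo.two
// evaluating
// res0: Int = 2
// scala> Demo.two
// res1: Int = 2            // no "evaluating" this time: the cached result is returned
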

This property of “evaluate once” is a very strong one. Especially if we consider a multithreaded scenario: what should happen if two threads access our lazy val at the same time? Given the property that evaluation occurs only once, we have to introduce some kind of synchronization in order to avoid multiple evaluations of our bound expression. In practice, this means the bound expression will be evaluated by one thread, while the other(s) will have to wait until the evaluation has completed, after which the waiting thread(s) will see the evaluated result.
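
As a small sketch of this guarantee (the EvaluateOnce and Race objects are made up for illustration), two Futures can race on the same lazy val and the initializer still runs exactly once; the second thread simply blocks until the result is available:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent._
import scala.concurrent.duration._

object EvaluateOnce {
  lazy val answer: Int = {
    println(s"evaluating on ${Thread.currentThread.getName}")
    Thread.sleep(500)   // stand-in for an expensive computation
    42
  }
}

object Race {
  def run = {
    // both Futures access the lazy val at the same time;
    // "evaluating on ..." is printed only once, both see 42
    val result = Future.sequence(Seq(
      Future(EvaluateOnce.answer),
      Future(EvaluateOnce.answer)
    ))
    Await.result(result, 1.minute)
  }
}
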

How is this mechanism implemented in Scala? Luckily, we can have a look at SIP-20. The example class LazyCell with a lazy val value is defined as follows:

final class LazyCell {
  lazy val value: Int = 42
}

A handwritten snippet equivalent to the code the compiler generates for our LazyCell looks like this:

final class LazyCell {
  @volatile var bitmap_0: Boolean = false                   // (1)
  var value_0: Int = _                                      // (2)
  private def value_lzycompute(): Int = {
    this.synchronized {                                     // (3)
      if (!bitmap_0) {                                      // (4)
        value_0 = 42                                        // (5)
        bitmap_0 = true
      }
    }
    value_0
  }
  def value = if (bitmap_0) value_0 else value_lzycompute() // (6)
}

At (3) we can see the use of a monitor this.synchronized {...} in order to guarantee that initialization happens only once, even in a multithreaded scenario. The compiler uses a simple flag ((1)) to track the initialization status ((4) & (6)) of the var value_0 ((2)) which holds the actual value and is mutated on first initialization ((5)).

What we can also see in the above implementation is that a lazy val, unlike a regular val, has to pay the cost of checking the initialization state on each access ((6)). Keep this in mind when you are tempted to (try to) use lazy val as an “optimization”.

Now that we have a better understanding of the underlying mechanisms of the lazy modifier, let’s look at some scenarios where things get interesting.

Scenario 1: Concurrent initialization of multiple independent vals is sequential

Remember the use of this.synchronized { } above? This means we lock the whole instance during initialization. As a consequence, multiple lazy vals defined inside, e.g., an object, but accessed concurrently from multiple threads, will still all be initialized sequentially. The code snippet below demonstrates this by defining two lazy vals ((1) & (2)) inside the ValStore object. In the object Scenario1 we request both of them inside a Future ((3)), but at runtime the two lazy vals are computed one after the other: we have to wait for the initialization of ValStore.fortyFive before we can continue with ValStore.fortySix.

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent._
import scala.concurrent.duration._

def fib(n: Int): Int = n match {
  case x if x < 0 =>
    throw new IllegalArgumentException(
      "Only positive numbers allowed")
  case 0 | 1 => 1
  case _ => fib(n-2) + fib(n-1)
}

object ValStore {
  lazy val fortyFive = fib(45)                   // (1)
  lazy val fortySix  = fib(46)                   // (2)
}

object Scenario1 {
  def run = {
    val result = Future.sequence(Seq(            // (3)
      Future {
        ValStore.fortyFive
        println("done (45)")
      },
      Future {
        ValStore.fortySix
        println("done (46)")
      }
    ))
    Await.result(result, 1.minute)
  }
}

You can test this by copying the above snippet, :paste-ing it into a Scala REPL and running Scenario1.run. You will then be able to see how it first evaluates ValStore.fortyFive, then prints the text, and afterwards does the same for the second lazy val. Instead of an object, you can also imagine this happening in a normal class that has multiple lazy vals defined.
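
If the two computations really are independent and you want the parallelism back, one possible workaround, sketched here and not part of the original example (Scenario1Parallel is a made-up name, and it assumes the fib definition from the snippet above is still in scope), is to perform the work inside the Futures themselves instead of funneling both through ValStore’s monitor:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent._
import scala.concurrent.duration._

object Scenario1Parallel {
  def run = {
    // each Future does its own computation, so no shared
    // monitor serializes the two calculations
    val result = Future.sequence(Seq(
      Future { fib(45); println("done (45)") },
      Future { fib(46); println("done (46)") }
    ))
    Await.result(result, 1.minute)
  }
}
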

Scenario 2: Potential deadlock when accessing lazy vals

In the previous scenario, we only had to suffer from decreased performance when multiple lazy vals inside an instance are accessed from multiple threads at the same time. This may be surprising, but it is not a deal breaker. The following scenario is more severe:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent._
import scala.concurrent.duration._

object A {
  lazy val base = 42
  lazy val start = B.step
}

object B {
  lazy val step = A.base
}

object Scenario2 {
  def run = {
    val result = Future.sequence(Seq(
      Future { A.start },                        // (1)
      Future { B.step }                          // (2)
    ))
    Await.result(result, 1.minute)
  }
}

Here we define three lazy vals across two objects, A and B. The resulting dependencies look like this:

[Figure: dependency graph — A.start depends on B.step, which depends on A.base]

The A.start val depends on B.step, which in turn depends on A.base. Although there is no cyclic relation here, running this code can lead to a deadlock:

scala> :paste
...
scala> Scenario2.run
java.util.concurrent.TimeoutException: Futures timed out after [1 minute]
  at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
  at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
  at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
  at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
  at scala.concurrent.Await$.result(package.scala:190)
  ... 35 elided

(if it succeeds by chance on your first try, give it another chance). So what is happening here? The deadlock occurs because the two Futures in (1) and (2), when trying to access the lazy vals, will both lock the respective object A / B, thereby denying any other thread access. In order to make progress, however, the thread holding the lock on A also needs B.step, and the thread holding the lock on B needs A.base. This is a deadlock situation. While this is a fairly simple scenario, imagine a more complex one where more objects/classes are involved, and you can see why overusing lazy vals can get you into trouble. As in the previous scenario, the same can occur inside a class, although it is a little harder to construct the situation. In general this situation is unlikely to happen because of the exact timing required to trigger the deadlock, but it is equally hard to reproduce in case you encounter it.
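
To make the lock ordering explicit, here is a rough analogue of what the two threads do with the object monitors. This is only a sketch with hand-written synchronized blocks, not the compiler’s actual output; LockA, LockB and Scenario2Analogue are made-up stand-ins for the monitors of A and B:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent._
import scala.concurrent.duration._

object LockA
object LockB

object Scenario2Analogue {
  def run = {
    val result = Future.sequence(Seq(
      // thread 1: holds A's monitor, then wants B's (A.start -> B.step)
      Future { LockA.synchronized { Thread.sleep(100); LockB.synchronized { 42 } } },
      // thread 2: holds B's monitor, then wants A's (B.step -> A.base)
      Future { LockB.synchronized { Thread.sleep(100); LockA.synchronized { 42 } } }
    ))
    Await.result(result, 1.minute) // typically times out for the same reason as Scenario2
  }
}
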

Scenario 3: Deadlock in combination with synchronization

Playing with the fact that lazy val initialization uses a monitor (synchronized), there is another scenario where we can get into serious trouble.

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent._
import scala.concurrent.duration._

trait Compute {
  def compute: Future[Int] =
    Future(this.synchronized { 21 + 21 })        // (1)
}

object Scenario3 extends Compute {
  def run: Unit = {
    lazy val someVal: Int =
      Await.result(compute, 1.minute)            // (2)
    println(someVal)
  }
}

Again, you can test this for yourself by copying it and doing a :paste inside a Scala REPL:

scala> :paste
...
scala> Scenario3.run
java.util.concurrent.TimeoutException: Futures timed out after [1 minute]
  at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
  at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
  at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
  at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
  at scala.concurrent.Await$.result(package.scala:190)
  at Scenario3$.someVal$lzycompute$1(<console>:62)
  at Scenario3$.someVal$1(<console>:62)
  at Scenario3$.run(<console>:63)
  ... 33 elided

The Compute trait on its own is harmless, but note that it uses synchronized in (1). In combination with the synchronized initialization of the lazy val inside Scenario3, however, we have a deadlock situation. When we access someVal ((2)) for the println call, the triggered evaluation of the lazy val grabs the lock on Scenario3, which prevents the Future inside compute from ever acquiring that same lock: a deadlock.
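
One possible way out, sketched here under the assumption that the blocking Await itself is needed (Scenario3Fixed is a made-up name, and the Compute trait from the snippet above is assumed to be in scope): if someVal is a plain val, no monitor on the enclosing object is held while waiting, so compute’s synchronized block can proceed:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent._
import scala.concurrent.duration._

// illustrative sketch, not from the original article
object Scenario3Fixed extends Compute {
  def run: Unit = {
    // a plain val takes no lock on Scenario3Fixed, so the
    // this.synchronized block inside compute is free to run
    val someVal: Int = Await.result(compute, 1.minute)
    println(someVal)
  }
}
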

Conclusion

Before we sum this post up, please note that in the examples above we use Future and synchronized, but we can easily get into the same situation by using other concurrency and synchronization primitives as well.

In summary, we had a look under the hood of Scala’s implementation of lazy vals and discussed some surprising cases:

  • sequential initialization due to monitor on instance
  • deadlock on concurrent access of lazy vals without cycle
  • deadlock in combination with other synchronization constructs

As you can see, lazy vals should not be used as a speculative optimization without further thought about the implications. Furthermore, after becoming aware of the issues above, you might want to replace some of your lazy vals with a regular val or def, depending on your initialization needs; the three alternatives are sketched below.
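
As a minimal sketch of those alternatives (Example and compute() are hypothetical, with compute() standing in for some expensive expression):

class Example {
  private def compute(): Int = 1 + 1     // stand-in for an expensive expression

  val eager: Int         = compute()     // evaluated once, when the instance is constructed
  lazy val onDemand: Int = compute()     // evaluated once, on first access, guarded by a monitor
  def everyTime: Int     = compute()     // re-evaluated on every call, no caching, no locking
}
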
Luckily, the Dotty platform has an alternative implementation of lazy val initialization (by Dmitry Petrashko) which does not suffer from the pitfalls discussed in this post. For more information on Dotty, you can watch Dmitry’s talk linked in the “References” section and head over to the project’s GitHub page.

All examples have been tested with Scala 2.11.7.

References

Footnotes:

[1] This is not completely true: if the first access throws an exception, initialization will be attempted again on subsequent accesses, until the first successful initialization.
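
A minimal sketch of this retry behavior (the Flaky object and its mutable counter are made up purely to make the failed attempt observable):

object Flaky {
  var attempts = 0
  lazy val value: Int = {
    attempts += 1
    if (attempts == 1) throw new RuntimeException("first attempt fails")
    42
  }
}

// scala> Flaky.value   // throws RuntimeException, value stays uninitialized
// scala> Flaky.value   // the initializer runs again and now yields 42
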

 


Markus Hauck

Markus Hauck works as a consultant and Scala trainer at codecentric. He is a passionate functional programmer and loves to leverage the type system.

Reposted from: https://my.oschina.net/u/2963099/blog/1589130
