
Reposted from: http://webkit.sed.hu/blog/20090528/technical-discussion-part-3-property-caching-madness-jit

Technical discussion part 3: property caching madness in JIT

Dynamic languages like JavaScript have a lot of interesting features: we can create or destroy classes at runtime and assign anything to any variable regardless of its type. These features make them popular, and since computers are getting faster, more and more tasks are performed in these languages. The fact that a language is dynamic does not necessarily mean that programs written in it have to be slow. Perhaps they will never be as fast as a compiled language, but there are some nice optimization algorithms for them, and those algorithms are different from static compiler optimizations. One such technique is property and call target caching. Let's see how these caches are implemented in WebKit.

Property caching is based on the observation that the type of a value at a given code location is the same most of the time even for dynamic languages. The following example shows this behaviour in practice:

var array = [ "x", "y", "z" ]
var object = { separator : ","  }
var s = "a"
for (var i = 0; i < 10; ++i) {
  s += array[i % 3] + object.separator
}

As you can see, the types do not change in this small example: the variables retain their type even after an assignment (s += ...) operation. Resolving an identifier through the current scope chain or accessing a member of an object is a very slow operation. How can we make it faster? Let's cache the type and the result of the last resolve operation. Next time, when this particular location is reached again, we only have to compare the type of the variable to the cached type. If they are the same, we can use the cached value. This holds for function calls as well: we can cache the target of a call and use this cached value for fast calls (which means we can skip several checks, like "Has the JIT code been generated already?" or "Does the arity of the function match the number of passed arguments?").

In the following, I will focus on the JIT level. The JIT compiler provides some utility functions for the high-level resolve operations; when there is an opportunity to cache a value, those functions are called. The cache itself operates in two different ways. One way is to patch the code itself and store these values directly into it. Since those values are pointers or 32-bit integers, on ARM we put these constants into the constant pool rather than directly patching the payload field of the instructions, as is done on x86. Sometimes even the instructions must be changed. A JSObject in JavaScriptCore can store its members in an inline buffer, provided that it has at most four members (this approach saves some memory):

class JSObject {
  // ...
  union {
    PropertyStorage* propertyStorage;
    JSValue inlineStorage[4];
  };
};

By default, a "ldr reg, [JSObjectPtrReg, GET_FIELD_OFFSET(JSObject, propertyStorage)]" instruction is generated into the JIT code to access the cached property storage. This ldr instruction may be patched to an "add reg, JSObjectPtrReg, GET_FIELD_OFFSET(JSObject, propertyStorage)" if the cached JSObject does not allocate a propertyStorage object. On x86, this is a "mov reg, [address]" to "lea reg, [address]" transformation.

The second way is to create small stub functions, which compare the input type to their cached value. These stub functions are connected by unconditional branches. The following figure shows the general concept; some technical details are omitted:

[get_by_id fast case entry]
[check type] --- fail --> [stub entry]
[get cached value]        [check type] --- fail --> [stub entry]
      <----- return ----  [get cached value]        [check type] --- fail --> [slow case entry]
      <----------------- return ------------------  [get cached value]        [call resolve]
      <------------------------------- return ------------------------------  [return]
[get_by_id done]

There is a limit on the number of such stub functions (4 per operation) to avoid long chains. It is probably not worth caching more than 4 values; doing so would just waste memory and slow the lookup down.
