Lower Lock % and Number of Slow Queries

Gauges tracks several websites. Some get a lot of traffic and others don’t. The sites that get a lot of traffic tend to stay hot and sit in RAM. The sites that get little traffic eventually get pushed out of RAM.

This is why, for Gauges, we can have 1GB of RAM on the server and over 14GB of indexes, yet Mongo hums along. Better yet, of that 1GB of RAM we only use around 175MB.

RAM, What Needs to Fit There

You have probably heard that MongoDB recommends keeping all data in RAM if you can. If not, they suggest at least keeping your indexes in RAM.

What I haven’t heard people say as much is that you really just need to keep your active data or indexes in RAM. For stats, the active data is the current day or week, which is a much smaller data set than all of the data for all time.
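If you want to sanity-check how much of your data and indexes would need to fit, MongoDB exposes those numbers directly. Here is a minimal sketch with pymongo, assuming a local mongod and a hypothetical "gauges" database; dbstats reports data and index sizes, and serverStatus reports how much of the mongod process is actually resident in RAM:

```python
# A minimal sketch, assuming a local mongod and a hypothetical
# "gauges" database: compare data/index size to resident memory.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
db = client["gauges"]

# dbstats reports sizes in bytes (scale defaults to 1)
db_stats = db.command("dbstats")
print("total index size (MB): %.1f" % (db_stats["indexSize"] / 1024.0 / 1024.0))
print("data size (MB):        %.1f" % (db_stats["dataSize"] / 1024.0 / 1024.0))

# serverStatus reports mem.resident in megabytes
server_status = db.command("serverStatus")
print("resident memory (MB):  %d" % server_status["mem"]["resident"])
```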

Write Heavy

The other interesting thing about Gauges is that we are extremely write-heavy, as you would expect for a stats app. Track requests, triggered by the tracking script loading on a website, make up over 95% of all requests to Gauges.

Some of these track requests are for sites that rarely get hit, which means the data that needs to be updated has been pushed out of RAM and is just sitting on disk.

Global Lock is Global

As you probably know, MongoDB has a global lock. The longer your writes take, the higher your lock percentage is. Updating documents that are in RAM is super fast.

Updating a document that has been pushed to disk means it first has to be read from disk into memory, updated, and then written back to disk. This operation is slow and happens while inside the lock.

Updating a lot of random documents that rarely get updated and have been pushed out of RAM can lead to slow writes and a high lock percentage.
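You can watch this directly: the globalLock counters in serverStatus are what mongostat’s "locked %" column is derived from. Here is a minimal sketch with pymongo, assuming the MongoDB 1.8/2.0 field names this post was written against (later releases restructured this output):

```python
# A minimal sketch of computing lock percentage from serverStatus.
# lockTime/totalTime are the MongoDB 1.8/2.0 globalLock counters
# (both in microseconds); later releases changed this structure.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
status = client.admin.command("serverStatus")

global_lock = status["globalLock"]
lock_pct = 100.0 * global_lock["lockTime"] / global_lock["totalTime"]
print("time spent in the global write lock: %.2f%%" % lock_pct)
```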

More Reads Make For Faster Writes

The trick to lowering your lock percentage, and thus having faster updates, is to query the document you are going to update before you perform the update. Querying before doing an upsert might seem counterintuitive at first glance, but it makes sense when you think about it.

The read ensures that whatever document you are going to update is in RAM. This means the update, which will happen immediately after the read, always updates the document in RAM, which is super fast. I think of it as warming the database for the document you are about to update.
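To make the pattern concrete, here is a minimal sketch of query-before-upsert with pymongo. The "hits" collection, its fields, and the track function are made up for illustration; the post doesn’t show Gauges’ actual schema:

```python
# A minimal sketch of the query-before-update pattern with pymongo.
# The "hits" collection and its fields are hypothetical.
from datetime import datetime
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
hits = client["gauges"]["hits"]

def track(site_id):
    query = {"site_id": site_id, "date": datetime.utcnow().strftime("%Y-%m-%d")}

    # Warm-up read: if the document has been paged out, this pulls it
    # back into RAM before we take the exclusive write lock.
    hits.find_one(query)

    # The update now works against a document that is already in memory,
    # so the time spent holding the write lock stays small.
    hits.update_one(query, {"$inc": {"views": 1}}, upsert=True)
```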

Based on these graphs, I am pretty sure you will be able to tell that it was the evening of January 27th when I started pushing the query-before-update changes out:

[Graph: Lock Percentage]

[Graph: Slow Queries]

Obviously, this dramatically increased the number of queries we perform, but it has added less than a few milliseconds to our application response time, and the database is much happier.

[Graph: Number of Reads/Writes]

Reads are blue and writes are yellow.

Granted, my explanation above is simplistic, but you get the gist. Querying before updating ensures that the document is in RAM and that the update is fast. I remember reading about this technique somewhere and kept thinking I should try it. Finally I did, and it definitely helped.

Note: At the time of this writing, we are running MongoDB 1.8.x. MongoDB 2.x has significant improvements with regard to locking and pulling documents from disk, which is not to say that this technique won’t still help.

If you want to learn more about how we use MongoDB for Gauges, you can check out my MongoDB for Analytics presentation.

Ref: http://mongotips.com/b/lower-lock-and-number-of-slow-queries/
