A summary of two mnesia problems: "Mnesia is overloaded" and "too many db tables"


1. Why mnesia dumps its log

As described in 《mnesia之transaction》, both the operations of a transaction and its final outcome are recorded in the latest.log file. Note that only operations touching tables of type disc_copies or disc_only_copies are logged to latest.log; operations that touch only ram_copies tables are not logged:

log(C) when C#commit.disc_copies == [],
            C#commit.disc_only_copies == [],
            C#commit.schema_ops == [] ->
    ignore;

To keep the log file from growing without bound and consuming large amounts of disk space, mnesia periodically dumps it. A dump replays the transaction operations and outcomes recorded in the log and writes the resulting data into the *.DAT, *.DCL and *.DCD files.

Note: mnesia actually stores its data in ets and dets. ram_copies tables use ets. disc_copies tables also use ets, and the dump persists their data into files with the DCD (disc copy data) or DCL (disc copy log) suffix. disc_only_copies tables use dets, stored in files with the DAT suffix. The schema table is special: although it is a dets table, its contents are also kept in memory.
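The storage types above map directly onto options of mnesia:create_table/2. A minimal sketch (the table names here are hypothetical, and a started mnesia node with a disc schema is assumed):

```erlang
%% Only writes to user_tab and blob_tab produce entries in latest.log;
%% writes to cache_tab live purely in ets and are never logged.
mnesia:create_table(cache_tab, [{ram_copies,       [node()]}]),  %% ets only
mnesia:create_table(user_tab,  [{disc_copies,      [node()]}]),  %% ets + DCD/DCL
mnesia:create_table(blob_tab,  [{disc_only_copies, [node()]}]).  %% dets (.DAT)
```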

Several situations trigger a mnesia dump:

(1) Timer-triggered

After mnesia starts, the mnesia_controller process sets a timer that triggers the dump.

Code fragment:

init([Parent]) ->
    process_flag(trap_exit, true),
    mnesia_lib:verbose("~p starting: ~p~n", [?SERVER_NAME, self()]),
    All = mnesia_lib:all_nodes(),
    Diff = All -- [node() | val(original_nodes)],
    mnesia_lib:unset(original_nodes),
    mnesia_recover:connect_nodes(Diff),
    Ref = next_async_dump_log(),
    mnesia_dumper:start_regulator(),
    Empty = gb_trees:empty(),
    {ok, #state{supervisor = Parent,
                dump_log_timer_ref = Ref,
                loader_queue = Empty,
                late_loader_queue = Empty}}.

 

next_async_dump_log() ->
    Interval = mnesia_monitor:get_env(dump_log_time_threshold),
    Msg = {next_async_dump_log, time_threshold},
    Ref = erlang:send_after(Interval, self(), Msg),
    Ref.

 

handle_info({next_async_dump_log, InitBy}, State) ->
    async_dump_log(InitBy),
    Ref = next_async_dump_log(),
    noreply(State#state{dump_log_timer_ref = Ref});

The default interval for the timed dump is 3 minutes:

default_env(dump_log_time_threshold) ->
    timer:minutes(3);

The interval can be changed by starting the node with the flag -mnesia dump_log_time_threshold 300000.

(2) Triggered after a number of log records

Every call to mnesia_log:log(C) or mnesia_log:slog(C) decrements the counter trans_log_writes_left by 1; when the counter drops to 0 or below, a dump is triggered.

mnesia_log:

log(C) ->
    case mnesia_monitor:use_dir() of
    true ->
        ...
        mnesia_dumper:incr_log_writes();
    false ->
        ignore
    end.

 

mnesia_dumper:

incr_log_writes() ->
    Left = mnesia_lib:incr_counter(trans_log_writes_left, -1),
    if
    Left > 0 ->
        ignore;
    true ->
        adjust_log_writes(true)
    end.

 

adjust_log_writes(DoCast) ->
    ...
    case DoCast of
    false ->
        ignore;
    true ->
        mnesia_controller:async_dump_log(write_threshold)
    end,
    ...

By default, a dump is triggered after 100 records have been written to the latest.log file.

mnesia_monitor:

init(Parent) ->
    ...
    Left = get_env(dump_log_write_threshold),
    mnesia_lib:set_counter(trans_log_writes_left, Left),
    ...

Likewise, this can be changed by starting the node with the flag -mnesia dump_log_write_threshold 5000.
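Equivalently, both thresholds can be placed in a config file loaded at startup; a sketch of a sys.config fragment (the values here are only illustrative):

```erlang
%% sys.config -- loaded with: erl -config sys
[{mnesia, [{dump_log_write_threshold, 5000},      %% dump after 5000 log writes
           {dump_log_time_threshold,  300000}]}]. %% or after 5 minutes (in ms)
```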

A dump does the following:

(1) Dump the latest.log file

latest.log is renamed to previous.log, a fresh latest.log is created, and the contents of previous.log are then replayed. For a table with storage type disc_copies (other than schema), the sizes of its DCL and DCD files are compared: when sizeof(DCD) / sizeof(DCL) is below the configured threshold, the entire table is written out to the DCD file; otherwise the changes are appended to the DCL file. The default threshold is 4 and can be changed with -mnesia dc_dump_limit Num. Tables with storage type disc_only_copies need no further processing.

open_disc_copies(Tab, InitBy) ->
    DclF = mnesia_lib:tab2dcl(Tab),
    DumpEts =
        case file:read_file_info(DclF) of
            {error, enoent} ->
                false;
            {ok, DclInfo} ->
                DcdF = mnesia_lib:tab2dcd(Tab),
                case file:read_file_info(DcdF) of
                    {error, Reason} ->
                        mnesia_lib:dbg_out("File ~p info_error ~p ~n",
                                           [DcdF, Reason]),
                        true;
                    {ok, DcdInfo} ->
                        Mul = case ?catch_val(dc_dump_limit) of
                                  {'EXIT', _} -> ?DumpToEtsMultiplier;
                                  Val -> Val
                              end,
                        DcdInfo#file_info.size =< (DclInfo#file_info.size * Mul)
                end
        end,
    if
        DumpEts == false; InitBy == startup ->
            mnesia_log:open_log({?MODULE, Tab},
                                mnesia_log:dcl_log_header(),
                                DclF,
                                mnesia_lib:exists(DclF),
                                mnesia_monitor:get_env(auto_repair),
                                read_write),
            put({?MODULE, Tab}, {opened_dumper, dcl}),
            true;
        true ->
            mnesia_log:ets2dcd(Tab),
            put({?MODULE, Tab}, already_dumped),
            false
    end.
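Stripped of the file handling, the decision above reduces to a single size comparison; a sketch with made-up sizes:

```erlang
%% DcdSize and DclSize in bytes; Mul is dc_dump_limit (default 4).
%% true  -> rewrite the whole table into the DCD file (mnesia_log:ets2dcd/1)
%% false -> keep appending changes to the DCL file
should_dump_to_dcd(DcdSize, DclSize, Mul) ->
    DcdSize =< DclSize * Mul.

%% e.g. should_dump_to_dcd(4000000, 1000000, 4) -> true
%%      should_dump_to_dcd(9000000, 2000000, 4) -> false
```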

(2) Dump the mnesia_decision table

The current contents of the mnesia_decision table are written, overwriting the previous contents, to the decision_tab.log file.

================================================================

In addition, a dump is triggered both at mnesia startup and when a schema transaction executes. The dump triggered by a schema transaction ignores the contents of the log file and does not dump the mnesia_decision table; it only dumps what the schema operation itself touches. Moreover, a schema-transaction dump never runs in parallel with a dump triggered by any other cause.

The lock is taken in prepare_commit:

mnesia_schema:

 

prepare_commit(Tid, Commit, WaitFor) ->
    ...
    case Ops of
    [] ->
        ignore;
    _ ->
        %% We need to grab a dumper lock here, the log may not
        %% be dumped by others, during the schema commit phase.
        mnesia_controller:wait_for_schema_commit_lock()
    end
    ...

The dump is performed in do_commit, after which the lock is released:

mnesia_tm:

 

do_commit(Tid, C, DumperMode) ->
    mnesia_dumper:update(Tid, C#commit.schema_ops, DumperMode),
    ...

 

mnesia_dumper:

 

update(Tid, SchemaOps, DumperMode) ->
    UseDir = mnesia_monitor:use_dir(),
    Res = perform_update(Tid, SchemaOps, DumperMode, UseDir),
    mnesia_controller:release_schema_commit_lock(),
    Res.

2. Why "Mnesia is overloaded" happens

There are two remedies: avoid frequent asynchronous writes, and loosen the relevant mnesia configuration limits.

For a detailed analysis see: http://streamhacker.com/2008/12/10/how-to-eliminate-mnesia-overload-events/

If you're using mnesia disc_copies tables and doing a lot of writes all at once, you've probably run into the following message:

=ERROR REPORT==== 10-Dec-2008::18:07:19 ===
Mnesia(node@host): ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}

This warning event can get really annoying, especially when they start happening every second. But you can eliminate them, or at least drastically reduce their occurrence.

Synchronous Writes

The first thing to do is make sure to use sync_transaction or sync_dirty. Doing synchronous writes will slow down your writes in a good way, since the functions won't return until your record(s) have been written to the transaction log. The alternative, which is the default, is to do asynchronous writes, which can fill the transaction log far faster than it gets dumped, causing the above error report.
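In mnesia terms the difference is just which commit function you call; a minimal sketch (the record argument is hypothetical):

```erlang
%% mnesia:transaction/1 returns once the commit is logged locally
%% (and asynchronously on remote nodes); mnesia:sync_transaction/1
%% waits until all involved nodes have committed, which naturally
%% throttles fast writers.
write_async(Rec) -> mnesia:transaction(fun() -> mnesia:write(Rec) end).
write_sync(Rec)  -> mnesia:sync_transaction(fun() -> mnesia:write(Rec) end).
```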

Mnesia Application Configuration

If synchronous writes aren't enough, the next trick is to modify 2 obscure configuration parameters. The mnesia_overload event generally occurs when the transaction log needs to be dumped, but the previous transaction log dump hasn't finished yet. Tweaking these parameters will make the transaction log dump less often, and the disc_copies tables dump to disk more often. NOTE: these parameters must be set before mnesia is started; changing them at runtime has no effect. You can set them through the command line or in a config file.

dc_dump_limit

This variable controls how often disc_copies tables are dumped from memory. The default value is 4, which means if the size of the log is greater than the size of the table / 4, then a dump occurs. To make table dumps happen more often, increase the value. I've found setting this to 40 works well for my purposes.

dump_log_write_threshold

This variable defines the maximum number of writes to the transaction log before a new dump is performed. The default value is 100, so a new transaction log dump is performed after every 100 writes. If you're doing hundreds or thousands of writes in a short period of time, then there's no way mnesia can keep up. I set this value to 50000, which is a huge increase, but I have enough RAM to handle it. If you're worried that this high value means the transaction log will rarely get dumped when there are very few writes occurring, there's also a dump_log_time_threshold configuration variable, which by default dumps the log every 3 minutes.

How it Works

I might be wrong on the theory since I didn't actually write or design mnesia, but here's my understanding of what's happening. Each mnesia activity is recorded to a single transaction log. This transaction log then gets dumped to table logs, which in turn are dumped to the table file on disk. By increasing the dump_log_write_threshold, transaction log dumps happen much less often, giving each dump more time to complete before the next dump is triggered. And increasing dc_dump_limit helps ensure that the table log is also dumped to disk before the next transaction dump occurs.

3. Solutions

1. The author recommends doing writes with sync_transaction or sync_dirty, arguing that asynchronous writes are what cause the error.

2. The configuration changes must be made when starting Erlang: raise dc_dump_limit from 4 to 40, and raise dump_log_write_threshold from 100 to 50000. On the erl command line:

erl -mnesia dump_log_write_threshold 50000 -mnesia dc_dump_limit 40

4. The "too many db tables" problem

Ets tables
The default limit is 1400; it can be changed with the environment variable ERL_MAX_ETS_TABLES.

This default is very conservative. Our servers typically have tens of GB of RAM, and since ETS mostly just consumes memory, there is no harm in raising the limit.

Back to the problem at hand: ssh ran into trouble because each connection needs 3 ets tables, and each mnesia transaction also consumes 1 ets table.
Once the root cause is known, the fix is easy:
erl -env ERL_MAX_ETS_TABLES NNNNN

For good measure, here is the relevant note from ejabberd's configuration file:

# ERL_MAX_ETS_TABLES: Maximum number of ETS and Mnesia tables
#
# The number of concurrent ETS and Mnesia tables is limited. When the limit is
# reached, errors will appear in the logs:
# ** Too many db tables **
# You can safely increase this limit when starting ejabberd. It impacts memory
# consumption but the difference will be quite small.
#
# Default: 1400
#
#ERL_MAX_ETS_TABLES=1400
