[Repost] ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}
Published: 2019-06-13

Symptom: checking the page, the data looked abnormal; the volume generated today was far below the usual level, which wasn't right.

Finding the cause: looking through the log files, I found several warnings like this: ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}

While searching I found plenty of people abroad who had run into this problem. One thing worth saying: they are very good at describing a problem. I'm copying one such description here, since it matches my situation closely (in the end he never got the answer he wanted either, but that's another story and not the point):

we encountered the following mnesia warning report in our system log:
Mnesia is overloaded: {dump_log, write_threshold}
The log contains several such reports within one second and then
nothing for a while.
Our setup:
* The core is one mnesia table of type disc_copies that contains
persistent state of all entities (concurrent processes) in our
system (one table row for one entity).
* The system consists of 20 such entities.
* Each entity is responsible for updating its state in the table
whenever it changes.
* We use mnesia:dirty_write/2, because we have no dependency
among tables and each entity updates its state only.
In the worst case, there are 20 processes that want to write to the
table, but each to a different row.
Our questions:
* What precisely does the report mean?
* Can we do something about it?
* We plan to scale from units to thousands of entities. Will this
be a problem? If so, how can we overcome it? If not, why not?

 

(Very thorough; most of us could learn a thing or two from that!) Still, let me describe our own system structure, even if only briefly (I couldn't put it as well as the quote above):

We have a module that waits for incoming data; for every datum it receives it spawns a process to handle it, and that process then writes to the data table. This leads straight to the problem, and it is the cause of the warning: frequent asynchronous writes.
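
As a minimal sketch of that pattern (the module name, record, and message shape are hypothetical, not from our actual system), the receiver looks roughly like this:

-module(collector).
-export([loop/0]).

%% Hypothetical record; by default the record name doubles as the table name.
-record(stat, {key, value}).

loop() ->
    receive
        {data, Key, Value} ->
            %% One short-lived process per incoming datum, each doing an
            %% asynchronous dirty write. dirty_write/1 returns immediately,
            %% so nothing ever slows the producers down, and the transaction
            %% log can grow faster than it gets dumped.
            spawn(fun() ->
                      mnesia:dirty_write(#stat{key = Key, value = Value})
                  end),
            loop()
    end.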

Fixing the error: the cause is found, so how do we solve it? This problem has come up N times abroad; someone suggested adding it to the FAQ, but who knows when that will happen, so in the meantime a workaround is needed. That is how I found the following:

If you’re using mnesia disc_copies tables and doing a lot of writes all at
once, you’ve probably run into the following message
=ERROR REPORT==== 10-Dec-2008::18:07:19 ===
Mnesia(node@host): ** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}
These warning events can get really annoying, especially when they start
happening every second. But you can eliminate them, or at least drastically
reduce their occurrence.
Synchronous Writes

The first thing to do is make sure to use sync_transaction or sync_dirty.
Doing synchronous writes will slow down your writes in a good way, since
the functions won’t return until your record(s) have been written to the
transaction log. The alternative, which is the default, is to do asynchronous
writes, which can fill transaction log far faster than it gets dumped, causing
the above error report.
Mnesia Application Configuration

If synchronous writes aren’t enough, the next trick is to modify 2 obscure
configuration parameters. The mnesia_overload event generally occurs
when the transaction log needs to be dumped, but the previous transaction
log dump hasn’t finished yet. Tweaking these parameters will make the
transaction log dump less often, and the disc_copies tables dump to disk
more often. NOTE: these parameters must be set before mnesia is started;
changing them at runtime has no effect. You can set them through the
command line or in a config file.
dc_dump_limit

This variable controls how often disc_copies tables are dumped from
memory. The default value is 4, which means if the size of the log is greater
than the size of table / 4, then a dump occurs. To make table dumps happen
more often, increase the value. I’ve found setting this to 40 works well for
my purposes.
dump_log_write_threshold

This variable defines the maximum number of writes to the transaction log
before a new dump is performed. The default value is 100, so a new
transaction log dump is performed after every 100 writes. If you’re doing
hundreds or thousands of writes in a short period of time, then there’s no
way mnesia can keep up. I set this value to 50000, which is a huge
increase, but I have enough RAM to handle it. If you’re worried that this high
value means the transaction log will rarely get dumped when there are very
few writes occurring, there's also a dump_log_time_threshold configuration
variable, which by default dumps the log every 3 minutes.
How it Works

I might be wrong on the theory since I didn’t actually write or design
mnesia, but here’s my understanding of what’s happening. Each mnesia
activity is recorded to a single transaction log. This transaction log then
gets dumped to table logs, which in turn are dumped to the table file on
disk. By increasing the dump_log_write_threshold, transaction log dumps
happen much less often, giving each dump more time to complete before the
next dump is triggered. And increasing dc_dump_limit helps ensure that the
table log is also dumped to disk before the next transaction dump occurs.
Quoted from: How to Eliminate Mnesia Overload Events
Two fixes are described there: avoid frequent asynchronous writes, or relax the relevant mnesia configuration thresholds.

1. The author recommends doing writes with sync_transaction or sync_dirty, on the grounds that asynchronous writes are what cause this warning.
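
As a sketch of that change, reusing the hypothetical stat record from the sketch above, the worker's write becomes a synchronous call, so each writer blocks until its record has reached the transaction log:

%% Synchronous dirty write: behaves like dirty_write/1, but does not
%% return until the write has been logged, throttling the writers
%% instead of letting them flood the transaction log.
write_stat(Key, Value) ->
    mnesia:sync_dirty(fun() ->
        mnesia:write(#stat{key = Key, value = Value})
    end).

%% Or, if transactional guarantees are wanted as well:
write_stat_tx(Key, Value) ->
    mnesia:sync_transaction(fun() ->
        mnesia:write(#stat{key = Key, value = Value})
    end).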

2. The configuration changes must be made when starting Erlang: the author recommends changing dc_dump_limit from 4 to 40 and dump_log_write_threshold from 100 to 50000. To apply both at erl startup:

erl -mnesia dump_log_write_threshold 50000 -mnesia dc_dump_limit 40
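
The same two settings can also go in a config file instead of on the command line. A minimal sketch, assuming the conventional file name sys.config, loaded with erl -config sys:

%% sys.config -- a hypothetical config file; mnesia reads these values
%% from its application environment at startup.
[{mnesia, [{dump_log_write_threshold, 50000},
           {dc_dump_limit, 40}]}].

Either way, the values must be in place before mnesia starts; setting them afterwards has no effect.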

OK, a quick summary of what these two parameters mean:

dc_dump_limit: controls how often disc_copies tables are dumped from memory to disk; a table dump is triggered when the transaction log grows beyond table size / dc_dump_limit, so a larger value means more frequent table dumps.

dump_log_write_threshold: the maximum number of writes to the transaction log before a new dump of the log is performed. (Its time-based counterpart, dump_log_time_threshold, dumps the log every 3 minutes by default.)
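
Because these parameters only take effect if set before mnesia starts, it is worth reading them back to confirm they took. A quick check in the Erlang shell, assuming the node was started with the flags above (command-line -mnesia Key Value pairs land in mnesia's application environment):

%% Assumes the node was started with the flags shown above.
1> application:get_env(mnesia, dump_log_write_threshold).
{ok,50000}
2> application:get_env(mnesia, dc_dump_limit).
{ok,40}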

 

Reposted from: https://www.cnblogs.com/whymaths/archive/2013/02/12/2910373.html
