Cassandra issue:
The following program floods Cassandra with writes from 300 concurrent threads:
python cassandra_stress.py -p 300 -n 1000000
Data estimate:
Three nodes, one process started per node, each process writing to Cassandra with 300 concurrent threads; each key is 1.916 KB; Write Latency: 0.014 ms.
Theoretical write rate for a single node:
--> 300 * (1 * 1000 / 0.014) * 1.916 KB ≈ 39.16 GB/s
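As a sanity check, the estimate can be reproduced with the numbers above. Note it is a theoretical peak: it assumes every one of the 300 threads completes a write every 0.014 ms, far beyond what the disks can actually absorb, which is exactly why the commitlog backs up.

```python
# Theoretical per-node write throughput implied by the figures above.
threads = 300
latency_ms = 0.014      # reported write latency per operation
key_size_kb = 1.916     # size of each key written

writes_per_sec = threads * (1000 / latency_ms)   # ops/s across all threads
rate_kb_per_sec = writes_per_sec * key_size_kb
rate_gb_per_sec = rate_kb_per_sec / 1024 / 1024  # KB -> GB

print(round(rate_gb_per_sec, 2))  # -> 39.16
```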
Collected results and verification:
Current commitlog size on disk:
/cassandra/data # du -sh *
34G commitlog
53G data
20K saved_caches
Count of commitlog files:
/cassandra/data/commitlog # ls -al | grep .log | grep -v .header | wc -l
268
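The same count can be taken programmatically. A minimal sketch (the directory path is the one from this deployment; in this Cassandra generation each `CommitLog-*.log` segment has a `.log.header` companion, which the filter skips just like the `grep -v .header` above):

```python
import os

def count_commitlog_segments(commitlog_dir):
    """Count CommitLog-*.log segments, ignoring .log.header companion files."""
    return sum(1 for f in os.listdir(commitlog_dir)
               if f.endswith(".log") and not f.endswith(".header"))

# On the node above this returned 268:
# count_commitlog_segments("/cassandra/data/commitlog")
```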
Problems observed:
1. Under heavy write load, memtables cannot be flushed to SSTables fast enough, so commitlog segments cannot be cleared and keep accumulating, at one point reaching 268 files, which eventually brought the node down.
? Need to pin down the exact crash condition.
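The two measurements above are mutually consistent: 34 GB spread over 268 segments works out to roughly 130 MB each, in line with a 128 MB commitlog rotation threshold (128 MB is assumed here as this version's default; worth confirming against the node's cassandra.yaml).

```python
# Cross-check: observed commitlog volume vs. observed segment count.
total_gb = 34     # du -sh output for the commitlog directory
segments = 268    # ls | wc -l output

mb_per_segment = total_gb * 1024 / segments
print(round(mb_per_segment, 1))  # -> 129.9, i.e. ~128 MB segments
```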
2. When Cassandra is started with a large backlog of commitlog files in the commitlog directory, it reports the virtual-memory error below.
-- Workaround:
ulimit -v unlimited
# /opt/galax/gcs/watchdog/script/shell/Cassandra-oper.sh start
user is root
Begin to start cassandra service.
Begin to check system environment
Cassandra home is: /opt/cassandra
Java home is: /usr/java/jdk1.6.0_24
Check system environment end.
cassandra is starting...
Exception encountered during startup.
java.io.IOError: java.io.IOException: Map failed
at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:172)
at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.complete(MmappedSegmentedFile.java:149)
at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:326)
at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:191)
at org.apache.cassandra.db.ColumnFamilyStore.&lt;init&gt;(ColumnFamilyStore.java:226)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:472)
at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:453)
at org.apache.cassandra.db.Table.initCf(Table.java:317)
at org.apache.cassandra.db.Table.&lt;init&gt;(Table.java:254)
at org.apache.cassandra.db.Table.open(Table.java:110)
at org.apache.cassandra.service.AbstractCassandraDaemon.setup(AbstractCassandraDaemon.java:160)
at org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:314)
at org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:79)
Caused by: java.io.IOException: Map failed
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:748)
at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:164)
... 12 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:745)
... 13 more
cassandra is starting...
Result:failed
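The `java.lang.OutOfMemoryError: Map failed` in the trace is mmap exhausting virtual address space, not heap: with mmap-based disk access, Cassandra memory-maps every data file it opens at startup, so a finite `ulimit -v` is quickly used up when many large files have accumulated, and `ulimit -v unlimited` lifts that cap. A small sketch to verify the limit the process actually sees:

```python
import resource

# Read the process's virtual address-space limit (what `ulimit -v` controls).
soft, hard = resource.getrlimit(resource.RLIMIT_AS)

if soft == resource.RLIM_INFINITY:
    print("virtual memory unlimited (matches `ulimit -v unlimited`)")
else:
    print(f"virtual memory capped at {soft} bytes; large mmap calls may fail")
```

Run this (or `ulimit -v` in the startup shell) before starting Cassandra to confirm the workaround took effect.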
From the ITPUB blog, link: http://blog.itpub.net/23937368/viewspace-1055368/. Please credit the source when republishing.