Run the command:
hbase org.apache.hadoop.hbase.PerformanceEvaluation
It returns:
[root@node1 /]# hbase org.apache.hadoop.hbase.PerformanceEvaluation
Java HotSpot(TM) 64-Bit Server VM warning: Using incremental CMS is deprecated and will likely be removed in a future release
19/04/08 15:08:51 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
Usage: java org.apache.hadoop.hbase.PerformanceEvaluation \
  <OPTIONS> [-D<property=value>]* <command> <nclients>

Options:
 nomapred        Run multiple clients using threads (rather than use mapreduce)
 rows            Rows each client runs. Default: 1048576
 size            Total size in GiB. Mutually exclusive with --rows. Default: 1.0.
 sampleRate      Execute test on a sample of total rows. Only supported by randomRead. Default: 1.0
 traceRate       Enable HTrace spans. Initiate tracing every N rows. Default: 0
 table           Alternate table name. Default: 'TestTable'
 multiGet        If >0, when doing RandomRead, perform multiple gets instead of single gets. Default: 0
 compress        Compression type to use (GZ, LZO, ...). Default: 'NONE'
 flushCommits    Used to determine if the test should flush the table. Default: false
 writeToWAL      Set writeToWAL on puts. Default: True
 autoFlush       Set autoFlush on htable. Default: False
 oneCon          all the threads share the same connection. Default: False
 presplit        Create presplit table. If a table with same name exists, it'll be deleted and recreated (instead of verifying count of its existing regions). Recommended for accurate perf analysis (see guide). Default: disabled
 inmemory        Tries to keep the HFiles of the CF inmemory as far as possible. Not guaranteed that reads are always served from memory. Default: false
 usetags         Writes tags along with KVs. Use with HFile V3. Default: false
 numoftags       Specify the no of tags that would be needed. This works only if usetags is true. Default: 1
 filterAll       Helps to filter out all the rows on the server side there by not returning any thing back to the client. Helps to check the server side performance. Uses FilterAllFilter internally.
 latency         Set to report operation latencies. Default: False
 bloomFilter     Bloom filter type, one of [NONE, ROW, ROWCOL]
 blockEncoding   Block encoding to use. Value should be one of [NONE, PREFIX, DIFF, FAST_DIFF, PREFIX_TREE]. Default: NONE
 valueSize       Pass value size to use: Default: 1000
 valueRandom     Set if we should vary value size between 0 and 'valueSize'; set on read for stats on size: Default: Not set.
 valueZipf       Set if we should vary value size between 0 and 'valueSize' in zipf form: Default: Not set.
 period          Report every 'period' rows: Default: opts.perClientRunRows / 10 = 104857
 multiGet        Batch gets together into groups of N. Only supported by randomRead. Default: disabled
 addColumns      Adds columns to scans/gets explicitly. Default: true
 replicas        Enable region replica testing. Defaults: 1.
 splitPolicy     Specify a custom RegionSplitPolicy for the table.
 randomSleep     Do a random sleep before each get between 0 and entered value. Defaults: 0
 columns         Columns to write per row. Default: 1
 caching         Scan caching to use. Default: 30

 Note: -D properties will be applied to the conf used.
  For example:
   -Dmapreduce.output.fileoutputformat.compress=true
   -Dmapreduce.task.timeout=60000

Command:
 append          Append on each row; clients overlap on keyspace so some concurrent operations
 checkAndDelete  CheckAndDelete on each row; clients overlap on keyspace so some concurrent operations
 checkAndMutate  CheckAndMutate on each row; clients overlap on keyspace so some concurrent operations
 checkAndPut     CheckAndPut on each row; clients overlap on keyspace so some concurrent operations
 filterScan      Run scan test using a filter to find a specific row based on it's value (make sure to use --rows=20)
 increment       Increment on each row; clients overlap on keyspace so some concurrent operations
 randomRead      Run random read test
 randomSeekScan  Run random seek and scan 100 test
 randomWrite     Run random write test
 scan            Run scan test (read every row)
 scanRange10     Run random seek scan with both start and stop row (max 10 rows)
 scanRange100    Run random seek scan with both start and stop row (max 100 rows)
 scanRange1000   Run random seek scan with both start and stop row (max 1000 rows)
 scanRange10000  Run random seek scan with both start and stop row (max 10000 rows)
 sequentialRead  Run sequential read test
 sequentialWrite Run sequential write test

Args:
 nclients        Integer. Required. Total number of clients (and HRegionServers) running. 1 <= value <= 500

Examples:
 To run a single client doing the default 1M sequentialWrites:
 $ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1
 To run 10 clients doing increments over ten rows:
 $ bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=10 --nomapred increment 10
Run the command:
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=100000 --presplit=100 sequentialWrite 100
It returns:
……
19/04/08 16:29:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.200.101.131:38670, server: node3/10.200.101.133:2181
19/04/08 16:29:03 INFO zookeeper.ClientCnxn: Session establishment complete on server node3/10.200.101.133:2181, sessionid = 0x3696fc9821c25bc, negotiated timeout = 60000
19/04/08 16:29:03 INFO zookeeper.ClientCnxn: Session establishment complete on server node3/10.200.101.133:2181, sessionid = 0x3696fc9821c25bb, negotiated timeout = 60000
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid22929.hprof ...
19/04/08 16:29:03 INFO zookeeper.ClientCnxn: Session establishment complete on server node5/10.200.101.135:2181, sessionid = 0x1696fc9820c2788, negotiated timeout = 60000
19/04/08 16:29:03 INFO zookeeper.ClientCnxn: Session establishment complete on server node3/10.200.101.133:2181, sessionid = 0x3696fc9821c25ba, negotiated timeout = 60000
19/04/08 16:29:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.200.101.131:53514, server: node4/10.200.101.134:2181
19/04/08 16:29:03 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /10.200.101.131:53512, server: node4/10.200.101.134:2181
19/04/08 16:29:03 INFO zookeeper.ClientCnxn: Session establishment complete on server node5/10.200.101.135:2181, sessionid = 0x1696fc9820c2787, negotiated timeout = 60000
Heap dump file created [264519686 bytes in 0.962 secs]
#
# java.lang.OutOfMemoryError: Java heap space
# -XX:OnOutOfMemoryError="kill -9 %p"
#   Executing /bin/sh -c "kill -9 22929"...
Killed
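The OutOfMemoryError above occurred in the PE client JVM itself, which was driving 100 writer threads. One possible mitigation (an assumption on my part, not something the original session did) is to enlarge the client-side heap before re-running; the bin/hbase launcher script reads the HBASE_HEAPSIZE environment variable (in MB) for this purpose:

```shell
# Hypothetical mitigation: give the PE client JVM a larger heap (value in MB)
# before re-running. bin/hbase reads HBASE_HEAPSIZE when building the java command line.
export HBASE_HEAPSIZE=4096
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=100000 --presplit=100 sequentialWrite 100
```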
To analyze memory usage, run the command free ; it returns:
[root@node1 ~]# free
              total        used        free      shared  buff/cache   available
Mem:       65398900    13711168    26692112      115096    24995620    50890860
Swap:      29200380           0    29200380
First line (Mem:):
total: total physical memory. 65398900 KB / 1024 ≈ 63866 MB / 1024 ≈ 62 GB, i.e. a nominal 64 GB of installed RAM.
total = used + free + buff/cache
available ≈ free + the reclaimable portion of buff/cache
buff: kernel buffers, mainly serving write I/O to block devices
cache: the page cache, mainly serving read I/O
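These identities can be checked directly against the numbers free reported (all values in KB):

```shell
# Values copied from the `free` output above (KB).
total=65398900; used=13711168; free_kb=26692112; buff_cache=24995620; available=50890860

# total = used + free + buff/cache holds exactly on this host:
[ $((used + free_kb + buff_cache)) -eq "$total" ] && echo "identity holds"

# total in GB (integer division): ~62 GB physical, i.e. nominally 64 GB installed
echo "total: $((total / 1024 / 1024)) GB"

# available exceeds free because most of buff/cache can be reclaimed on demand:
echo "reclaimable beyond free: $((available - free_kb)) KB"
```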
To see which processes are using memory, run the command ps aux ; the output below is trimmed to the big-data-related processes:
[root@node1 ~]# ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START    TIME COMMAND
root         1  0.0  0.0 193008  6016 ?     Ss   Mar06    1:25 /usr/lib/systemd/systemd --switched
apache    7416  0.0  0.0 250184  4512 ?     S    Apr08    0:00 /usr/sbin/httpd -DFOREGROUND
ntp       8384  0.0  0.0  29908  2128 ?     Ss   Mar06    0:09 /usr/sbin/ntpd -u ntp:ntp -g
clouder+ 10612  0.7  4.1 19854420 2721248 ? Ssl  Mar11  330:44 /usr/java/jdk1.7.0_67-cloudera/bin/
apache   13016  0.0  0.0 250184  4512 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
apache   13022  0.0  0.0 250184  4516 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
apache   13024  0.0  0.0 250184  4516 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
apache   13026  0.0  0.0 250184  4516 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
apache   13027  0.0  0.0 250184  4516 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
apache   15856  0.0  0.0 250184  4512 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
hbase    18167  0.0  0.7 2921660 502856 ?   Sl   Mar20    6:32 /usr/java/jdk1.8.0_131/bin/java -Dp
hbase    18169  0.0  1.0 3163740 707372 ?   Sl   Mar20   18:08 /usr/java/jdk1.8.0_131/bin/java -Dp
hbase    18175  0.0  0.0 354300  7768 ?     Sl   Mar20    0:00 python2.7 /usr/lib64/cmf/agent/buil
hbase    18240  0.0  0.0 354300  7776 ?     Sl   Mar20    0:00 python2.7 /usr/lib64/cmf/agent/buil
hbase    18339  0.1  0.0 114244  2028 ?     S    Mar20   30:25 /bin/bash /usr/lib64/cmf/service/hb
flume    19049  0.0  0.9 6981156 608600 ?   Sl   Mar20    6:32 /usr/java/jdk1.8.0_131/bin/java -Xm
flume    19084  0.0  0.0 354296  7772 ?     Sl   Mar20    0:00 python2.7 /usr/lib64/cmf/agent/buil
hive       697  0.0  3.3 10653904 2161868 ? Sl   Mar12   25:05 /usr/java/jdk1.8.0_131/bin/java -Xm
hive       705  0.0  0.0 354300  9812 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
oozie     2032  2.8  2.1 10948180 1378312 ? Sl   Mar12 1152:53 /usr/java/jdk1.8.0_131/bin/java -D
spark     2035  0.0  0.5 6388268 383484 ?   Sl   Mar12   13:58 /usr/java/jdk1.8.0_131/bin/java -cp
oozie     2043  0.0  0.0 354300  7768 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
spark     2083  0.0  0.0 354300  7772 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
mysql     2639  0.0  0.0 113252  1588 ?     Ss   Mar06    0:00 /bin/sh /usr/bin/mysqld_safe --base
mysql     3068  0.1  0.4 3488620 273256 ?   Sl   Mar06   83:58 /usr/libexec/mysqld --basedir=/usr
hue       3940  0.0  0.0 114912  5340 ?     S    Mar12    0:24 /usr/sbin/httpd -f /run/cloudera-sc
hue       3942  0.0  0.2 4274504 171760 ?   Sl   Mar12    8:06 python2.7 /opt/cloudera/parcels/CDH
hue       3948  0.0  0.0 354300  7776 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
hue       3977  0.0  0.0 354300  7784 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
hue       3986  0.0  0.0 534948  5496 ?     Sl   Mar12    1:39 /usr/sbin/httpd -f /run/cloudera-sc
hue       3987  0.0  0.0 534948  5360 ?     Sl   Mar12    1:41 /usr/sbin/httpd -f /run/cloudera-sc
hue       3989  0.0  0.0 797092  5716 ?     Sl   Mar12    1:39 /usr/sbin/httpd -f /run/cloudera-sc
rpc      21449  0.0  0.0  65080  1432 ?     Ss   Mar13    0:01 /sbin/rpcbind -w
hue      21535  0.0  0.0 534948  5616 ?     Sl   Mar13    1:34 /usr/sbin/httpd -f /run/cloudera-sc
hdfs     31262  0.1  2.6 6264072 1719644 ?  Sl   Mar12   66:33 /usr/java/jdk1.8.0_131/bin/java -Dp
hdfs     31268  0.0  0.0 354296  7772 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
hbase    31596  0.0  0.0 107960   364 ?     S    16:35    0:00 sleep 1
mapred   31805  0.0  1.0 3004416 660832 ?   Sl   Mar12   23:13 /usr/java/jdk1.8.0_131/bin/java -Dp
mapred   31968  0.0  0.0 354300  7780 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
yarn     32057  0.2  1.0 3198096 656728 ?   Sl   Mar12   90:15 /usr/java/jdk1.8.0_131/bin/java -Dp
yarn     32182  0.0  0.0 354300  7764 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
The two memory columns to read here:
VSZ: the virtual memory size of the process
RSS: the resident set size, i.e. the physical memory the process actually occupies
Run the command:
ps aux --sort -rss
This sorts processes by physical memory usage (RSS) in descending order; it returns the following (excerpt):
USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START    TIME COMMAND
clouder+ 10612  0.7  4.1 19854420 2721244 ? Ssl  Mar11  330:59 /usr/java/jdk1.7.0_67-cloudera/bin/
hive       697  0.0  3.3 10653904 2161868 ? Sl   Mar12   25:06 /usr/java/jdk1.8.0_131/bin/java -Xm
hdfs     31262  0.1  2.6 6264072 1719644 ?  Sl   Mar12   66:35 /usr/java/jdk1.8.0_131/bin/java -Dp
oozie     2032  2.8  2.1 10948180 1378308 ? Sl   Mar12 1153:34 /usr/java/jdk1.8.0_131/bin/java -D
root     11937  0.2  1.4 35422920 939500 pts/3 Sl+ 13:59  0:22 /usr/java/jdk1.8.0_131/bin/java -cp
hbase    18169  0.0  1.0 3163740 707460 ?   Sl   Mar20   18:09 /usr/java/jdk1.8.0_131/bin/java -Dp
mapred   31805  0.0  1.0 3004416 661020 ?   Sl   Mar12   23:15 /usr/java/jdk1.8.0_131/bin/java -Dp
yarn     32057  0.2  1.0 3198096 657084 ?   Sl   Mar12   90:18 /usr/java/jdk1.8.0_131/bin/java -Dp
flume    19049  0.0  0.9 6981156 608600 ?   Sl   Mar20    6:32 /usr/java/jdk1.8.0_131/bin/java -Xm
hbase    18167  0.0  0.7 2921660 502856 ?   Sl   Mar20    6:32 /usr/java/jdk1.8.0_131/bin/java -Dp
spark     2035  0.0  0.5 6388268 383484 ?   Sl   Mar12   13:58 /usr/java/jdk1.8.0_131/bin/java -cp
gnome-i+  5610  0.0  0.4 1917780 285324 ?   Sl   Mar06    9:56 gnome-shell --mode=initial-setup
mysql     3068  0.1  0.4 3488620 273256 ?   Sl   Mar06   84:01 /usr/libexec/mysqld --basedir=/usr
hue       3942  0.0  0.2 4274504 171760 ?   Sl   Mar12    8:06 python2.7 /opt/cloudera/parcels/CDH
gnome-i+  5728  0.0  0.1 1224000 122892 ?   Sl   Mar06    3:38 /usr/libexec/gnome-initial-setup
root     24660  1.2  0.1 2889520 82528 ?    Ssl  Mar07  605:32 python2.7 /usr/lib64/cmf/agent/buil
root     10109  0.2  0.0 3251780 46888 ?    S<l  Mar07  114:22 /root/vpnserver/vpnserver execsvc
root     24784  0.0  0.0 687820 45908 ?     Sl   Mar07   17:33 python2.7 /usr/lib64/cmf/agent/buil
root      4356  0.0  0.0 370232 27256 tty1  Ssl+ Mar06    0:44 /usr/bin/Xorg :0 -background none -
polkitd   2008  0.0  0.0 541532 22864 ?     Ssl  Mar06    1:26 /usr/lib/polkit-1/polkitd --no-debu
gnome-i+  5549  0.0  0.0 944208 21060 ?     Sl   Mar06    0:04 /usr/libexec/gnome-settings-daemon
gnome-i+  5707  0.0  0.0 713768 20912 ?     Sl   Mar06    0:00 /usr/libexec/goa-daemon
root      2487  0.0  0.0 553156 16576 ?     Ssl  Mar06    1:40 /usr/bin/python -Es /usr/sbin/tuned
root     24723  0.0  0.0 223212 14680 ?     Ss   Mar07   20:40 /usr/lib64/cmf/agent/build/env/bin/
root      4272  0.0  0.0 614780 14256 ?     Ssl  Mar06    0:00 /usr/sbin/libvirtd
geoclue   5946  0.0  0.0 423416 12652 ?     Ssl  Mar06    0:03 /usr/libexec/geoclue -t 5
root     24724  0.0  0.0 212528 12252 ?     S    Mar07    0:00 python2.7 /usr/lib64/cmf/agent/buil
root      2067  0.0  0.0 440068 10784 ?     Ssl  Mar06    0:06 /usr/sbin/NetworkManager --no-daemo
hive       705  0.0  0.0 354300  9812 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
gnome-i+  5701  0.0  0.0 573252  9564 ?     Sl   Mar06    1:03 /usr/libexec/caribou
gnome-i+  5705  0.0  0.0 470024  9432 ?     Sl   Mar06    0:00 /usr/libexec/ibus-x11 --kill-daemon
gnome-i+  5511  0.0  0.0 562740  9048 ?     Ssl  Mar06    0:04 /usr/bin/gnome-session --autostart
root       749  0.0  0.0  36944  8076 ?     Ss   Mar06    0:06 /usr/lib/systemd/systemd-journald
colord    5613  0.0  0.0 404208  7956 ?     Ssl  Mar06    0:00 /usr/libexec/colord
hue       3977  0.0  0.0 354300  7784 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
mapred   31968  0.0  0.0 354300  7780 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
hue       3948  0.0  0.0 354300  7776 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
hbase    18240  0.0  0.0 354300  7776 ?     Sl   Mar20    0:00 python2.7 /usr/lib64/cmf/agent/buil
spark     2083  0.0  0.0 354300  7772 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
flume    19084  0.0  0.0 354296  7772 ?     Sl   Mar20    0:00 python2.7 /usr/lib64/cmf/agent/buil
hdfs     31268  0.0  0.0 354296  7772 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
oozie     2043  0.0  0.0 354300  7768 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
hbase    18175  0.0  0.0 354300  7768 ?     Sl   Mar20    0:00 python2.7 /usr/lib64/cmf/agent/buil
yarn     32182  0.0  0.0 354300  7764 ?     Sl   Mar12    0:00 python2.7 /usr/lib64/cmf/agent/buil
gnome-i+  5669  0.0  0.0 470304  7756 ?     Sl   Mar06    0:00 ibus-daemon --xim --panel disable
root      5746  0.0  0.0 371100  7568 ?     Ssl  Mar06    0:07 /usr/lib/udisks2/udisksd --no-debug
gnome-i+  5681  0.0  0.0 469496  7536 ?     Sl   Mar06    0:04 /usr/libexec/mission-control-5
root     17723  0.0  0.0 247964  7436 ?     Ss   Mar11    0:31 /usr/sbin/httpd -DFOREGROUND
root     11893  0.0  0.0 176904  7376 pts/3 S+   13:59    0:00 python /opt/cloudera/parcels/CLABS_
root     15124  0.0  0.0 150052  7300 ?     Ss   14:05    0:00 sshd: root@notty
root      5725  0.0  0.0 405176  7288 ?     Ssl  Mar06    0:13 /usr/libexec/packagekitd
root      2495  0.0  0.0 246484  6604 ?     Ssl  Mar06    0:03 /usr/sbin/rsyslogd -n
gnome-i+  5775  0.0  0.0 406024  6488 ?     Sl   Mar06    0:54 /usr/libexec/goa-identity-service
root      5558  0.0  0.0 371916  6292 ?     Ssl  Mar06    0:03 /usr/libexec/upowerd
root      2520  0.0  0.0  34712  6088 ?     S<Ls Mar06    0:00 /usr/sbin/iscsid
root         1  0.0  0.0 193008  6016 ?     Ss   Mar06    1:25 /usr/lib/systemd/systemd --switched
gnome-i+  5696  0.0  0.0 392948  5900 ?     Sl   Mar06    0:00 /usr/libexec/ibus-dconf
root     21179  0.0  0.0 148536  5896 ?     Ss   14:16    0:00 sshd: root@pts/6
root      2483  0.0  0.0 148148  5892 ?     Ss   13:43    0:00 sshd: root@pts/0
root     31943  0.0  0.0 148280  5804 ?     Ss   13:39    0:01 sshd: root@pts/4
root     31781  0.0  0.0 148280  5724 ?     Ss   13:39    0:00 sshd: root@pts/3
hue       3989  0.0  0.0 797092  5716 ?     Sl   Mar12    1:39 /usr/sbin/httpd -f /run/cloudera-sc
root      5504  0.0  0.0 358696  5648 ?     Sl   Mar06    0:07 gdm-session-worker [pam/gdm-launch-
hue      21535  0.0  0.0 534948  5616 ?     Sl   Mar13    1:34 /usr/sbin/httpd -f /run/cloudera-sc
hue       3986  0.0  0.0 534948  5496 ?     Sl   Mar12    1:39 /usr/sbin/httpd -f /run/cloudera-sc
root      2034  0.0  0.0 419324  5472 ?     Ssl  Mar06    0:01 /usr/sbin/ModemManager
gnome-i+  5852  0.0  0.0 387672  5420 ?     Sl   Mar06    0:00 gnome-keyring-daemon --unlock
hue       3987  0.0  0.0 534948  5360 ?     Sl   Mar12    1:41 /usr/sbin/httpd -f /run/cloudera-sc
root      2011  0.0  0.0 212564  5348 ?     Ss   Mar06    0:00 /usr/sbin/abrtd -d -s
hue       3940  0.0  0.0 114912  5340 ?     S    Mar12    0:24 /usr/sbin/httpd -f /run/cloudera-sc
gnome-i+  5578  0.0  0.0 380360  5236 ?     Sl   Mar06    0:00 /usr/libexec/gvfsd
gnome-i+  5815  0.0  0.0 495600  5096 ?     Sl   Mar06    0:00 /usr/libexec/gvfs-afc-volume-monito
gnome-i+  5741  0.0  0.0 400528  5084 ?     Sl   Mar06    0:03 /usr/libexec/gvfs-udisks2-volume-mo
root      5945  0.0  0.0 320716  4824 ?     Ssl  Mar06    0:19 /usr/lib64/realmd/realmd
root      2014  0.0  0.0 210264  4556 ?     Ss   Mar06    0:00 /usr/bin/abrt-watch-log -F BUG: WAR
root      2017  0.0  0.0 210264  4552 ?     Ss   Mar06    0:00 /usr/bin/abrt-watch-log -F Backtrac
apache   13022  0.0  0.0 250184  4516 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
apache   13024  0.0  0.0 250184  4516 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
apache   13026  0.0  0.0 250184  4516 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
apache   13027  0.0  0.0 250184  4516 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
apache    7416  0.0  0.0 250184  4512 ?     S    Apr08    0:00 /usr/sbin/httpd -DFOREGROUND
apache   13016  0.0  0.0 250184  4512 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
apache   15856  0.0  0.0 250184  4512 ?     S    Apr07    0:00 /usr/sbin/httpd -DFOREGROUND
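To attribute the usage that free reported to individual services, the per-process RSS values can be summed per user. A small pipeline like the following (a sketch, not part of the original session) does that:

```shell
# Sum resident memory (RSS, column 6, in KB) for each user, largest first.
ps aux | awk 'NR > 1 { rss[$1] += $6 }
              END   { for (u in rss) printf "%-10s %12d KB\n", u, rss[u] }' \
  | sort -k2 -rn | head
```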
Run the command jps to list the running JVM processes:
[root@node1 ~]# jps
2032 Bootstrap
11937 SqlLine
2035 HistoryServer
10612 Main
18167 RESTServer
18169 HMaster
32057 ResourceManager
697 RunJar
19049 Application
15789 Jps
31805 JobHistoryServer
31262 NameNode
Run the command jmap -heap 31262 to inspect the heap of the NameNode process (PID 31262); it returns:
[root@node1 ~]# jmap -heap 31262
Attaching to process ID 31262, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.131-b11

using parallel threads in the new generation.
using thread-local object allocation.
Concurrent Mark-Sweep GC

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 4294967296 (4096.0MB)
   NewSize                  = 1431633920 (1365.3125MB)
   MaxNewSize               = 1431633920 (1365.3125MB)
   OldSize                  = 2863333376 (2730.6875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 0 (0.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 1288503296 (1228.8125MB)
   used     = 127735648 (121.81820678710938MB)
   free     = 1160767648 (1106.9942932128906MB)
   9.913490201890799% used
Eden Space:
   capacity = 1145372672 (1092.3125MB)
   used     = 125860384 (120.02981567382812MB)
   free     = 1019512288 (972.2826843261719MB)
   10.988596731597243% used
From Space:
   capacity = 143130624 (136.5MB)
   used     = 1875264 (1.78839111328125MB)
   free     = 141255360 (134.71160888671875MB)
   1.3101766397664836% used
To Space:
   capacity = 143130624 (136.5MB)
   used     = 0 (0.0MB)
   free     = 143130624 (136.5MB)
   0.0% used
concurrent mark-sweep generation:
   capacity = 2863333376 (2730.6875MB)
   used     = 76209560 (72.6791000366211MB)
   free     = 2787123816 (2658.008399963379MB)
   2.661567829955683% used

20921 interned Strings occupying 2090424 bytes.
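The "% used" figures in the jmap report are simply used/capacity; they can be cross-checked from the byte values above:

```shell
# Cross-check the "% used" figures jmap printed (used / capacity, in bytes).
awk 'BEGIN {
  printf "New Generation: %.2f%%\n", 100 * 127735648 / 1288503296
  printf "Eden Space:     %.2f%%\n", 100 * 125860384 / 1145372672
  printf "CMS old gen:    %.2f%%\n", 100 * 76209560  / 2863333376
  # MaxHeapSize 4294967296 bytes is exactly 4096 MB, matching the report
  printf "MaxHeapSize:    %d MB\n", 4294967296 / 1024 / 1024
}'
```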
Shrink the parameters of the previous test to reduce resource usage, and run:
hbase org.apache.hadoop.hbase.PerformanceEvaluation --nomapred --rows=1000 --presplit=100 sequentialWrite 10
This test runs PE in non-MapReduce mode (--nomapred), i.e. it spawns client threads instead of a MapReduce job. The command is sequentialWrite (sequential writes), and the trailing 10 means 10 threads perform the writes. --rows=1000 makes each thread write 1000 rows. --presplit sets the number of pre-split regions for the table; always set it when benchmarking, otherwise all reads and writes land on a single region, which severely distorts the results. All PE output is written directly to the log file, whose location depends on the HBase logging configuration. When the run finishes, PE prints the latency statistics for each thread separately; below is the result for one of the threads:
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Latency (us) : mean=56.74, min=8.00, max=347.00, stdDev=84.51, 50th=25.00, 75th=35.75, 95th=283.00, 99th=305.98, 99.9th=346.99, 99.99th=347.00, 99.999th=347.00
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Num measures (latency) : 1000
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Mean = 56.74 Min = 8.00 Max = 347.00 StdDev = 84.51 50th = 25.00 75th = 35.75 95th = 283.00 99th = 305.98 99.9th = 346.99 99.99th = 347.00 99.999th = 347.00
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: ValueSize (bytes) : mean=0.00, min=0.00, max=0.00, stdDev=0.00, 50th=0.00, 75th=0.00, 95th=0.00, 99th=0.00, 99.9th=0.00, 99.99th=0.00, 99.999th=0.00
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Num measures (ValueSize): 0
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Mean = 0.00 Min = 0.00 Max = 0.00 StdDev = 0.00 50th = 0.00 75th = 0.00 95th = 0.00 99th = 0.00 99.9th = 0.00 99.99th = 0.00 99.999th = 0.00
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Test : SequentialWriteTest, Thread : TestClient-3
19/04/09 18:29:32 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1696fc9820c336b
followed by:
19/04/09 18:29:32 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1696fc9820c3368
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 334ms at offset 9000 for 1000 rows (2.94 MB/s)
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished TestClient-9 in 334ms over 1000 rows
19/04/09 18:29:32 INFO zookeeper.ZooKeeper: Session: 0x1696fc9820c3368 closed
19/04/09 18:29:32 INFO zookeeper.ClientCnxn: EventThread shut down
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 324ms at offset 6000 for 1000 rows (3.03 MB/s)
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished TestClient-6 in 324ms over 1000 rows
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 331ms at offset 0 for 1000 rows (2.97 MB/s)
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished TestClient-0 in 331ms over 1000 rows
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 336ms at offset 2000 for 1000 rows (2.93 MB/s)
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished TestClient-2 in 336ms over 1000 rows
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 335ms at offset 4000 for 1000 rows (2.94 MB/s)
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished TestClient-4 in 335ms over 1000 rows
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished class org.apache.hadoop.hbase.PerformanceEvaluation$SequentialWriteTest in 342ms at offset 8000 for 1000 rows (2.87 MB/s)
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: Finished TestClient-8 in 342ms over 1000 rows
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: [SequentialWriteTest] Summary of timings (ms): [331, 318, 336, 326, 335, 314, 324, 323, 342, 334]
19/04/09 18:29:32 INFO hbase.PerformanceEvaluation: [SequentialWriteTest] Min: 314ms Max: 342ms Avg: 328ms
19/04/09 18:29:32 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x3696fc9821c31b9
19/04/09 18:29:32 INFO zookeeper.ZooKeeper: Session: 0x3696fc9821c31b9 closed
19/04/09 18:29:32 INFO zookeeper.ClientCnxn: EventThread shut down
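The final summary line can be recomputed from the raw per-thread timings PE printed just before it:

```shell
# Recompute PE's summary from the raw per-thread runtimes it printed (ms).
echo "331 318 336 326 335 314 324 323 342 334" | awk '{
  min = max = $1; sum = 0
  for (i = 1; i <= NF; i++) {
    if ($i < min) min = $i
    if ($i > max) max = $i
    sum += $i
  }
  printf "Min: %dms Max: %dms Avg: %dms\n", min, max, sum / NF
}'
# -> Min: 314ms Max: 342ms Avg: 328ms, matching PE's own summary line
```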