Redis distributed cache: installation and performance testing, plus fixing the make build error ("make[3]: Entering directory ... gcc: command not found")

1. Install on CentOS

Prepare the installation package (redis-3.0.4.tar.gz).
2. Extract Redis




 
  
  [cevent@hadoop213 redis-3.0.4]$ tar -zxvf redis-3.0.4.tar.gz -C /opt/module/
   
  [cevent@hadoop213 soft]$ cd /opt/module/
  [cevent@hadoop213 module]$ ll
  总用量 12
  drwxr-xr-x. 12 cevent cevent 4096 7月   1 13:43 hadoop-2.7.2
  drwxr-xr-x.  8 cevent cevent 4096 3月  24 09:14 jdk1.7.0_79
  drwxrwxr-x.  6 cevent cevent 4096 9月   8 2015 redis-3.0.4
  [cevent@hadoop213 module]$ cd redis-3.0.4/
  [cevent@hadoop213 redis-3.0.4]$ ll
  总用量 148
  -rw-rw-r--.  1 cevent cevent 31391 9月   8 2015 00-RELEASENOTES
  -rw-rw-r--.  1 cevent cevent    53 9月   8 2015 BUGS
  -rw-rw-r--.  1 cevent cevent  1439 9月   8 2015 CONTRIBUTING
  -rw-rw-r--.  1 cevent cevent  1487 9月   8 2015 COPYING
  drwxrwxr-x.  6 cevent cevent  4096 9月   8 2015 deps
  -rw-rw-r--.  1 cevent cevent    11 9月   8 2015 INSTALL
  -rw-rw-r--.  1 cevent cevent   151 9月   8 2015 Makefile
  -rw-rw-r--.  1 cevent cevent  4223 9月   8 2015 MANIFESTO
  -rw-rw-r--.  1 cevent cevent  5201 9月   8 2015 README
  -rw-rw-r--.  1 cevent cevent 41403 9月   8 2015 redis.conf
  -rwxrwxr-x.  1 cevent cevent   271 9月   8 2015 runtest
  -rwxrwxr-x.  1 cevent cevent   280 9月   8 2015 runtest-cluster
  -rwxrwxr-x.  1 cevent cevent   281 9月   8 2015 runtest-sentinel
  -rw-rw-r--.  1 cevent cevent  7109 9月   8 2015 sentinel.conf
  drwxrwxr-x.  2 cevent cevent  4096 9月   8 2015 src
  drwxrwxr-x. 10 cevent cevent  4096 9月   8 2015 tests
  drwxrwxr-x.  5 cevent cevent  4096 9月   8 2015 utils
   
  
 


3. make fails (gcc: command not found)




 
  
  [cevent@hadoop213 redis-3.0.4]$ make
  cd src && make all
  make[1]: Entering directory `/opt/module/redis-3.0.4/src'
  rm -rf redis-server redis-sentinel redis-cli redis-benchmark redis-check-dump redis-check-aof *.o *.gcda *.gcno *.gcov redis.info lcov-html
  (cd ../deps && make distclean)
  make[2]: Entering directory `/opt/module/redis-3.0.4/deps'
  (cd hiredis && make clean) > /dev/null || true
  (cd linenoise && make clean) > /dev/null || true
  (cd lua && make clean) > /dev/null || true
  (cd jemalloc && [ -f Makefile ] && make distclean) > /dev/null || true
  (rm -f .make-*)
  make[2]: Leaving directory `/opt/module/redis-3.0.4/deps'
  (rm -f .make-*)
  echo STD=-std=c99 -pedantic >> .make-settings
  echo WARN=-Wall -W >> .make-settings
  echo OPT=-O2 >> .make-settings
  echo MALLOC=jemalloc >> .make-settings
  echo CFLAGS= >> .make-settings
  echo LDFLAGS= >> .make-settings
  echo REDIS_CFLAGS= >> .make-settings
  echo REDIS_LDFLAGS= >> .make-settings
  echo PREV_FINAL_CFLAGS=-std=c99 -pedantic -Wall -W -O2 -g -ggdb -I../deps/hiredis -I../deps/linenoise -I../deps/lua/src -DUSE_JEMALLOC -I../deps/jemalloc/include >> .make-settings
  echo PREV_FINAL_LDFLAGS=  -g -ggdb -rdynamic >> .make-settings
  (cd ../deps && make hiredis linenoise lua jemalloc)
  make[2]: Entering directory `/opt/module/redis-3.0.4/deps'
  (cd hiredis && make clean) > /dev/null || true
  (cd linenoise && make clean) > /dev/null || true
  (cd lua && make clean) > /dev/null || true
  (cd jemalloc && [ -f Makefile ] && make distclean) > /dev/null || true
  (rm -f .make-*)
  (echo "" > .make-ldflags)
  (echo "" > .make-cflags)
  MAKE hiredis
  cd hiredis && make static
  make[3]: Entering directory `/opt/module/redis-3.0.4/deps/hiredis'
  gcc -std=c99 -pedantic -c -O3 -fPIC -Wall -W -Wstrict-prototypes -Wwrite-strings -g -ggdb net.c
  make[3]: gcc:命令未找到
  make[3]: *** [net.o] 错误 127
  make[3]: Leaving directory `/opt/module/redis-3.0.4/deps/hiredis'
  make[2]: *** [hiredis] 错误 2
  make[2]: Leaving directory `/opt/module/redis-3.0.4/deps'
  make[1]: [persist-settings] 错误 2 (忽略)
      CC adlist.o
  /bin/sh: cc: command not found
  make[1]: *** [adlist.o] 错误 127
  make[1]: Leaving directory `/opt/module/redis-3.0.4/src'
  make: *** [all] 错误 2
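The root cause is right there in the log: neither gcc nor cc is installed, so none of the C sources can be compiled. A quick pre-flight check before rebuilding (a minimal sketch, not from the original post):

  which gcc cc || echo "C compiler missing - install gcc first (see the following sections)"
  gcc --version 2>/dev/null || echo "gcc not found"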
  
 


4. About GCC




 
  
gcc is the compiler program on Linux and the standard tool for compiling C code.
GCC (GNU Compiler Collection) is the compiler family provided by the GNU (GNU's Not Unix) project. It has front ends for C, C++, Objective-C, Fortran, Java, Ada and other languages, and it runs on virtually every current hardware platform, including x86, x86-64, IA-64, PowerPC, SPARC and Alpha. Because of this breadth, and because of the efficiency of the code it generates, GCC is the compiler of choice for most free-software development. To most programmers a compiler is just a tool, and few people outside its developers and maintainers follow its evolution, yet GCC's influence is so large that its performance improvements can speed up essentially all free software, and the changes in its internal structure reflect the direction of modern compiler design.
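As a quick sanity check that a working gcc is available (a minimal sketch, assuming gcc is already on the PATH; hello.c is just a throwaway file name):

  echo 'int main(void){ return 0; }' > /tmp/hello.c
  gcc -o /tmp/hello /tmp/hello.c && /tmp/hello && echo "gcc works"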
   
  
 


5. Configure the yum repository




 
  
  [cevent@hadoop213 ~]$ cd /etc/yum.repos.d/
  [cevent@hadoop213 yum.repos.d]$ ll
  总用量 28
  -rw-r--r--. 1 root root 1881 3月  14 10:02 CentOS-Base.repo
  -rw-r--r--. 1 root root 1991 5月  19 2016 CentOS-Base.repo.bak
  -rw-r--r--. 1 root root  647 5月  19 2016 CentOS-Debuginfo.repo
  -rw-r--r--. 1 root root  289 5月  19 2016 CentOS-fasttrack.repo
  -rw-r--r--. 1 root root  630 5月  19 2016 CentOS-Media.repo
  -rw-r--r--. 1 root root 6259 5月  19 2016 CentOS-Vault.repo
  [cevent@hadoop213 yum.repos.d]$ sudo vi CentOS-Base.repo
  [sudo] password for cevent:
   
  # CentOS-Base.repo
  #
  # The mirror system uses the connecting IP address of the client and the
  # update status of each mirror to pick mirrors that are updated to and
  # geographically close to the client.  You should use this for CentOS updates
  # unless you are manually picking other mirrors.
  #
  # If the mirrorlist= does not work for you, as a fall back you can try the
  # remarked out baseurl= line instead.
  #
  #
  [base]
  name=CentOS-$releasever - Base - 163.com
  baseurl=http://mirrors.163.com/centos/$releasever/os/$basearch/
  #mirrorlist=file:///mnt/cdrom
  gpgcheck=1
  enabled=1
  gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6
   
  #released updates
  [updates]
  name=CentOS-$releasever - Updates - 163.com
  baseurl=http://mirrors.163.com/centos/$releasever/updates/$basearch/
  #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
  gpgcheck=1
  gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6
   
  #additional packages that may be useful
  [extras]
  name=CentOS-$releasever - Extras - 163.com
  baseurl=http://mirrors.163.com/centos/$releasever/extras/$basearch/
  #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
  gpgcheck=1
  gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6
   
  #additional packages that extend functionality of existing packages
  [centosplus]
  name=CentOS-$releasever - Plus - 163.com
  baseurl=http://mirrors.163.com/centos/$releasever/centosplus/$basearch/
  #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
  gpgcheck=1
  enabled=0
  gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6
   
  #contrib - packages by Centos Users
  [contrib]
  name=CentOS-$releasever - Contrib - 163.com
  baseurl=http://mirrors.163.com/centos/$releasever/contrib/$basearch/
  #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=contrib
  gpgcheck=1
  enabled=0
  gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-6
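After saving the repo file it is worth rebuilding the yum metadata cache and confirming the 163.com mirrors answer (a minimal sketch; the exact repo names listed depend on the file above):

  sudo yum clean all        # drop metadata cached from the old repo definitions
  sudo yum makecache        # fetch fresh metadata from the mirrors configured above
  yum repolist enabled      # should list base, updates, extras ...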
   
  
 


6. Install GCC with yum




 
  
  [cevent@hadoop213 CentOS_6.8_Final]$ yum -y install gcc
  已加载插件:fastestmirror, refresh-packagekit, security
  你需要以 root 身份执行此命令。
  [cevent@hadoop213 CentOS_6.8_Final]$ sudo yum -y install gcc
  [sudo] password for cevent:
  已加载插件:fastestmirror, refresh-packagekit, security
  设置安装进程
  Loading mirror speeds from cached hostfile
  base                                                     | 3.7 kB     00:00
  base/primary_db                                          | 4.7 MB     00:01
  extras                                                   | 3.4 kB     00:00
  updates                                                  | 3.4 kB     00:00
  解决依赖关系
  --> 执行事务检查
  ---> Package gcc.x86_64 0:4.4.7-23.el6 will be 安装
  --> 处理依赖关系 libgomp = 4.4.7-23.el6,它被软件包 gcc-4.4.7-23.el6.x86_64 需要
  --> 处理依赖关系 cpp = 4.4.7-23.el6,它被软件包 gcc-4.4.7-23.el6.x86_64 需要
  --> 处理依赖关系 libgcc >= 4.4.7-23.el6,它被软件包 gcc-4.4.7-23.el6.x86_64 需要
  --> 处理依赖关系 cloog-ppl >= 0.15,它被软件包 gcc-4.4.7-23.el6.x86_64 需要
  --> 执行事务检查
  作为依赖被安装:
    cloog-ppl.x86_64 0:0.15.7-1.2.el6          cpp.x86_64 0:4.4.7-23.el6
    mpfr.x86_64 0:2.4.1-6.el6                  ppl.x86_64 0:0.10.2-11.el6
  作为依赖被升级:
    libgcc.i686 0:4.4.7-23.el6                 libgcc.x86_64 0:4.4.7-23.el6
    libgomp.x86_64 0:4.4.7-23.el6
  完毕!

  [cevent@hadoop213 CentOS_6.8_Final]$ yum -y install gcc-c++
  已加载插件:fastestmirror, refresh-packagekit, security
  你需要以 root 身份执行此命令。
  [cevent@hadoop213 CentOS_6.8_Final]$ sudo yum -y install gcc-c++
  已加载插件:fastestmirror, refresh-packagekit, security
  设置安装进程

  已安装:
    gcc-c++.x86_64 0:4.4.7-23.el6

  作为依赖被安装:
    libstdc++-devel.x86_64 0:4.4.7-23.el6

  作为依赖被升级:
    libstdc++.x86_64 0:4.4.7-23.el6

  完毕!

  

  
  [cevent@hadoop213 CentOS_6.8_Final]$ gcc -v
  使用内建 specs。
  目标:x86_64-redhat-linux
  配置为:../configure --prefix=/usr --mandir=/usr/share/man
  --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla
  --enable-bootstrap --enable-shared --enable-threads=posix
  --enable-checking=release --with-system-zlib --enable-__cxa_atexit
  --disable-libunwind-exceptions --enable-gnu-unique-object
  --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk
  --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre
  --enable-libgcj-multifile --enable-java-maintainer-mode
  --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib
  --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686
  --build=x86_64-redhat-linux
  线程模型:posix
  gcc 版本 4.4.7 20120313 (Red Hat 4.4.7-23) (GCC)
   
  
 


7. Run make distclean to clean up the failed build, then rebuild and install




 
  
  [cevent@hadoop213 redis-3.0.4]$ make distclean
  cd src && make distclean
  make[1]: Entering directory `/opt/module/redis-3.0.4/src'
  rm -rf redis-server redis-sentinel redis-cli redis-benchmark redis-check-dump redis-check-aof *.o *.gcda *.gcno *.gcov redis.info lcov-html
  (cd ../deps && make distclean)
  make[2]: Entering directory `/opt/module/redis-3.0.4/deps'
  (cd hiredis && make clean) > /dev/null || true
  (cd linenoise && make clean) > /dev/null || true
  (cd lua && make clean) > /dev/null || true
  (cd jemalloc && [ -f Makefile ] && make distclean) > /dev/null || true
  (rm -f .make-*)
  make[2]: Leaving directory `/opt/module/redis-3.0.4/deps'
  (rm -f .make-*)
  make[1]: Leaving directory `/opt/module/redis-3.0.4/src'

  [cevent@hadoop213 redis-3.0.4]$ make    rebuild Redis
      LINK redis-server
      INSTALL redis-sentinel
      CC redis-cli.o
      LINK redis-cli
      CC redis-benchmark.o
      LINK redis-benchmark
      CC redis-check-dump.o
      LINK redis-check-dump
      CC redis-check-aof.o
      LINK redis-check-aof

  Hint: It's a good idea to run 'make test' ;)

  make[1]: Leaving directory `/opt/module/redis-3.0.4/src'

  [cevent@hadoop213 redis-3.0.4]$ make install    finally run make install to install the binaries
  cd src && make install
  make[1]: Entering directory `/opt/module/redis-3.0.4/src'

  Hint: It's a good idea to run 'make test' ;)

      INSTALL install
      INSTALL install
      INSTALL install
      INSTALL install
      INSTALL install
  make[1]: Leaving directory `/opt/module/redis-3.0.4/src'
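By default make install copies the binaries into /usr/local/bin. If you would rather keep them inside the Redis directory, the Redis Makefile honours a PREFIX variable; a minimal sketch (the target path is only an example):

  cd /opt/module/redis-3.0.4
  make PREFIX=/opt/module/redis-3.0.4 install    # binaries end up in /opt/module/redis-3.0.4/bin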
   
  
 


8. Default location of the installed Redis binaries




 
  
  [cevent@hadoop213 ~]$ cd /usr/local/bin/
  [cevent@hadoop213 bin]$ ll
  总用量 15464
  -rwxr-xr-x. 1 cevent cevent 4589179 7月   1 17:56 redis-benchmark
  -rwxr-xr-x. 1 cevent cevent   22225 7月   1 17:56 redis-check-aof
  -rwxr-xr-x. 1 cevent cevent   45443 7月   1 17:56 redis-check-dump
  -rwxr-xr-x. 1 cevent cevent 4693138 7月   1 17:56 redis-cli
  lrwxrwxrwx. 1 cevent cevent      12 7月   1 17:56 redis-sentinel -> redis-server
  -rwxr-xr-x. 1 cevent cevent 6466413 7月   1 17:56 redis-server
  -rwxrwxrwx. 1 cevent cevent     316 7月   1 13:36 xcall
  -rwxrwxrwx. 1 cevent cevent     842 7月   1 13:33 xsync
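Since /usr/local/bin is normally already on the PATH, the Redis commands can be run from any directory; a quick check (a minimal sketch):

  which redis-server redis-cli    # confirm the binaries resolve via the PATH
  redis-server --version          # print the compiled Redis version (v=3.0.4)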
   
  
 


9. Edit redis.conf




 
  
vi tip: jump to the end of the current line with Shift+$.
   
  # Redis configuration file example

  # Note on units: when memory size is needed, it is possible to specify
  # it in the usual form of 1k 5GB 4M and so forth:
  #
  # 1k => 1000 bytes
  # 1kb => 1024 bytes
  # 1m => 1000000 bytes
  # 1mb => 1024*1024 bytes
  # 1g => 1000000000 bytes
  # 1gb => 1024*1024*1024 bytes
  #
  # units are case insensitive so 1GB 1Gb 1gB are all the same.

  ################################## INCLUDES ###################################

  # Include one or more other config files here.  This is useful if you
  # have a standard template that goes to all Redis servers but also need
  # to customize a few per-server settings.  Include files can include
  # other files, so use this wisely.
  #
  # Notice option "include" won't be rewritten by command "CONFIG REWRITE"
  # from admin or Redis Sentinel. Since Redis always uses the last processed
  # line as value of a configuration directive, you'd better put includes
  # at the beginning of this file to avoid overwriting config change at runtime.
  #
  # If instead you are interested in using includes to override configuration
  # options, it is better to use include as the last line.
  #
  # include /path/to/local.conf
  # include /path/to/other.conf

  ################################ GENERAL  #####################################

  # By default Redis does not run as a daemon. Use 'yes' if you need it.
  # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
  daemonize yes

  # When running daemonized, Redis writes a pid file in /var/run/redis.pid by
  # default. You can specify a custom pid file location here.
  pidfile /var/run/redis.pid

  # Accept connections on the specified port, default is 6379.
  # If port 0 is specified Redis will not listen on a TCP socket.
  port 6379    (the default port)
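The only change made to the file above is daemonize yes, so the server detaches into the background when started. A minimal sketch of making that edit non-interactively (assuming the stock line daemonize no is still present):

  cd /opt/module/redis-3.0.4
  sed -i 's/^daemonize no/daemonize yes/' redis.conf    # flip the daemonize switch
  grep '^daemonize' redis.conf                          # should now print: daemonize yes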
   
   
   
  [cevent@hadoop213 redis-3.0.4]$ ps -ef | grep redis    check whether Redis is running
  cevent    3081  2992  0 21:20 pts/0    00:00:00 grep redis
   
  
 


10. Start Redis




 
  
  [cevent@hadoop213 redis-3.0.4]$ redis-server redis.conf    start the Redis server
  [cevent@hadoop213 redis-3.0.4]$ redis-cli -p 6379    start the Redis client on port 6379
  127.0.0.1:6379> ping    test the connection; a healthy server replies PONG
  PONG
  127.0.0.1:6379> set k1 hello cevent    invalid statement: the extra token is not a valid SET option
  (error) ERR syntax error
  127.0.0.1:6379> set k1 hello    Redis stores data strictly as key-value pairs
  OK
  127.0.0.1:6379> get k1    fetch k1
  "hello"
  127.0.0.1:6379> shutdown    stop Redis
  not connected> exit
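The (error) ERR syntax error above happens because everything after the value must be a recognized SET option, and "cevent" is not one. To store a value containing a space, quote it; SET also accepts options such as EX (expiry in seconds) and NX (only set if the key does not exist). A minimal sketch:

  127.0.0.1:6379> set k1 "hello cevent"    quote the value if it contains a space
  127.0.0.1:6379> set k2 hello EX 60 NX    set k2 only if it does not exist, expiring after 60 seconds
  127.0.0.1:6379> ttl k2                   remaining time-to-live of k2, in seconds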
  
 


11. Redis basics: performance testing with redis-benchmark




 
  
  [cevent@hadoop213 redis-3.0.4]$ redis-benchmark    run the Redis benchmark (tests Redis performance)

  [cevent@hadoop213 redis-3.0.4]$ redis-benchmark
  Writing to socket: Connection refused    the benchmark cannot connect; stop the firewall and make sure the server is running

  [cevent@hadoop213 redis-3.0.4]$ sudo service iptables stop
  [sudo] password for cevent:

  The Redis service had not been started yet:
  [cevent@hadoop213 redis-3.0.4]$ redis-server redis.conf    start the Redis server
  [cevent@hadoop213 redis-3.0.4]$ redis-benchmark    run the benchmark
  ====== PING_INLINE ======
    100000 requests completed in 0.64 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  99.35% <= 1 milliseconds
  99.92% <= 2 milliseconds
  99.95% <= 84 milliseconds
  99.95% <= 85 milliseconds
  100.00% <= 85 milliseconds
  155521.00 requests per second
   
  ====== PING_BULK ======
    100000 requests completed in 0.55 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  99.55% <= 1 milliseconds
  100.00% <= 1 milliseconds
  183150.19 requests per second
   
  ====== SET ======
    100000 requests completed in 0.56 seconds    (100,000 write requests handled in 0.56 s)
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  99.50% <= 1 milliseconds
  100.00% <= 1 milliseconds
  176991.16 requests per second
   
  ====== GET ======
    100000 requests completed in 0.56 seconds    (100,000 read requests handled in 0.56 s)
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  99.18% <= 1 milliseconds
  99.98% <= 2 milliseconds
  100.00% <= 2 milliseconds
  177619.89 requests per second
   
  ====== INCR ======
    100000 requests completed in 0.55 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  99.47% <= 1 milliseconds
  100.00% <= 1 milliseconds
  182481.77 requests per second
   
  ====== LPUSH ======
    100000 requests completed in 0.61 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  99.16% <= 1 milliseconds
  99.77% <= 2 milliseconds
  99.83% <= 3 milliseconds
  99.85% <= 12 milliseconds
  99.91% <= 13 milliseconds
  99.95% <= 34 milliseconds
  100.00% <= 34 milliseconds
  162866.44 requests per second
   
  ====== LPOP ======
    100000 requests completed in 0.55 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  99.63% <= 1 milliseconds
  100.00% <= 1 milliseconds
  181818.17 requests per second
   
  ====== SADD ======
    100000 requests completed in 0.57 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  99.42% <= 1 milliseconds
  99.98% <= 28 milliseconds
  100.00% <= 28 milliseconds
  174216.03 requests per second
   
  ====== SPOP ======
    100000 requests completed in 0.54 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  99.57% <= 1 milliseconds
  100.00% <= 1 milliseconds
  186219.73 requests per second
   
  ====== LPUSH (needed to benchmark LRANGE) ======
    100000 requests completed in 0.56 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  99.38% <= 1 milliseconds
  99.85% <= 2 milliseconds
  99.95% <= 12 milliseconds
  100.00% <= 13 milliseconds
  100.00% <= 13 milliseconds
  177935.95 requests per second
   
  ====== LRANGE_100 (first 100 elements) ======
    100000 requests completed in 2.06 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  92.17% <= 1 milliseconds
  99.53% <= 2 milliseconds
  99.95% <= 32 milliseconds
  99.95% <= 74 milliseconds
  99.95% <= 105 milliseconds
  99.96% <= 106 milliseconds
  100.00% <= 106 milliseconds
  48449.61 requests per second
   
  ====== LRANGE_300 (first 300 elements) ======
    100000 requests completed in 5.14 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  8.19% <= 1 milliseconds
  90.99% <= 2 milliseconds
  98.45% <= 3 milliseconds
  99.93% <= 4 milliseconds
  100.00% <= 4 milliseconds
  19474.20 requests per second
   
  ====== LRANGE_500 (first 450 elements) ======
    100000 requests completed in 7.22 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  1.27% <= 1 milliseconds
  56.74% <= 2 milliseconds
  92.31% <= 3 milliseconds
  97.25% <= 4 milliseconds
  99.59% <= 5 milliseconds
  99.97% <= 6 milliseconds
  100.00% <= 6 milliseconds
  13860.01 requests per second
   
  ====== LRANGE_600 (first 600 elements) ======
    100000 requests completed in 9.40 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  0.67% <= 1 milliseconds
  20.65% <= 2 milliseconds
  75.58% <= 3 milliseconds
  89.51% <= 4 milliseconds
  94.79% <= 5 milliseconds
  98.49% <= 6 milliseconds
  99.74% <= 7 milliseconds
  99.91% <= 8 milliseconds
  99.94% <= 9 milliseconds
  99.95% <= 10 milliseconds
  99.95% <= 36 milliseconds
  99.96% <= 37 milliseconds
  99.97% <= 38 milliseconds
  100.00% <= 39 milliseconds
  100.00% <= 39 milliseconds
  10633.77 requests per second
   
  ====== MSET (10 keys) ======
    100000 requests completed in 1.13 seconds
    50 parallel clients
    3 bytes payload
    keep alive: 1
   
  96.89% <= 1 milliseconds
  100.00% <= 1 milliseconds
  88652.48 requests per second
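The bare redis-benchmark run above uses the defaults: 100000 requests, 50 parallel clients, a 3-byte payload, and every test. The run can be narrowed with command-line flags; a minimal sketch (host and port are the defaults used throughout this post):

  redis-benchmark -h 127.0.0.1 -p 6379 -c 50 -n 100000 -t set,get -q    # -q prints one summary line per test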
   
  
 


12. Redis ships with 16 databases by default (aside: a HashSet is implemented on top of a HashMap; strictly speaking there is no separate HashSet, only HashMap)




 
  
  [cevent@hadoop213 ~]$ redis-cli -p 6379    open the client
  127.0.0.1:6379> ping
  PONG
  127.0.0.1:6379> get k1    currently in DB 0
  "hello"
  127.0.0.1:6379> select 6    switch to DB 6 (SELECT changes the current database; indexes run 0-15)
  OK
  127.0.0.1:6379[6]> get k1    DB 6 does not contain k1
  (nil)
  127.0.0.1:6379[6]> select 0    switch back to DB 0
  OK
  127.0.0.1:6379> get k1
  "hello"

  127.0.0.1:6379> DBSIZE    number of keys in the current database
  (integer) 4    k1 plus the 3 keys left behind by redis-benchmark
  127.0.0.1:6379> KEYS *
  1) "key:__rand_int__"
  2) "mylist"
  3) "k1"
  4) "counter:__rand_int__"
  127.0.0.1:6379> set k2 value2    create k2
  OK
  127.0.0.1:6379> set k3 value3    create k3
  OK
  127.0.0.1:6379> dbsize
  (integer) 6
  127.0.0.1:6379> keys *
  1) "key:__rand_int__"
  2) "mylist"
  3) "k1"
  4) "counter:__rand_int__"
  5) "k3"
  6) "k2"

  127.0.0.1:6379> keys k?    pattern match: keys named "k" followed by exactly one character
  1) "k1"
  2) "k3"
  3) "k2"

  127.0.0.1:6379> FLUSHDB    clear the current database only (FLUSHALL clears all 16 databases)
  127.0.0.1:6379> KEYS *
  (empty list or set)
  127.0.0.1:6379> set k1 value1    create k1 in DB 0
  OK
  127.0.0.1:6379> set k2 value2
  OK
  127.0.0.1:6379> set k3 value3
  OK
  127.0.0.1:6379> select 1    switch to DB 1
  OK
  127.0.0.1:6379[1]> set class cevent
  OK
  127.0.0.1:6379[1]> set k3 cevent3    create k3 in DB 1
  OK
  127.0.0.1:6379[1]> keys *
  1) "k3"
  2) "class"
  127.0.0.1:6379[1]> select 0
  OK
  127.0.0.1:6379> FLUSHALL    clear the data in every database
  OK
  127.0.0.1:6379> keys *
  (empty list or set)
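To see the difference between FLUSHDB and FLUSHALL in isolation, a minimal sketch: put one key in DB 0 and one in DB 1, flush only DB 1, and confirm the key in DB 0 survives (FLUSHALL would have removed both):

  127.0.0.1:6379> set a 1          key in DB 0
  127.0.0.1:6379> select 1
  127.0.0.1:6379[1]> set b 2       key in DB 1
  127.0.0.1:6379[1]> flushdb       clears DB 1 only
  127.0.0.1:6379[1]> keys *        (empty list or set)
  127.0.0.1:6379[1]> select 0
  127.0.0.1:6379> get a            still returns "1"; FLUSHALL would wipe every database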
   
  
 

