1. Install the virtual machine:
Use the ISO image file;
Power on the VM;
Choose "skip" (skip the media test);
Configure the network: edit the connection and enable "connect automatically";
Install the build environment:
yum install gcc-c++
yum install pcre-devel
yum install zlib-devel
yum install openssl-devel
yum install git
yum install wget
yum install tree
yum install perl-devel
----------------
1. Network configuration with a fixed (static) IP: 172.21.148.223 169.254.209.185
NIC configuration file:
/etc/sysconfig/network-scripts/ifcfg-eth0
Steps to modify it:
vi /etc/sysconfig/network-scripts/ifcfg-eth0
vi opens in command (normal) mode:
1. Use the arrow keys to move the cursor to the place you want to change.
2. Press i to enter insert mode; you can now edit the file.
Change:
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.*
NETMASK=255.255.255.0
3. Press Esc to return to command mode.
4. Type ":" to enter command-line mode.
5. Type wq and press Enter to save and quit [wq! to force].
2. Bring up the NIC
#ifup eth0
3. Restart the network service
#service network restart
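For reference, a complete ifcfg-eth0 might look like the sketch below; the address, gateway, and DNS values are placeholders for a typical 192.168.1.x network, so substitute your own:
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.100    # placeholder address; pick a free one on your subnet
NETMASK=255.255.255.0
GATEWAY=192.168.1.1     # placeholder gateway
DNS1=192.168.1.1        # placeholder DNS server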
---
Disable the firewall
service iptables stop
service iptables status
chkconfig iptables off
----------------------
Install the JDK
rpm -ivh jdk-7u79-linux-i586.rpm
vi .bashrc    (in the user's home directory)
CLASSPATH=.
JAVA_HOME=/usr/java/latest
PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH
export JAVA_HOME
export PATH
Save and quit with wq, then run source .bashrc to reload the user's environment variables.
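To confirm the JDK and the new variables took effect, something like:
java -version       # should report 1.7.0_79
echo $JAVA_HOME     # should print /usr/java/latest
which java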
---------------------------
Install Tomcat
tar -zxf software/apache-tomcat-7.0.64.tar.gz
./apache-tomcat-7.0.64/bin/startup.sh    # run
Default installation directory: /usr/java
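A quick way to confirm Tomcat is up, assuming the default port 8080:
curl -I http://localhost:8080                     # expect an HTTP/1.1 200 response
tail -f apache-tomcat-7.0.64/logs/catalina.out    # watch the startup log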
---------------------
Install Nginx
tar -zxf nginx-1.11.1.tar.gz
tar -zxf nginx-sticky-module-1.1.tar.gz
---------------
Modify ngx_http_sticky_misc.c
Change line 281 to:
digest->len = ngx_sock_ntop(in,sizeof(struct sockaddr_in),digest->data, len, 1);
Modify ngx_http_sticky_module.c
Add at the top:
#include <nginx.h>
Change line 333 to:
#if defined(nginx_version) && nginx_version >= 1009000
iphp->rrp.current = peer;
#else
iphp->rrp.current = iphp->selected_peer;
#endif
Build nginx
./configure --add-module=/root/nginx-sticky-module-1.1
make
make install
Start nginx
./sbin/nginx
Reload the nginx configuration
./sbin/nginx -s reload
Force-stop nginx
./sbin/nginx -s stop
Gracefully shut down nginx
./sbin/nginx -s quit
Check the nginx processes
ps -aux | grep nginx
Configure nginx.conf to round-robin requests across the Tomcat instances
http{
upstream <domain-or-IP> {
server 192.168.19.131:8080 weight=1;
server 192.168.19.131:8081 weight=1;
#server 192.168.19.131:8082 down;      # down: this server is unavailable
#server 192.168.19.131:8083 backup;    # backup: used only when the other servers are down or busy
# enable cookie-based stickiness
sticky;
}
server{
listen 80;
location /{
proxy_pass http://<domain-or-IP>;
}
}
}
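To check the balancing behavior, a rough sketch (replace <nginx-host> with the machine nginx runs on; the sticky module issues a cookie, named route by default, that pins a client to one backend):
curl -i http://<nginx-host>/                  # run several times; without a cookie, requests rotate across the tomcats
curl -i -c cookies.txt http://<nginx-host>/   # save the sticky cookie from the first response
curl -i -b cookies.txt http://<nginx-host>/   # replaying the cookie should keep hitting the same tomcat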
---------------------
Install libfastcommon
tar zxf V1.0.35.tar.gz
cd libfastcommon-1.0.35/
./make.sh
./make.sh install
Install FastDFS
tar -zxf V5.10.tar.gz
cd fastdfs-5.10/
./make.sh
./make.sh install
Note: after installation, the configuration files FastDFS needs at startup are placed in /etc/fdfs by default.
Create the directories
mkdir -p /data/fdfs/{tracker,storage/store}
mkdir /data/fastdht
tree /data/    # verify
Create the configuration files needed at startup
cp /etc/fdfs/tracker.conf.sample /etc/fdfs/tracker.conf
cp /etc/fdfs/storage.conf.sample /etc/fdfs/storage.conf
cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf
tree /etc/fdfs/    # verify
Configure the tracker (tracker.conf)
base_path=/data/fdfs/tracker
Configure the storage (storage.conf)
base_path=/data/fdfs/storage
store_path0=/data/fdfs/storage/store
tracker_server=192.168.19.131:22122
tracker_server=192.168.19.128:22122
Configure the client (client.conf)
base_path=/tmp
tracker_server=192.168.19.131:22122
tracker_server=192.168.19.128:22122
---------------------
Start the services with
/etc/init.d/fdfs_trackerd {start|stop|status|restart|condrestart}
/etc/init.d/fdfs_storaged {start|stop|status|restart|condrestart}
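Once the tracker and storage are running, the bundled client tools give a quick sanity check (a sketch; test.png is any local file):
/usr/bin/fdfs_monitor /etc/fdfs/client.conf                # lists the tracker and the storage servers it knows about
/usr/bin/fdfs_upload_file /etc/fdfs/client.conf test.png   # prints the returned file id, e.g. group1/M00/00/00/xxxxxx.png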
-------------------------
Integrate with the Nginx proxy server
Install fastdfs-nginx-module
git clone https://github.com/happyfish100/fastdfs-nginx-module.git
cd nginx-1.11.1
./configure --prefix=/usr/local/nginx-1.11.1/ --add-module=/root/fastdfs-nginx-module/src/
make && make install
Copy the configuration files
[root@CentOS ~]# cp /root/anzhuangbao/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
[root@CentOS ~]# cp /root/anzhuangbao/fastdfs-5.10/conf/http.conf /etc/fdfs/
[root@CentOS ~]# cp /root/anzhuangbao/fastdfs-5.10/conf/anti-steal.jpg /etc/fdfs/
[root@CentOS ~]# cp /root/anzhuangbao/fastdfs-5.10/conf/mime.types /etc/fdfs/
1. Modify nginx.conf
server{
..
location /group1/M00 {
root /data/fdfs/storage/store;
ngx_fastdfs_module;
}
}
2. Modify /etc/fdfs/mod_fastdfs.conf
tracker_server=192.168.149.128:22122
tracker_server=192.168.149.130:22122
group_name=group1
url_have_group_name = true
store_path0=/data/fdfs/storage/store
3. Start nginx and access a file
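For example, a file id returned by fdfs_upload_file should now be reachable over HTTP through nginx (the file id below is a placeholder; the host is the storage machine running this nginx):
curl -I http://192.168.149.128/group1/M00/00/00/<file-id>.jpg    # expect HTTP/1.1 200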
Install Berkeley DB (download db-4.7.25.tar.gz)
[root@CentOS ~]# tar -zxf db-4.7.25.tar.gz
[root@CentOS ~]# cd db-4.7.25
[root@CentOS db-4.7.25]# cd build_unix/
[root@CentOS build_unix]# ./../dist/configure
[root@CentOS build_unix]# make
[root@CentOS build_unix]# make install
2. Install FastDHT
[root@CentOS ~]# tar zxf FastDHT_v2.01.tar.gz
[root@CentOS ~]# cd FastDHT
[root@CentOS FastDHT]# ./make.sh
[root@CentOS FastDHT]# ./make.sh install
tree /etc/fdht/    # verify
1. Modify fdhtd.conf
base_path=/data/fastdht
2. Modify fdht_servers.conf
group_count = 2
group0 = 192.168.145.150:11411
group1 = 192.168.145.151:11411
3. Modify fdht_client.conf
base_path=/tmp/
4. Start the FDHT service (create the data directory first)
mkdir /data/fastdht
/usr/local/bin/fdhtd /etc/fdht/fdhtd.conf
fdht_set /etc/fdht/fdht_client.conf bbs:happyfish name=yq,sex=M;
fdht_get /etc/fdht/fdht_client.conf bbs:happyfish name,sex,mail
Modify /etc/fdfs/storage.conf
check_file_duplicate=1
keep_alive=1
#include /etc/fdht/fdht_servers.conf
Start the fdhtd, fdfs_trackerd and fdfs_storaged services respectively:
[root@CentOS usr]# /usr/local/bin/fdhtd /etc/fdht/fdhtd.conf
[root@CentOS usr]# /etc/init.d/fdfs_trackerd start
[root@CentOS usr]# /etc/init.d/fdfs_storaged start
--------------------------------------------
Use Nginx to proxy the file storage groups
vi conf/nginx.conf
http{
upstream backend1 {    # group1
server 192.168.149.129:80;
}
upstream backend2 {    # group2
server 192.168.149.130:80;
}
server {
listen 80;
server_name localhost;
...
location ~* ^/group1.* {
proxy_pass http://backend1;
}
location ~* ^/group2.* {
proxy_pass http://backend2;
}
location / {
root html;
index index.html index.htm;
}
}
}
------------------------------------------------
redis
tar -zxf redis-2.8.6.tar.gz
cd redis-2.8.6
make
make install
mkdir /etc/redis
mkdir /var/redis
cp utils/redis_init_script /etc/init.d/redis
Modify /etc/init.d/redis
Add at the top:
# chkconfig: 345 60 60
mkdir /var/redis/6379
cp redis.conf /etc/redis/6379.conf
vi /etc/redis/6379.conf
Set daemonize to yes
Set pidfile to /var/run/redis_6379.pid
Set logfile to /var/log/redis_6379.log
chkconfig --add redis
chkconfig redis on
service redis start
service redis stop
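A minimal check that the service answers, assuming the default port 6379:
redis-cli -p 6379 ping        # PONG
redis-cli -p 6379 set foo bar
redis-cli -p 6379 get foo     # "bar"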
-------------------
redis-sentinel
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
Start Sentinel (either form works)
redis-sentinel sentinel.conf
redis-server sentinel.conf --sentinel
Note: remember to disable protected mode.
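To confirm Sentinel is actually watching the master, assuming Sentinel listens on its default port 26379:
redis-cli -p 26379 sentinel get-master-addr-by-name mymaster   # prints the current master's ip and port
redis-cli -p 26379 sentinel slaves mymaster                    # lists the known replicas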
----------------------
Redis 3.0 cluster
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
1. Modify the configuration files
For a pseudo-distributed setup on one machine, each instance needs its own port.
./redis-trib.rb create --replicas 1 <ip:port list>    # at least six nodes
./redis-trib.rb check 127.0.0.1:7000
./redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000
./redis-trib.rb reshard 127.0.0.1:7000
./redis-trib.rb del-node 127.0.0.1:7000 <node-id>
redis-cli -c -h 192.168.19.128 -p 6380    # test the 3.0 cluster; the client is redirected to the right node automatically
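A rough sketch of what to expect from the -c client, assuming one of the pseudo-distributed nodes listens on 7000:
redis-cli -c -p 7000 set user:1 tom    # may print "Redirected to slot [...]" before OK
redis-cli -c -p 7000 get user:1
redis-cli -p 7000 cluster nodes        # shows every node and the slot ranges it owns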
-------------------------------------
Install SSDB
wget --no-check-certificate https://github.com/ideawu/ssdb/archive/master.zip
yum install unzip
yum install -y autoconf
unzip master.zip
cd ssdb-master/
make && make install
cp ssdb-master/tools/ssdb.sh /usr/local/ssdb/
Modify the ssdb.conf configuration file
server:
	ip: 192.168.19.128
	port: 8888
Modify ssdb.sh
configs="/usr/local/ssdb/ssdb.conf"
Start the service
./ssdb.sh start
Connect to the service
./ssdb-cli -h 192.168.19.128 -p 8888
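Inside ssdb-cli, a small set/get round-trip is a quick sanity check, e.g.:
set name tom
get name      # should return tom
del name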
-------------------------------------
Install memcached
tar -zxf memcached-1.4.31.tar.gz -C /usr
cd /usr/memcached-1.4.31
./configure --prefix=/usr/local/memcache
make
make install
Start the memcached service
./bin/memcached -p 11211 -u root -vvv    # foreground (Ctrl+C to stop)
./bin/memcached -p 11211 -u root -d      # background (stop with kill -9 <pid>)
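The memcached text protocol can be exercised directly over telnet (the value bar is 3 bytes, hence the 3 in the set line):
telnet 127.0.0.1 11211
stats
set foo 0 0 3
bar
get foo
quit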
---------------------
Using memcached for session management (memcached-session-manager)
a) memcached-session-manager-${version}.jar
b) memcached-session-manager-tc6|7|8-${version}.jar (the adapter matching your Tomcat version)
c) spymemcached-2.11.1.jar
d) plus the remaining dependency jars
Copy these jars into Tomcat's lib directory, then configure conf/context.xml:
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="n1:192.168.0.114:11211,n2:192.168.0.114:11212"
sticky="false"
sessionBackupAsync="false"
lockingMode="uriPattern:/path1|/path2"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
--------------------------
1. Install MongoDB
a) tar -zxf mongodb-linux-i686-3.0.6.gz -C /usr/local
b) Start the mongod service
./bin/mongod --port 27017 --dbpath /root/mongodb/data/ --journal
Create the dbpath first: mkdir -p mongodb/data
c) Connect to MongoDB
i. ./bin/mongo --port 27017
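From the mongo shell, a small insert/query round-trip verifies the server; the database and collection names here are arbitrary:
> use testdb
> db.t_user.insert({name: "tom", age: 20})
> db.t_user.find()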
Installing the 64-bit version
// create the MongoDB yum repository
1. touch /etc/yum.repos.d/mongodb-org-3.4.repo
2. vi /etc/yum.repos.d/mongodb-org-3.4.repo
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/6/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
[root@CentOS ~]# yum install mongodb-org
Tip:
How to keep the rpm packages that yum downloads:
Install the plugin: yum install yum-plugin-downloadonly -y
yum reinstall mongodb-org --downloadonly --downloaddir=mongodb-org
------------------------------
Replica set setup: mkdir rep1 rep2 rep3
./mongod --dbpath rep1/ --port 27017 --replSet wcy
./mongod --dbpath rep2/ --port 27018 --replSet wcy
./mongod --dbpath rep3/ --port 27019 --replSet wcy
rs.initiate({ _id:"wcy", members:[ {_id:0,host:"192.168.19.131:27017"}]})
rs.isMaster() / rs.status()
--------------------------------------
All changes to the cluster must be made on the primary.
More members can be added directly with rs.add("192.168.19.131:27010").
var conf = rs.config()    // returns the replica set configuration as a JSON document; apply changes with rs.reconfig(conf)
conf.members    // the array of member configurations
A member whose priority is 0 is always a secondary and can never be elected primary.
A member with hidden set to true does not serve clients.
votes: the more votes a member has, the more weight it carries in an election. If a replica set has more than 7 members, the extra members must have votes set to 0.
Environment: 10 machines, sharded
Three replica sets (2 shard replica sets + 1 config server replica set)
1 router (mongos) service
--------------------------------------------
shardsvr:
mkdir shard1-1 shard1-2 shard1-3
./mongod --dbpath shard1-1/ --port 27017 --replSet shard1 --shardsvr
./mongod --dbpath shard1-2/ --port 27018 --replSet shard1 --shardsvr
./mongod --dbpath shard1-3/ --port 27019 --replSet shard1 --shardsvr
rs.initiate({ _id:"shard1", members:[ {_id:0,host:"192.168.19.131:27017"}]})
mkdir shard2-1 shard2-2 shard2-3
./mongod --dbpath shard2-1/ --port 27027 --replSet shard2 --shardsvr
./mongod --dbpath shard2-2/ --port 27028 --replSet shard2 --shardsvr
./mongod --dbpath shard2-3/ --port 27029 --replSet shard2 --shardsvr
rs.initiate({ _id:"shard2", members:[ {_id:0,host:"192.168.19.131:27027"}]})
configsvr:
legacy:
./mongod --dbpath config1/ --port 27037 --configsvr
./mongod --dbpath config2/ --port 27038 --configsvr
./mongod --dbpath config3/ --port 27039 --configsvr
current (config servers run as a replica set):
./mongod --dbpath config1/ --port 27037 --replSet conf1 --configsvr
./mongod --dbpath config2/ --port 27038 --replSet conf1 --configsvr
./mongod --dbpath config3/ --port 27039 --replSet conf1 --configsvr
Router (mongos) service:
legacy:
./mongos --configdb 192.168.19.131:27037,192.168.19.131:27038,192.168.19.131:27039 --port 8000
current:
./mongos --configdb conf1/ip:port,ip:port,ip:port --port 8000
Initialize the shards (from a mongos shell)
sh.addShard("shard1/192.168.19.131:27017")
sh.addShard("shard2/192.168.19.131:27027")
Shard a collection; sharding strategies: range-based vs. hashed
sh.enableSharding("wcy");
sh.shardCollection("wcy.t_user",{_id:1}) 数据可能部分集中
sh.shardCollection("wcy.t_email",{_id:"hashed"}) 数据比较均匀