Alibaba Cloud Server Deployment Plan
----------------------------Basic Deployment-------------------------------------
################# Mount the data disk:
1. Check for a mountable data disk with fdisk -l. If /dev/xvdb is listed, it is a data disk that can be mounted; otherwise no data disk is available.
How to mount it:
fdisk -S 56 /dev/xvdb
At the prompts, enter "n", "p", "1", press Enter twice, then "wq"; partitioning starts and completes almost immediately.
Running "fdisk -l" again now shows that the new partition xvdb1 has been created.
mkfs.ext4 /dev/xvdb1    format the new partition
mkdir -pv /storage
echo '/dev/xvdb1  /storage ext4   defaults    0  0' >> /etc/fstab
mount -a
df -h    verify the mount
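The interactive fdisk answers above can also be fed in non-interactively, which is handy when provisioning several hosts. A sketch wrapped in a helper function (`provision_data_disk` is a name chosen here, not a standard tool); always confirm the device with `fdisk -l` first, since this destroys any data on the disk:

```shell
# DANGER: partitions and formats the given disk. Run only after
# confirming the device name with `fdisk -l`.
provision_data_disk() {
  disk=$1                                      # e.g. /dev/xvdb
  # same keystrokes as the manual steps: n, p, 1, two Enters, wq
  printf 'n\np\n1\n\n\nwq\n' | fdisk -S 56 "$disk"
  mkfs.ext4 "${disk}1"
  mkdir -pv /storage
  echo "${disk}1  /storage ext4   defaults    0  0" >> /etc/fstab
  mount -a && df -h /storage
}
# provision_data_disk /dev/xvdb
```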
--------------------------Application Deployment-------------------------------------------
mkdir -pv /storage/home/
mkdir -pv /storage/local/
##### Software installation
#### JDK installation
rpm -ivh jdk-7u67-linux-x64.rpm
vi /root/.bash_profile
Add the following:
JAVA_HOME=/usr/java/jdk1.7.0_67
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export PATH
export CLASSPATH
# Reload the profile so the variables take effect
source /root/.bash_profile
Test: java -version
#### Tomcat installation
# Install directory: /storage/local/
# Unpack Tomcat and copy it into /storage/local/
tar zxf apache-tomcat-7.0.54.tar.gz
cp -a apache-tomcat-7.0.54 /storage/local/tomcat1    port 8080
cp -a apache-tomcat-7.0.54 /storage/local/tomcat2    port 8082
cp -a apache-tomcat-7.0.54 /storage/local/tomcat3    port 8083
The ports are set in each instance's configuration file:
# tomcat1 configuration:
Change the following three ports in tomcat1/conf/server.xml:
shutdown port: <Server port="8005" shutdown="SHUTDOWN">
HTTP port: <Connector port="8080" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" />
AJP port: <Connector port="8009" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
# tomcat2 configuration:
Change the following three ports in tomcat2/conf/server.xml:
shutdown port: <Server port="8025" shutdown="SHUTDOWN">
HTTP port: <Connector port="8082" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" />
AJP port: <Connector port="8029" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
# tomcat3 configuration:
Change the following three ports in tomcat3/conf/server.xml:
shutdown port: <Server port="8035" shutdown="SHUTDOWN">
HTTP port: <Connector port="8083" maxHttpHeaderSize="8192" maxThreads="150" minSpareThreads="25" maxSpareThreads="75" enableLookups="false" redirectPort="8443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" />
AJP port: <Connector port="8039" enableLookups="false" redirectPort="8443" protocol="AJP/1.3" />
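Since tomcat2 and tomcat3 start as byte-for-byte copies of the stock tree, the three port edits can be applied with sed instead of hand-editing. A sketch; `bump_ports` is a helper name chosen here, and it assumes the copies still carry the stock 8005/8080/8009 ports:

```shell
# Rewrite the shutdown, HTTP and AJP ports in a copied server.xml.
bump_ports() {
  file=$1; shutdown=$2; http=$3; ajp=$4
  sed -i -e "s/port=\"8005\"/port=\"$shutdown\"/" \
         -e "s/port=\"8080\"/port=\"$http\"/" \
         -e "s/port=\"8009\"/port=\"$ajp\"/" "$file"
}
# bump_ports /storage/local/tomcat2/conf/server.xml 8025 8082 8029
# bump_ports /storage/local/tomcat3/conf/server.xml 8035 8083 8039
```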


## Session sharing
Add the Redis session libraries to each Tomcat's lib/ directory: commons-pool-1.6, jedis-2.1.0, tomcat-redis-session-manager-1.2-tomcat-7
tomcat1/conf/context.xml
####context.xml##############################################
<?xml version='1.0' encoding='utf-8'?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- The contents of this file will be loaded for each web application -->
<Context>

    <!-- Default set of monitored resources -->
    <WatchedResource>WEB-INF/web.xml</WatchedResource>

    <!-- Uncomment this to disable session persistence across Tomcat restarts -->
    <!--
    <Manager pathname="" />
    -->

    <!-- Uncomment this to enable Comet connection tacking (provides events
         on session expiration as well as webapp lifecycle) -->
    <!--
    <Valve className="org.apache.catalina.valves.CometConnectionManagerValve" />
    -->
    <Valve className="com.radiadesign.catalina.session.RedisSessionHandlerValve" />
    <Manager className="com.radiadesign.catalina.session.RedisSessionManager"
             host="192.168.1.26"
             port="6379"
             database="0"
             maxInactiveInterval="60"/>
</Context>

cd /storage/local/tomcat1/bin/
./startup.sh

Verify that it started:
tailf /storage/local/tomcat1/logs/catalina.out
Then open each Tomcat instance in a browser and check that the session id stays the same across instances.
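The browser check can be scripted: extract the JSESSIONID from one instance's Set-Cookie header and replay it against another. A sketch; `extract_jsessionid` is a helper defined here and the server hostname is a placeholder:

```shell
# Pull the JSESSIONID value out of HTTP response headers read from stdin.
extract_jsessionid() {
  sed -n 's/.*JSESSIONID=\([^;]*\).*/\1/p' | head -n 1
}

# With the Redis manager working, a session created on one port should be
# visible on the others. Placeholder host SERVER:
# sid=$(curl -s -D - -o /dev/null http://SERVER:8080/ | extract_jsessionid)
# curl -s -b "JSESSIONID=$sid" http://SERVER:8082/
```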
-------------------------Company-specific Software Deployment------------------------------------
##### ZooKeeper standalone installation
Unpack ZooKeeper into the install directory:
tar xf zookeeper-3.4.6.tar.gz
cd /storage/local/zookeeper-3.4.6/conf/
vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/storage/local/zookeeper/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Start: bin/zkServer.sh start
Client test:
bin/zkCli.sh -server localhost:2181

Pseudo-cluster mode:

A pseudo-cluster runs several ZooKeeper processes on one machine as a single ensemble. The example below starts 3 ZooKeeper processes.

Make two more copies of the zookeeper directory:
    |--zookeeper0  
    |--zookeeper1  
    |--zookeeper2  
 Edit zookeeper0/conf/zoo.cfg to:
    tickTime=2000    
    initLimit=5    
    syncLimit=2    
    dataDir=/Users/apple/zookeeper0/data    
    dataLogDir=/Users/apple/zookeeper0/logs    
    clientPort=4180  
    server.0=127.0.0.1:8880:7770    
    server.1=127.0.0.1:8881:7771    
    server.2=127.0.0.1:8882:7772  

This adds a few parameters, with the following meanings:
    initLimit: a ZooKeeper ensemble contains several servers, one of which is the leader; the rest are followers. initLimit bounds the heartbeat time between a follower and the leader during the initial connection. Here it is set to 5, meaning 5 * tickTime = 5 * 2000 = 10000 ms = 10 s.
    syncLimit: bounds the time allowed for a request/response message exchange between the leader and a follower. Here it is set to 2, meaning 2 * tickTime = 4000 ms.
    server.X=A:B:C: X is a number identifying the server; A is the server's IP address; B is the port the server uses to exchange messages with the leader; C is the port used for leader election. Because this is a pseudo-cluster on one machine, B and C must differ between servers.

Using zookeeper0/conf/zoo.cfg as a template, configure zookeeper1/conf/zoo.cfg and zookeeper2/conf/zoo.cfg; only dataDir, dataLogDir and clientPort need to change.

In each dataDir configured above, create a myid file containing a single number identifying the server. The number must match the X of the corresponding server.X line in zoo.cfg:
write 0 into /Users/apple/zookeeper0/data/myid, 1 into /Users/apple/zookeeper1/data/myid, and 2 into /Users/apple/zookeeper2/data/myid.

Enter each of /Users/apple/zookeeper0/bin, /Users/apple/zookeeper1/bin and /Users/apple/zookeeper2/bin and start the server.
Then pick any server directory and start a client:
    bin/zkCli.sh -server localhost:4180
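The myid/start steps above can be sketched as a loop. Paths follow the example layout; `BASE` defaults to a scratch directory here so the sketch is safe to dry-run, and should point at the real parent directory (e.g. /Users/apple) when actually provisioning:

```shell
# Create each server's data dir and myid file, then start it if present.
BASE=${BASE:-$(mktemp -d)}        # stand-in for the real parent directory
for i in 0 1 2; do
  mkdir -p "$BASE/zookeeper$i/data"
  echo "$i" > "$BASE/zookeeper$i/data/myid"   # must match server.$i in zoo.cfg
  if [ -x "$BASE/zookeeper$i/bin/zkServer.sh" ]; then
    "$BASE/zookeeper$i/bin/zkServer.sh" start
  fi
done
```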
    
Cluster mode:
Cluster-mode configuration is essentially the same as the pseudo-cluster.
Because the servers are deployed on different machines, their conf/zoo.cfg files can be identical.
An example:
    tickTime=2000    
    initLimit=5    
    syncLimit=2    
    dataDir=/home/zookeeper/data    
    dataLogDir=/home/zookeeper/logs    
    clientPort=4180  
    server.43=10.1.39.43:2888:3888  
    server.47=10.1.39.47:2888:3888    
    server.48=10.1.39.48:2888:3888   

This example deploys 3 ZooKeeper servers, on 10.1.39.43, 10.1.39.47 and 10.1.39.48. Note that the number in the myid file under each server's dataDir must be unique:
the 10.1.39.43 server's myid is 43, 10.1.39.47's is 47, and 10.1.39.48's is 48.


Database
login: root/19911006
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| crm                |
| jct                |
| mysql              |
| peisong            |
| performance_schema |
| product            |
| safe               |
| test               |
+--------------------+

nginx deployment:
yum -y install pcre-*  openssl openssl-devel
# the www user referenced below must exist, e.g.: useradd -M -s /sbin/nologin www
./configure --user=www --group=www --prefix=/storage/local/nginx --with-http_stub_status_module --with-http_ssl_module
make && make install
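With three Tomcat instances behind it, nginx typically acts as the load balancer. A minimal illustrative upstream block for /storage/local/nginx/conf/nginx.conf, assuming all three instances run on this host; since sessions are already shared through Redis, plain round-robin is enough and no sticky-session setup is needed:

```nginx
http {
    upstream tomcat_pool {
        server 127.0.0.1:8080;
        server 127.0.0.1:8082;
        server 127.0.0.1:8083;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://tomcat_pool;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```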