OpenStack controller HA

Host allocation:

Hostname        IP (static)      OS                           Specs                            Role

controller01    192.168.20.21    CentOS-6.4-x86_64-minimal    2 CPU, 4G RAM, 50G disk, 1 NIC   Controller node 01

controller02    192.168.20.22    CentOS-6.4-x86_64-minimal    2 CPU, 4G RAM, 50G disk, 1 NIC   Controller node 02

myslserver      192.168.20.25    CentOS-6.4-x86_64-minimal    2 CPU, 4G RAM, 50G disk, 1 NIC   Database server

VIP: 192.168.20.20

1. Database configuration

(1). Install MySQL with yum

[root@myslserver ~]# yum -y install mysql mysql-server

(2). Start the database and enable it at boot

[root@myslserver ~]# service mysqld start

[root@myslserver ~]# chkconfig mysqld on

(3). Set the database root password

[root@myslserver ~]# mysqladmin -uroot password 'passwd'

(4). Create the keystone, glance, nova, and cinder databases

[root@myslserver ~]# mysql -u root -ppasswd

mysql> create database keystone;

mysql> grant all on keystone.* to 'keystone'@'%' identified by 'keystone';

mysql> create database glance;

mysql> grant all on glance.* to 'glance'@'%' identified by 'glance';

mysql> create database nova;

mysql> grant all on nova.* to 'nova'@'%' identified by 'nova'; 

mysql> create database cinder;

mysql> grant all on cinder.* to 'cinder'@'%' identified by 'cinder'; 

mysql> flush privileges;

mysql> quit;
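
The four CREATE/GRANT pairs above are identical in shape; the loop below (a hypothetical helper, not part of the original post) emits the same statements, with each service name doubling as database name, user, and password exactly as above:

```shell
# Emit the CREATE/GRANT statements for all four OpenStack services.
# Each service name doubles as database name, user, and password,
# matching the interactive mysql session above.
for svc in keystone glance nova cinder; do
  printf "CREATE DATABASE IF NOT EXISTS %s;\n" "$svc"
  printf "GRANT ALL ON %s.* TO '%s'@'%%' IDENTIFIED BY '%s';\n" "$svc" "$svc" "$svc"
done
```

Piping the output into `mysql -u root -ppasswd` applies all four in one pass.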

2. Controller node configuration

The following is the controller configuration. The two nodes are configured almost identically; controller01 is used for the examples below.

Configuration legend (the original post used colored text, which does not survive in plain text):

Black: configuration identical on controller01 and controller02.

Blue: controller01-specific configuration.

Green: controller02-specific configuration.

Orange: run on one node only.

2.1. Initial setup

(1). Configure the hosts file

[root@controller01 ~]# vi /etc/hosts

192.168.20.20   controller

192.168.20.21   controller01

192.168.20.22   controller02

192.168.20.25   mysqlserver

(2). Configure DNS resolution so yum updates work

[root@controller01 ~]# vi /etc/resolv.conf 

nameserver 202.106.0.20

nameserver 202.96.69.38

nameserver 8.8.8.8

(3). Install basic management tools with yum

[root@controller01 ~]# yum -y install wget parted ntpdate

(4). Partition the disk for Gluster storage

[root@controller01 ~]# parted /dev/vda

GNU Parted 2.1

Using /dev/vda

Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) print free

Model: Virtio Block Device (virtblk)

Disk /dev/vda: 53.7GB

Sector size (logical/physical): 512B/512B

Partition Table: msdos

Number  Start   End     Size    Type     File system     Flags

        32.3kB  1049kB  1016kB           Free Space

 1      1049kB  211MB   210MB   primary  ext4            boot

 2      211MB   20.4GB  20.2GB  primary  ext4

 3      20.4GB  21.5GB  1074MB  primary  linux-swap(v1)

        21.5GB  53.7GB  32.2GB           Free Space

(parted) mkpart                                                           

Partition type?  primary/extended? extended

Start? 21.5G

End? 53.7G

WARNING: the kernel failed to re-read the partition table on /dev/vda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.

(parted) mkpart                                                           

Partition type?  primary/logical? logical

File system type?  [ext2]? ext4

Start? 21.5G

End? 42.7G

WARNING: the kernel failed to re-read the partition table on /dev/vda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.

(parted) mkpart                                                           

Partition type?  primary/logical? logical

File system type?  [ext2]? ext4

Start? 42.7G

End? 53.7G

WARNING: the kernel failed to re-read the partition table on /dev/vda (Device or resource busy).  As a result, it may not reflect all of your changes until after reboot.

(parted) print free                                                       

Model: Virtio Block Device (virtblk)

Disk /dev/vda: 53.7GB

Sector size (logical/physical): 512B/512B

Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags

        32.3kB  1049kB  1016kB            Free Space

 1      1049kB  211MB   210MB   primary   ext4            boot

 2      211MB   20.4GB  20.2GB  primary   ext4

 3      20.4GB  21.5GB  1074MB  primary   linux-swap(v1)

 4      21.5GB  53.7GB  32.2GB  extended                  lba

 5      21.5GB  42.7GB  21.2GB  logical

 6      42.7GB  53.7GB  11.0GB  logical

(parted) quit      

[root@controller01 ~]# reboot

[root@controller01 ~]# mkfs -t ext4 /dev/vda5

[root@controller01 ~]# mkdir /data

[root@controller01 ~]# echo '/dev/vda5                                 /data                   ext4    defaults        0 0' >> /etc/fstab

[root@controller01 ~]# mount /dev/vda5 /data
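
Before rebooting, it is worth checking that the new fstab line parses as intended; this sketch exercises the same entry against a scratch copy (swap FSTAB for /etc/fstab on a real node):

```shell
# Dry-run the fstab entry against a temporary file rather than /etc/fstab.
FSTAB=$(mktemp)
echo '/dev/vda5  /data  ext4  defaults  0 0' >> "$FSTAB"
# Field 2 is the mount point; print device and filesystem type to eyeball them.
awk '$2 == "/data" {print $1, $3}' "$FSTAB"    # prints: /dev/vda5 ext4
rm -f "$FSTAB"
```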

2.2. Install and configure Gluster

(1). Set up the Gluster yum repository

[root@controller01 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

(2). Install Gluster with yum

[root@controller01 ~]# yum -y install glusterfs-server glusterfs

(3). Start the gluster service and enable it at boot

[root@controller01 ~]# service glusterd start

[root@controller01 ~]# chkconfig glusterd on

(4). Add the storage peers

[root@controller01 ~]# gluster peer probe controller02

[root@controller02 ~]# gluster peer probe controller01

(5). Check peer status

[root@controller01 ~]# gluster peer status

(6). Create and start the volume

[root@controller01 ~]# gluster volume create vol-storage replica 2 controller01:/data/gluster controller02:/data/gluster

[root@controller01 ~]# gluster volume start vol-storage 

(7). Disable the volume's built-in NFS export

[root@controller01 ~]# gluster volume set vol-storage nfs.disable on

(8). Mount the volume

[root@controller01 ~]# mkdir -p /openstack

[root@controller01 ~]# mount -t glusterfs controller01:/vol-storage /openstack

[root@controller01 ~]# echo "controller01:/vol-storage /openstack glusterfs defaults,_netdev 0 0" >> /etc/fstab

[root@controller02 ~]# mount -t glusterfs controller02:/vol-storage /openstack

[root@controller02 ~]# echo "controller02:/vol-storage /openstack glusterfs defaults,_netdev 0 0" >> /etc/fstab
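
The two mount/fstab pairs above differ only in which hostname prefixes the volume (each node mounts through itself), so a single sketch (hypothetical helper, not from the post) generates the correct line on either node:

```shell
# Build the node-local fstab entry for the Gluster volume.
# Each controller mounts the volume through its own short hostname.
HOST=$(hostname -s)
echo "${HOST}:/vol-storage /openstack glusterfs defaults,_netdev 0 0"
```

Appending that output to /etc/fstab reproduces the lines above; the _netdev option defers the mount until networking is up.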

2.3. Install and configure HAProxy and Keepalived

(1). Edit sysctl.conf

[root@controller01 ~]# vi /etc/sysctl.conf

net.ipv4.ip_forward = 1

net.ipv4.ip_nonlocal_bind = 1

net.ipv4.conf.all.rp_filter = 0

net.ipv4.conf.default.rp_filter = 0

[root@controller01 ~]# sysctl -p
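
The same keys can be read back through /proc to confirm that `sysctl -p` took effect; ip_nonlocal_bind in particular is what lets haproxy on the BACKUP node bind the VIP it does not yet own:

```shell
# Read the kernel keys back through /proc (equivalent to sysctl -n).
# On the controllers, after `sysctl -p`, both should print 1.
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/ip_nonlocal_bind
```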

(2). Install HAProxy and Keepalived with yum

[root@controller01 ~]# yum -y install keepalived haproxy

(3). Configure Keepalived

[root@controller01 ~]# cp -av /etc/keepalived/keepalived.conf  /etc/keepalived/keepalived.conf_bak

[root@controller01 ~]# echo "" > /etc/keepalived/keepalived.conf 

[root@controller01 ~]# vi /etc/keepalived/keepalived.conf 

vrrp_script haproxy-check {

    script "killall -0 haproxy"

    interval 2

    weight 10

}

vrrp_instance openstack-vip {

    state BACKUP

    priority 102

    interface eth0

    virtual_router_id 80

    advert_int 3

    virtual_ipaddress {

        192.168.20.20

    }

    track_script {

        haproxy-check

    }

}
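
The color convention marks node-specific values, but it is lost in plain text; in a standard keepalived active/backup pair only the priority differs between the nodes. A plausible controller02 variant is sketched below (the 101 value is an assumption, not taken from the post; it only needs to be lower than controller01's 102):

```
vrrp_instance openstack-vip {
    state BACKUP
    priority 101            # assumed: lower than controller01's 102, so
                            # controller01 holds the VIP while healthy
    interface eth0
    virtual_router_id 80
    advert_int 3
    virtual_ipaddress {
        192.168.20.20
    }
    track_script {
        haproxy-check
    }
}
```

With the haproxy-check script's weight 10, a node whose haproxy process dies loses those 10 points and yields the VIP to its peer.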

(4). Configure HAProxy

[root@controller01 ~]# cp -av /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg_bak

[root@controller01 ~]# echo "" > /etc/haproxy/haproxy.cfg

[root@controller01 ~]# vi /etc/haproxy/haproxy.cfg

global

    daemon


defaults

    mode http

    maxconn 10000

    timeout connect 10s

    timeout client 10s

    timeout server 10s


frontend horizon-http-vip

    bind 192.168.20.20:80

    default_backend horizon-http-api


frontend keystone-admin-vip

    bind 192.168.20.20:35357

    default_backend keystone-admin-api


frontend keystone-public-vip

    bind 192.168.20.20:5000

    default_backend keystone-public-api


frontend quantum-vip

    bind 192.168.20.20:9696

    default_backend quantum-api


frontend glance-registry-vip

    bind 192.168.20.20:9191

    default_backend glance-registry-api


frontend glance-vip

    bind 192.168.20.20:9292

    default_backend glance-api


frontend nova-ec2-vip

    bind 192.168.20.20:8773

    default_backend nova-ec2-api


frontend nova-novnc-vip

    bind 192.168.20.20:6080

    default_backend nova-novnc-api


frontend nova-compute-vip

    bind 192.168.20.20:8774

    default_backend nova-compute-api


frontend nova-metadata-vip

    bind 192.168.20.20:8775

    default_backend nova-metadata-api


frontend cinder-vip

    bind 192.168.20.20:8776

    default_backend cinder-api


backend horizon-http-api

    balance roundrobin

    server controller01 192.168.20.21:80 check inter 10s

    server controller02 192.168.20.22:80 check inter 10s


backend keystone-admin-api

    balance roundrobin

    server controller01 192.168.20.21:35357 check inter 10s

    server controller02 192.168.20.22:35357 check inter 10s


backend keystone-public-api

    balance roundrobin

    server controller01 192.168.20.21:5000 check inter 10s

    server controller02 192.168.20.22:5000 check inter 10s


backend quantum-api

    balance roundrobin

    server controller01 192.168.20.21:9696 check inter 10s

    server controller02 192.168.20.22:9696 check inter 10s


backend glance-registry-api

    balance roundrobin

    server controller01 192.168.20.21:9191 check inter 10s

    server controller02 192.168.20.22:9191 check inter 10s


backend glance-api

    balance roundrobin

    server controller01 192.168.20.21:9292 check inter 10s

    server controller02 192.168.20.22:9292 check inter 10s


backend nova-ec2-api

    balance roundrobin

    server controller01 192.168.20.21:8773 check inter 10s

    server controller02 192.168.20.22:8773 check inter 10s


backend nova-novnc-api

    balance roundrobin

    server controller01 192.168.20.21:6080 check inter 10s

    server controller02 192.168.20.22:6080 check inter 10s


backend nova-compute-api

    balance roundrobin

    server controller01 192.168.20.21:8774 check inter 10s

    server controller02 192.168.20.22:8774 check inter 10s


backend nova-metadata-api

    balance roundrobin

    server controller01 192.168.20.21:8775 check inter 10s

    server controller02 192.168.20.22:8775 check inter 10s


backend cinder-api

    balance roundrobin

    server controller01 192.168.20.21:8776 check inter 10s

    server controller02 192.168.20.22:8776 check inter 10s
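
For troubleshooting it is common to also expose HAProxy's built-in statistics page; the listener below is an addition, not part of the original post, and binds each node's own address rather than the VIP (use 192.168.20.22 on controller02):

```
listen stats
    bind 192.168.20.21:8080
    mode http
    stats enable
    stats uri /haproxy-stats
```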

(5). Enable and start the haproxy and keepalived services

[root@controller01 ~]# chkconfig haproxy on

[root@controller01 ~]# chkconfig keepalived on

[root@controller01 ~]# service haproxy start

[root@controller01 ~]# service keepalived start

(6). Check service status

[root@controller01 ~]# netstat -antp | grep haproxy

[root@controller01 ~]# ip -o -f inet addr show


Reposted from: https://www.cnblogs.com/myiaas/p/4161314.html
