Swift + HAProxy + Keepalived Object Storage Setup



 
Swift HA architecture diagram
Components:
Swift:
 1. Proxy servers (swift-proxy-server)
 2. Account servers (swift-account-server)
 3. Container servers (swift-container-server)
 4. Object servers (swift-object-server)
 5. Configurable WSGI middleware that handles authentication (usually the Identity Service)
HA services:
 HAProxy servers
 Keepalived servers
Identity Service:
 Keystone server
I: Environment description
Networks:
Public: 192.168.128.0/24
Data network: 10.6.0.0/24
Replication network: 10.7.0.0/24
VIP: 192.168.128.55/32
Role assignment:
Keystone        192.168.128.35
Haproxy01       192.168.128.51 / 10.6.0.121
Haproxy02       192.168.128.52 / 10.6.0.122
Swift-proxy01   192.168.128.53 / 10.6.0.123
Swift-proxy02   192.168.128.54 / 10.6.0.124
Swift-storage01 192.168.128.56 / 10.6.0.126 / 10.7.0.126
Swift-storage02 192.168.128.57 / 10.6.0.127 / 10.7.0.127
Swift-storage03 192.168.128.58 / 10.6.0.128 / 10.7.0.128
Suggested /etc/hosts entries:
192.168.128.35    controller    
192.168.128.51    haproxy01
192.168.128.52    haproxy02
192.168.128.53    swift_proxy01
192.168.128.54    swift_proxy02
192.168.128.56    swift_storage01
192.168.128.57    swift_storage02
192.168.128.58    swift_storage03

Create the environment-variable file (all nodes):
# cat swiftrc
export OS_USERNAME=swift
export OS_PASSWORD=password
export OS_TENANT_NAME=service
export OS_AUTH_URL=http://controller:35357/v2.0
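
Once the swift user exists (created in section III below), you can sanity-check these credentials from any node. A quick check, assuming python-keystoneclient is installed:
# source swiftrc
# keystone token-get    # should print a token id if the credentials are valid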
【keystone node】
IP: 192.168.128.35
Hostname: controller
User: swift
Password: password
Tenant: service
【haproxy node 1】
IP: 192.168.128.51
Hostname: haproxy01
【haproxy node 2】
IP: 192.168.128.52
Hostname: haproxy02
【swift proxy node 1】
IP: 192.168.128.53
Hostname: swift_proxy01
【swift proxy node 2】
IP: 192.168.128.54
Hostname: swift_proxy02
【storage node 1】
IP: 192.168.128.56
Hostname: swift_storage01
【storage node 2】
IP: 192.168.128.57
Hostname: swift_storage02
【storage node 3】
IP: 192.168.128.58
Hostname: swift_storage03

II: Install the yum repositories and update packages (all nodes)
1. Download the repository packages (all nodes):
  # wget http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-6.noarch.rpm
  # wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
2. Install the repository packages:
  # rpm -Uvh rdo-release-havana-6.noarch.rpm
  # rpm -Uvh epel-release-6-8.noarch.rpm
3. Update the packages and reboot:
  # yum upgrade && reboot
III: Create and verify the Object Storage user (keystone node)
1. Install openstack-utils:
# yum install openstack-utils
2. In Keystone, create the user, link it to the service tenant, and grant it the admin role:
# keystone user-create --name=swift --pass=password --email=swift@163.com
# keystone user-role-add --user=swift --tenant=service --role=admin
3. Create the Object Storage service:
# keystone service-create --name=swift --type=object-store --description="Object Storage Service"
4. Register the Object Storage endpoint (public, internal, and admin API URLs), using the VIP address:
# keystone endpoint-create --service-id=$(keystone service-list | awk '/object-store/ {print $2}') \
  --publicurl='http://192.168.128.55:8080/v1/AUTH_%(tenant_id)s' \
  --internalurl='http://192.168.128.55:8080/v1/AUTH_%(tenant_id)s' \
  --adminurl='http://192.168.128.55:8080'

+-------------+---------------------------------------------------+
|   Property  |                       Value                       |
+-------------+---------------------------------------------------+
|   adminurl  |        http://192.168.128.55:8080/                |
|      id     |          9e3ce428f82b40d38922f242c095982e         |
| internalurl | http://192.168.128.55:8080/v1/AUTH_%(tenant_id)s  |
|  publicurl  | http://192.168.128.55:8080/v1/AUTH_%(tenant_id)s  |
|    region   |                     regionOne                     |
|  service_id |          eede9296683e4b5ebfa13f5166375ef6         |
+-------------+---------------------------------------------------+
5. Create the configuration directory on all Swift nodes (every node except the haproxy and keystone nodes):
# mkdir -p /etc/swift
# chown -R swift:swift /etc/swift
6. Copy /etc/swift/swift.conf from the proxy node to all Swift nodes (again, every node except the haproxy and keystone nodes).
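swift.conf carries the ring hash suffix, which must be identical on every Swift node and must never change once data is stored. A minimal sketch (the suffix value is a placeholder; generate your own random secret), plus a copy loop using the hostnames from the hosts file above:
# cat /etc/swift/swift.conf
[swift-hash]
# placeholder value; use your own random string, identical on all nodes
swift_hash_path_suffix = changeme_random_suffix

# push the file from the proxy node to the remaining Swift nodes
# for h in swift_proxy02 swift_storage01 swift_storage02 swift_storage03; do
#     scp /etc/swift/swift.conf $h:/etc/swift/
# done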

IV: Install and configure the storage nodes
【Storage node 1】
1. Install the required packages:
# yum install openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd
2. Set up an XFS filesystem for Swift storage.
Add a disk. If no spare disk is available, you can simulate one with a loop device instead:
# dd if=/dev/zero of=/home/object-swift bs=1 count=0 seek=100G
# losetup /dev/loop2 /home/object-swift
# echo "losetup /dev/loop2 /home/object-swift" >> /etc/rc.local
Then partition, format, and mount the disk (shown here with a real disk, /dev/sdb):
# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1
# echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
# mkdir -p /srv/node/sdb1
# mount /srv/node/sdb1
# chown -R swift:swift /srv/node
3. Create /etc/rsyncd.conf:
uid = swift
gid = swift
log file = /var/log/rsyncd.log 
pid file = /var/run/rsyncd.pid
address = 10.7.0.126 #  Replication Network
[account]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Note: the rsync service requires no authentication, so run it only on the local, private replication network.

4. Edit /etc/xinetd.d/rsync and set:
  disable = no
5. Start the xinetd service:
# service xinetd start
6. Create the recon directory and set its ownership:
# mkdir -p /var/swift/recon
# chown -R swift:swift /var/swift/recon
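To confirm the modules are exported, query the rsync daemon over the replication network (a quick check from any host that can reach 10.7.0.126); it should list the three modules defined above:
# rsync rsync://10.7.0.126/
account
container
object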
【Storage node 2】
1. Install the required packages:
# yum install openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd
2. Set up an XFS filesystem for Swift storage.
Add a disk. If no spare disk is available, you can simulate one with a loop device instead:
# dd if=/dev/zero of=/home/object-swift bs=1 count=0 seek=100G
# losetup /dev/loop2 /home/object-swift
# echo "losetup /dev/loop2 /home/object-swift" >> /etc/rc.local
Then partition, format, and mount the disk (shown here with a real disk, /dev/sdb):
# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1
# echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
# mkdir -p /srv/node/sdb1
# mount /srv/node/sdb1
# chown -R swift:swift /srv/node
3. Create /etc/rsyncd.conf:
uid = swift
gid = swift
log file = /var/log/rsyncd.log 
pid file = /var/run/rsyncd.pid
address = 10.7.0.127 #  Replication Network
[account]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Note: the rsync service requires no authentication, so run it only on the local, private replication network.

4. Edit /etc/xinetd.d/rsync and set:
  disable = no
5. Start the xinetd service:
# service xinetd start
6. Create the recon directory and set its ownership:
# mkdir -p /var/swift/recon
# chown -R swift:swift /var/swift/recon
【Storage node 3】
1. Install the required packages:
# yum install openstack-swift-account openstack-swift-container openstack-swift-object xfsprogs xinetd
2. Set up an XFS filesystem for Swift storage.
Add a disk. If no spare disk is available, you can simulate one with a loop device instead:
# dd if=/dev/zero of=/home/object-swift bs=1 count=0 seek=100G
# losetup /dev/loop2 /home/object-swift
# echo "losetup /dev/loop2 /home/object-swift" >> /etc/rc.local
Then partition, format, and mount the disk (shown here with a real disk, /dev/sdb):
# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1
# echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
# mkdir -p /srv/node/sdb1
# mount /srv/node/sdb1
# chown -R swift:swift /srv/node
3. Create /etc/rsyncd.conf:
uid = swift
gid = swift
log file = /var/log/rsyncd.log 
pid file = /var/run/rsyncd.pid
address = 10.7.0.128 #  Replication Network
[account]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/account.lock
[container]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/container.lock
[object]
max connections = 8
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
Note: the rsync service requires no authentication, so run it only on the local, private replication network.

4. Edit /etc/xinetd.d/rsync and set:
  disable = no
5. Start the xinetd service:
# service xinetd start
6. Create the recon directory and set its ownership:
# mkdir -p /var/swift/recon
# chown -R swift:swift /var/swift/recon

V: Install and configure the proxy nodes
【swift_proxy01】
1. Install the swift-proxy service:
# yum install openstack-swift-proxy memcached python-swiftclient python-keystone-auth-token
2. Configure the memcached listening address:
# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.128.53"
3. Start the service and enable it at boot:
# service memcached start
# chkconfig memcached on
4. Edit /etc/swift/proxy-server.conf:
# cat /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
workers = 8
user = swift

[pipeline:main]
pipeline = healthcheck cache authtoken keystone proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.128.53:11211,192.168.128.54:11211

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = admin, SwiftOperator
is_admin = true
cache = swift.cache

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = service
admin_user = swift
admin_password = password 
auth_host = controller 
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-swift
# cat /etc/swift/object-expirer.conf
[DEFAULT]

[object-expirer]
# auto_create_account_prefix = .

[pipeline:main]
pipeline = catch_errors cache proxy-server

[app:proxy-server]
use = egg:swift#proxy

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.128.53:11211,192.168.128.54:11211
# cat /etc/swift/container-server.conf
[DEFAULT]
#bind_ip = 127.0.0.1
bind_port = 6001
workers = 2

[pipeline:main]
pipeline = container-server

[app:container-server]
use = egg:swift#container

[container-replicator]

[container-updater]

[container-auditor]

[container-sync]
5. Create the account, container, and object rings, and add each storage node's device to them (a simple script):
# cat test01.sh
cd /etc/swift
swift-ring-builder account.builder create 18 3 1  
swift-ring-builder container.builder create 18 3 1  
swift-ring-builder object.builder create 18 3 1  
#### swift-storage01
swift-ring-builder account.builder add z1-10.6.0.126:6002/sdb1 100
swift-ring-builder container.builder add z1-10.6.0.126:6001/sdb1 100
swift-ring-builder object.builder add z1-10.6.0.126:6000/sdb1 100
#### swift-storage02
swift-ring-builder account.builder add z2-10.6.0.127:6002/sdb1 100
swift-ring-builder container.builder add z2-10.6.0.127:6001/sdb1 100
swift-ring-builder object.builder add z2-10.6.0.127:6000/sdb1 100
#### swift-storage03
swift-ring-builder account.builder add z3-10.6.0.128:6002/sdb1 100
swift-ring-builder container.builder add z3-10.6.0.128:6001/sdb1 100
swift-ring-builder object.builder add z3-10.6.0.128:6000/sdb1 100

swift-ring-builder account.builder      
swift-ring-builder container.builder      
swift-ring-builder object.builder      
swift-ring-builder account.builder rebalance     
swift-ring-builder container.builder rebalance     
swift-ring-builder object.builder rebalance   
6. Copy account.ring.gz, container.ring.gz, and object.ring.gz to every storage node and to swift_proxy02, as sketched below.
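A simple loop for this copy step (a sketch, using the hostnames from the hosts file above):
# cd /etc/swift
# for h in swift_proxy02 swift_storage01 swift_storage02 swift_storage03; do
#     scp account.ring.gz container.ring.gz object.ring.gz $h:/etc/swift/
# done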
7.    Start the Proxy service and configure it to start when the system boots:
#  service openstack-swift-proxy start 
# chkconfig openstack-swift-proxy on
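With the proxy running, the healthcheck middleware (first in the pipeline above) should answer directly on the node; a quick check:
# curl http://192.168.128.53:8080/healthcheck
OK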
【swift_proxy02】
1. Install the swift-proxy service:
# yum install openstack-swift-proxy memcached python-swiftclient python-keystone-auth-token
2. Configure the memcached listening address:
# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.128.54"
3. Start the service and enable it at boot:
# service memcached start
# chkconfig memcached on
4. Edit /etc/swift/proxy-server.conf:
# cat /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
workers = 8
user = swift

[pipeline:main]
pipeline = healthcheck cache authtoken keystone proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.128.53:11211,192.168.128.54:11211

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = admin, SwiftOperator
is_admin = true
cache = swift.cache

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = service
admin_user = swift
admin_password = password 
auth_host = controller 
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-swift
# cat /etc/swift/object-expirer.conf
[DEFAULT]

[object-expirer]
# auto_create_account_prefix = .

[pipeline:main]
pipeline = catch_errors cache proxy-server

[app:proxy-server]
use = egg:swift#proxy

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.128.53:11211,192.168.128.54:11211
# cat /etc/swift/container-server.conf
[DEFAULT]
#bind_ip = 127.0.0.1
bind_port = 6001
workers = 2

[pipeline:main]
pipeline = container-server

[app:container-server]
use = egg:swift#container

[container-replicator]

[container-updater]

[container-auditor]

[container-sync]
5. Start the Proxy service and configure it to start when the system boots:
#  service openstack-swift-proxy start 
# chkconfig openstack-swift-proxy on

VI: Start the services on the storage nodes
Create a simple script that starts all Swift services and enables them at boot, then run it on every storage node:
# cat restart_swift.sh
#!/bin/bash
cd /etc/init.d
if [ "$#" == "1" ];then
  Act=$1
  for service in openstack-swift*
  do
   service $service $Act 
   chkconfig $service on
  done
else
  echo "Usage: $0 {start|stop|restart|status}"
fi
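
Usage (a quick example):
# chmod +x restart_swift.sh
# ./restart_swift.sh start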

Note
To start all swift services at once, run the command:
# swift-init all start
VII: Install the HA nodes

【haproxy01 node】
1. Install the haproxy and keepalived packages:
# yum -y install keepalived haproxy
2. Configure haproxy:
# cd /etc/haproxy/
=======================================
# My own configuration
# cat haproxy.cfg
global                          # global settings
       log 127.0.0.1   local0          # log locally via the local0 syslog facility
       #log loghost    local0 info
       maxconn 4096                    # maximum concurrent connections
       uid haproxy                     # run as this uid
       gid haproxy                     # run as this gid
       daemon                          # run haproxy in the background
       nbproc 2                        # start 2 haproxy processes
       pidfile /var/run/haproxy.pid    # write all process ids to this pid file
       #debug
       #quiet

defaults                        # default settings
       #log    global
       log     127.0.0.1       local3  # log destination
       mode    http                    # default mode; set to tcp for layer-4 forwarding
       option  httplog                 # use the HTTP log format
       option  dontlognull
       option  forwardfor              # add X-Forwarded-For so backends see the real client IP
       option  httpclose               # close the connection after each request (no keep-alive)
       retries 3                       # mark a server down after 3 failed connection attempts
       option  redispatch              # resend to a healthy server when the assigned one is down
       maxconn 2000                    # maximum concurrent connections
       contimeout      5000            # connect timeout (ms)
       clitimeout      50000           # client timeout (ms)
       srvtimeout      50000           # server timeout (ms)
 
 
 
frontend http-in                        # frontend
       bind 192.168.128.55:8080
       mode    http
       option  httplog
       log     global
       default_backend swift_proxy      # default backend pool

backend swift_proxy                     # backend
       balance roundrobin               # load-balancing algorithm
       server  swift_proxy01 192.168.128.53:8080 check inter 2000 rise 3 fall 5
       server  swift_proxy02 192.168.128.54:8080 check inter 2000 rise 3 fall 5
 
listen admin_stats
   bind 192.168.128.55:1080
   mode http
   log 127.0.0.1 local2 err
   stats refresh 30s
   stats uri /admin?stats
   stats auth admin:admin
 
#======================================
# A colleague's configuration
# cat haproxy.cfg
  # This file managed by Puppet
global
  chroot  /var/lib/haproxy
  daemon 
  group  haproxy
  log  192.168.128.55 local0
  maxconn  4000
  pidfile  /var/run/haproxy.pid
  stats  socket /var/lib/haproxy/stats
  user  haproxy

defaults
  log  global
  maxconn  8000
  option  redispatch
  retries  3
  stats  enable
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 10s
  timeout  client 1m
  timeout  server 1m
  timeout  check 10s

listen swift_proxy
  bind 192.168.128.55:8080
  #balance  source
  #option  tcpka
  #option  httpchk
  #option  tcplog
  mode    http
  stats   enable
#  stats   auth username:password
  balance roundrobin
  option  httpchk HEAD /healthcheck HTTP/1.0
  option  forwardfor
  option  httpclose
  server swiftproxy01 192.168.128.53:8080 check inter 2000 rise 2 fall 5
  server swiftproxy02 192.168.128.54:8080 check inter 2000 rise 2 fall 5
listen admin_stats
   bind 192.168.128.55:1080
   mode http
   log 127.0.0.1 local2 err
   stats refresh 30s
   stats uri /admin?stats
   stats auth admin:admin
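
Both configurations bind haproxy to the VIP 192.168.128.55. On the node that does not currently hold the VIP, haproxy can only bind to it if non-local binds are allowed; a sketch of the sysctl to set on both haproxy nodes (assuming it is not already enabled):
# echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
# sysctl -p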

3. Configure keepalived:
  # cd /etc/keepalived/
# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id swift-proxy
}


vrrp_instance VI_1 {
    state MASTER      
    interface eth0
    virtual_router_id 51
    priority 200     
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    track_script {
        chk_http_port  
    }
    virtual_ipaddress {
        192.168.128.55
    }
}
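
The track_script block above references chk_http_port, but its definition is not shown. A minimal sketch of such a vrrp_script, placed above the vrrp_instance block (the check command is an assumption; adapt it to what you want to monitor):
vrrp_script chk_http_port {
    script "killall -0 haproxy"   # succeeds while a haproxy process exists
    interval 2
    weight -40                    # lower the priority when the check fails
}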

4. Start keepalived and haproxy, and enable them at boot:
# service keepalived start
# service haproxy start
# chkconfig keepalived on
# chkconfig haproxy on
5. Test haproxy:
http://192.168.128.55:1080/admin?stats
# Check the virtual IP:
# ip addr show eth0
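
To verify failover, stop keepalived on the master and watch the VIP move to haproxy02; a rough test sequence:
# service keepalived stop    # on haproxy01
# ip addr show eth0          # on haproxy02: 192.168.128.55 should now appear
# swift stat                 # requests through the VIP should keep working
# service keepalived start   # on haproxy01: the VIP moves back (higher priority)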
 
【haproxy02 node】
1. Install the haproxy and keepalived packages:
# yum -y install keepalived haproxy
2. Configure haproxy:
# mkdir -p /etc/haproxy
=======================================
# My own configuration
# cat haproxy.cfg
global                          # global settings
       log 127.0.0.1   local0          # log locally via the local0 syslog facility
       #log loghost    local0 info
       maxconn 4096                    # maximum concurrent connections
       uid haproxy                     # run as this uid
       gid haproxy                     # run as this gid
       daemon                          # run haproxy in the background
       nbproc 2                        # start 2 haproxy processes
       pidfile /var/run/haproxy.pid    # write all process ids to this pid file
       #debug
       #quiet

defaults                        # default settings
       #log    global
       log     127.0.0.1       local3  # log destination
       mode    http                    # default mode; set to tcp for layer-4 forwarding
       option  httplog                 # use the HTTP log format
       option  dontlognull
       option  forwardfor              # add X-Forwarded-For so backends see the real client IP
       option  httpclose               # close the connection after each request (no keep-alive)
       retries 3                       # mark a server down after 3 failed connection attempts
       option  redispatch              # resend to a healthy server when the assigned one is down
       maxconn 2000                    # maximum concurrent connections
       contimeout      5000            # connect timeout (ms)
       clitimeout      50000           # client timeout (ms)
       srvtimeout      50000           # server timeout (ms)
 
 
 
frontend http-in                        # frontend
       bind 192.168.128.55:8080
       mode    http
       option  httplog
       log     global
       default_backend swift_proxy      # default backend pool

backend swift_proxy                     # backend
       balance roundrobin               # load-balancing algorithm
       server  swift_proxy01 192.168.128.53:8080 check inter 2000 rise 3 fall 5
       server  swift_proxy02 192.168.128.54:8080 check inter 2000 rise 3 fall 5
 
listen admin_stats
   bind 192.168.128.55:1080
   mode http
   log 127.0.0.1 local2 err
   stats refresh 30s
   stats uri /admin?stats
   stats auth admin:admin
 
#======================================
# A colleague's configuration
# cat haproxy.cfg
  # This file managed by Puppet
global
  chroot  /var/lib/haproxy
  daemon 
  group  haproxy
  log  192.168.128.55 local0
  maxconn  4000
  pidfile  /var/run/haproxy.pid
  stats  socket /var/lib/haproxy/stats
  user  haproxy

defaults
  log  global
  maxconn  8000
  option  redispatch
  retries  3
  stats  enable
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 10s
  timeout  client 1m
  timeout  server 1m
  timeout  check 10s

listen swift_proxy
  bind 192.168.128.55:8080
  #balance  source
  #option  tcpka
  #option  httpchk
  #option  tcplog
  mode    http
  stats   enable
#  stats   auth username:password
  balance roundrobin
  option  httpchk HEAD /healthcheck HTTP/1.0
  option  forwardfor
  option  httpclose
  server swiftproxy01 192.168.128.53:8080 check inter 2000 rise 2 fall 5
  server swiftproxy02 192.168.128.54:8080 check inter 2000 rise 2 fall 5
listen admin_stats
   bind 192.168.128.55:1080
   mode http
   log 127.0.0.1 local2 err
   stats refresh 30s
   stats uri /admin?stats
   stats auth admin:admin

 
3. Configure keepalived:
  # cd /etc/keepalived/
# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id swift-proxy
}

vrrp_instance VI_1 {
    state BACKUP      
    interface eth0
    virtual_router_id 51
    priority 180     
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    track_script {
        chk_http_port  
    }
    virtual_ipaddress {
        192.168.128.55
    }
}
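
As on haproxy01, the track_script block here needs a matching vrrp_script chk_http_port definition; reuse the sketch from the haproxy01 section.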

4. Start keepalived and haproxy, and enable them at boot:
# service keepalived start
# service haproxy start
# chkconfig keepalived on
# chkconfig haproxy on
5. Test haproxy:
http://192.168.128.55:1080/admin?stats

# Check the virtual IP:
# ip addr show eth0


VIII: Verify the storage service
Install the Swift client tool, then load the credentials and query the account:
# source swiftrc
# swift stat
Account: AUTH_95d2477adc90453ea13d0b7d3571acaf
Containers: 3
Objects: 4
Bytes: 28889
Accept-Ranges: bytes
X-Timestamp: 1400134126.35649
X-Trans-Id: tx9cb1d3f2cc224cfc9344a-0053758b67
Content-Type: text/plain; charset=utf-8
# Upload a file:
# touch test1.txt
# swift upload swift test1.txt
# Download the container:
# swift download swift
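
The checks above exercise whichever endpoint swiftrc points at; it is worth confirming the full path through the haproxy frontend on the VIP as well (a quick probe):
# curl http://192.168.128.55:8080/healthcheck
OK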
 
=================
Alternatively, pass the credentials directly on the command line:

swift -U admin:admin -K password -A http://192.168.128.59:35357/v2.0 -V 2.0 list

swift -U admin:admin -K password -A http://192.168.128.59:35357/v2.0 -V 2.0 upload test test.txt

swift -U admin:admin -K password -A http://192.168.128.59:35357/v2.0 -V 2.0 list test --lh

swift -U admin:admin -K password -A http://192.168.128.59:35357/v2.0 -V 2.0 download test









 

