ceph-deploy Source Code Analysis (Part 2): the new Module (repost)

Original: http://www.hl10502.com/2017/06/19/ceph-deploy-new/#more

ceph-deploy's new.py module starts the deployment of a new cluster by creating the ceph.conf and ceph.mon.keyring files.

 

The format of the new subcommand is as follows:

ceph-deploy new [-h] [--no-ssh-copykey] [--fsid FSID]
                       [--cluster-network CLUSTER_NETWORK]
                       [--public-network PUBLIC_NETWORK]
                       MON [MON ...]

 

Cluster deployment

The make function has a priority of 10; the default handler registered for the subcommand is the new function.

@priority(10)
def make(parser):
    """
    Start deploying a new cluster, and write a CLUSTER.conf and keyring for it.
    """
    parser.add_argument(
        'mon',
        metavar='MON',
        nargs='+',
        help='initial monitor hostname, fqdn, or hostname:fqdn pair',
        type=arg_validators.Hostname(),
        )
    parser.add_argument(
        '--no-ssh-copykey',
        dest='ssh_copykey',
        action='store_false',
        default=True,
        help='do not attempt to copy SSH keys',
    )
    parser.add_argument(
        '--fsid',
        dest='fsid',
        help='provide an alternate FSID for ceph.conf generation',
    )
    parser.add_argument(
        '--cluster-network',
        help='specify the (internal) cluster network',
        type=arg_validators.Subnet(),
    )
    parser.add_argument(
        '--public-network',
        help='specify the public network for a cluster',
        type=arg_validators.Subnet(),
    )
    parser.set_defaults(
        func=new,
        )
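The option wiring above can be reproduced with the stdlib argparse alone. In this sketch the `arg_validators.Hostname()` and `arg_validators.Subnet()` types are replaced with plain strings, since those validators live inside ceph-deploy:

```python
import argparse

def make(parser):
    """Simplified sketch of ceph-deploy's make(): same flags, no custom validators."""
    parser.add_argument('mon', metavar='MON', nargs='+',
                        help='initial monitor hostname, fqdn, or hostname:fqdn pair')
    parser.add_argument('--no-ssh-copykey', dest='ssh_copykey',
                        action='store_false', default=True,
                        help='do not attempt to copy SSH keys')
    parser.add_argument('--fsid', dest='fsid',
                        help='provide an alternate FSID for ceph.conf generation')
    parser.add_argument('--cluster-network',
                        help='specify the (internal) cluster network')
    parser.add_argument('--public-network',
                        help='specify the public network for a cluster')

parser = argparse.ArgumentParser(prog='ceph-deploy new')
make(parser)
# parse a sample command line: note how --no-ssh-copykey flips ssh_copykey to False
args = parser.parse_args(['--no-ssh-copykey', '--public-network', '10.0.0.0/24',
                          'mon1', 'mon2'])
```

Because `--no-ssh-copykey` uses `action='store_false'` with `default=True`, SSH key copying is opt-out rather than opt-in.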

 

Deploying a new cluster

The new function starts deploying a new cluster:

  • It creates the ceph.conf file and writes fsid, mon_initial_members, mon_host, auth_cluster_required, auth_service_required and auth_client_required to the [global] section; if public_network or cluster_network was passed on the command line, it is written to the config file as well
  • It calls the new_mon_keyring function to create the ceph.mon.keyring file
def new(args):
    if args.ceph_conf:
        raise RuntimeError('will not create a Ceph conf file if attempting to re-use with `--ceph-conf` flag')
    LOG.debug('Creating new cluster named %s', args.cluster)
    # generate the configuration
    cfg = conf.ceph.CephConf()
    cfg.add_section('global')
    # take the fsid from the arguments, or generate one
    fsid = args.fsid or uuid.uuid4()
    cfg.set('global', 'fsid', str(fsid))
    # if networks were passed in, lets set them in the
    # global section
    if args.public_network:
        cfg.set('global', 'public network', str(args.public_network))
    if args.cluster_network:
        cfg.set('global', 'cluster network', str(args.cluster_network))
    # monitor members
    mon_initial_members = []
    # monitor hosts
    mon_host = []
    # loop over the monitor hosts
    for (name, host) in mon_hosts(args.mon):
        # Try to ensure we can ssh in properly before anything else
        # copy the ssh key
        if args.ssh_copykey:
            ssh_copy_keys(host, args.username)
        # Now get the non-local IPs from the remote node
        # connect to the remote host
        distro = hosts.get(host, username=args.username)
        # get the host's IP addresses
        remote_ips = net.ip_addresses(distro.conn)
        # custom cluster names on sysvinit hosts won't work
        if distro.init == 'sysvinit' and args.cluster != 'ceph':
            LOG.error('custom cluster names are not supported on sysvinit hosts')
            raise exc.ClusterNameError(
                'host %s does not support custom cluster names' % host
            )
        distro.conn.exit()
        # Validate subnets if we received any
        if args.public_network or args.cluster_network:
            # validate the IP addresses against the subnets
            validate_host_ip(remote_ips, [args.public_network, args.cluster_network])
        # Pick the IP that matches the public cluster (if we were told to do
        # so) otherwise pick the first, non-local IP
        LOG.debug('Resolving host %s', host)
        if args.public_network:
            ip = get_public_network_ip(remote_ips, args.public_network)
        else:
            ip = net.get_nonlocal_ip(host)
        LOG.debug('Monitor %s at %s', name, ip)
        mon_initial_members.append(name)
        try:
            socket.inet_pton(socket.AF_INET6, ip)
            mon_host.append("[" + ip + "]")
            LOG.info('Monitors are IPv6, binding Messenger traffic on IPv6')
            cfg.set('global', 'ms bind ipv6', 'true')
        except socket.error:
            mon_host.append(ip)
    LOG.debug('Monitor initial members are %s', mon_initial_members)
    LOG.debug('Monitor addrs are %s', mon_host)
    # multiple mon_initial_members are joined with a comma and a space
    cfg.set('global', 'mon initial members', ', '.join(mon_initial_members))
    # no spaces here, see http://tracker.newdream.net/issues/3145
    # multiple mon_host entries are joined with commas and no spaces
    cfg.set('global', 'mon host', ','.join(mon_host))
    # override undesirable defaults, needed until bobtail
    # http://tracker.ceph.com/issues/6788
    cfg.set('global', 'auth cluster required', 'cephx')
    cfg.set('global', 'auth service required', 'cephx')
    cfg.set('global', 'auth client required', 'cephx')
    path = '{name}.conf'.format(
        name=args.cluster,
        )
    # create the mon keyring
    new_mon_keyring(args)
    LOG.debug('Writing initial config to %s...', path)
    tmp = '%s.tmp' % path
    with open(tmp, 'w') as f:
        # write out the ceph config file
        cfg.write(f)
    try:
        os.rename(tmp, path)
    except OSError as e:
        if e.errno == errno.EEXIST:
            raise exc.ClusterExistsError(path)
        else:
            raise
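Stripped of the SSH and remote-host logic, the config generation in new() boils down to the following sketch, which uses the stdlib configparser in place of ceph-deploy's conf.ceph.CephConf (the monitor names and addresses are made-up example values):

```python
import io
import uuid
import configparser

def build_conf(fsid, mon_names, mon_ips, public_network=None, cluster_network=None):
    """Minimal analogue of new(): assemble the [global] section of a ceph.conf."""
    cfg = configparser.ConfigParser()
    cfg.add_section('global')
    cfg.set('global', 'fsid', str(fsid))
    if public_network:
        cfg.set('global', 'public network', public_network)
    if cluster_network:
        cfg.set('global', 'cluster network', cluster_network)
    # comma+space between member names, but no spaces between addresses
    # (see http://tracker.newdream.net/issues/3145)
    cfg.set('global', 'mon initial members', ', '.join(mon_names))
    cfg.set('global', 'mon host', ','.join(mon_ips))
    for opt in ('auth cluster required', 'auth service required',
                'auth client required'):
        cfg.set('global', opt, 'cephx')
    buf = io.StringIO()
    cfg.write(buf)
    return buf.getvalue()

text = build_conf(uuid.uuid4(),
                  ['ceph-231', 'ceph-232'],
                  ['192.168.217.231', '192.168.217.232'],
                  public_network='192.168.217.0/24')
```

The resulting text has the same shape as the ceph.conf shown later in the manual-deployment section.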

Note:
multiple mon_initial_members entries are joined with a comma and a space;
multiple mon_host entries are joined with commas and no spaces.
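The inet_pton check inside the loop is what decides whether a monitor address gets wrapped in brackets (IPv6) or used as-is (IPv4); a standalone sketch of that branch:

```python
import socket

def format_mon_addr(ip):
    """Bracket IPv6 addresses the way new() does; IPv4 passes through."""
    try:
        socket.inet_pton(socket.AF_INET6, ip)
        # a valid IPv6 address: new() would also set 'ms bind ipv6 = true'
        return '[' + ip + ']'
    except (socket.error, OSError):
        return ip
```

In Python 3 `socket.error` is an alias of OSError, so either exception name works here.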

Creating the ceph.mon.keyring file

The new_mon_keyring function creates the ceph.mon.keyring file.

def new_mon_keyring(args):
    LOG.debug('Creating a random mon key...')
    mon_keyring = '[mon.]\nkey = %s\ncaps mon = allow *\n' % generate_auth_key()
    keypath = '{name}.mon.keyring'.format(
        name=args.cluster,
        )
    oldmask = os.umask(0o77)
    LOG.debug('Writing monitor keyring to %s...', keypath)
    try:
        tmp = '%s.tmp' % keypath
        with open(tmp, 'w', 0o600) as f:
            f.write(mon_keyring)
        try:
            os.rename(tmp, keypath)
        except OSError as e:
            if e.errno == errno.EEXIST:
                raise exc.ClusterExistsError(keypath)
            else:
                raise
    finally:
        os.umask(oldmask)
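The generate_auth_key helper called above is not shown in the article. A plausible reimplementation, assuming the usual cephx secret layout (a 12-byte little-endian header of key type, creation time, and key length, followed by 16 random bytes, all base64-encoded), would look like this; the exact header layout is an assumption, not quoted ceph-deploy source:

```python
import os
import time
import base64
import struct

def generate_auth_key():
    """Sketch of a cephx secret generator (layout assumed, not quoted source):
    little-endian header (type, created secs/nsecs, key length) + 16 random bytes."""
    key = os.urandom(16)
    header = struct.pack(
        '<hiih',
        1,                 # le16: key type (AES)
        int(time.time()),  # le32: created, seconds
        0,                 # le32: created, nanoseconds
        len(key),          # le16: key length
    )
    return base64.b64encode(header + key).decode('ascii')

secret = generate_auth_key()
```

A 12-byte header plus a 16-byte key base64-encodes to a 40-character string, which matches the shape of the key shown in the keyring example below (e.g. `AQCzxEhZ...Lg==`).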

 

Manual cluster deployment

Taking a ceph-deploy run, ceph-deploy new ceph-231, as an example, the corresponding manual steps are shown below.

Obtaining the IP address

Run the following commands; the IP address 192.168.217.231 is extracted from their output via regular expressions.

[root@ceph-231 ceph-cluster]# /usr/sbin/ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP mode DEFAULT qlen 1000
    link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
    link/ether 86:f4:14:e3:1b:b2 brd ff:ff:ff:ff:ff:ff
4: xenbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT
    link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff

 

[root@ceph-231 ceph-cluster]# /usr/sbin/ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
    link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 86:f4:14:e3:1b:b2 brd ff:ff:ff:ff:ff:ff
4: xenbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff
    inet 192.168.217.231/24 brd 192.168.217.255 scope global xenbr0
       valid_lft forever preferred_lft forever
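ceph-deploy's net.ip_addresses runs this parsing on the remote host. A simplified local version that pulls IPv4 addresses out of `ip addr show` output and drops loopback (the regex is an approximation, not the real parser):

```python
import re

def ipv4_addresses(ip_addr_output):
    """Extract non-loopback IPv4 addresses from `ip addr show` output."""
    ips = re.findall(r'inet (\d{1,3}(?:\.\d{1,3}){3})/\d+', ip_addr_output)
    return [ip for ip in ips if not ip.startswith('127.')]

# abbreviated sample of the output shown above
sample = """\
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
    inet 127.0.0.1/8 scope host lo
4: xenbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet 192.168.217.231/24 brd 192.168.217.255 scope global xenbr0
"""
```

Applied to the transcript above, this leaves exactly the address 192.168.217.231 that ends up in mon_host.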

 

Creating ceph.conf

[root@ceph-231 ceph-cluster]# vi ceph.conf
[global]
fsid = a3b9b0aa-01ab-4e1b-bba3-6f5317b0795b
mon_initial_members = ceph-231
mon_host = 192.168.217.231
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.217.0/24
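When --public-network is given, validate_host_ip checks that at least one of the remote host's IPs falls inside the subnet. The stdlib ipaddress module expresses the same check; this is a sketch, not the ceph-deploy implementation:

```python
import ipaddress

def host_in_subnet(remote_ips, subnet):
    """True if any of the host's IPs belongs to the given subnet."""
    net = ipaddress.ip_network(subnet)
    return any(ipaddress.ip_address(ip) in net for ip in remote_ips)
```

For example, 192.168.217.231 lies inside 192.168.217.0/24, so validation for the host above would succeed.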


Creating ceph.mon.keyring

The keyring can be generated with the ceph-authtool command:

[root@ceph-231 ceph-cluster]# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring
[root@ceph-231 ~]# cat /tmp/ceph.mon.keyring
[mon.]
        key = AQCzxEhZC7tICxAAuHK5GipD96enMuhv82CCLg==
        caps mon = "allow *"

Copy the contents of /tmp/ceph.mon.keyring into ceph.mon.keyring:

[root@ceph-231 ceph-cluster]# vi ceph.mon.keyring
[mon.]
key = AQCzxEhZC7tICxAAuHK5GipD96enMuhv82CCLg==
caps mon = allow *
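Writing the keyring by hand can mimic new_mon_keyring's safety measures: a restrictive umask so the file is created with mode 0600, a temp file, and an atomic rename. A sketch:

```python
import os
import tempfile

def write_keyring(path, contents):
    """Write a keyring with 0600 permissions via temp file + atomic rename."""
    oldmask = os.umask(0o77)   # new files come out mode 0600
    try:
        tmp = path + '.tmp'
        with open(tmp, 'w') as f:
            f.write(contents)
        os.rename(tmp, path)   # atomic on POSIX filesystems
    finally:
        os.umask(oldmask)      # always restore the previous umask

workdir = tempfile.mkdtemp()
keyring_path = os.path.join(workdir, 'ceph.mon.keyring')
write_keyring(keyring_path, '[mon.]\nkey = AQ...\ncaps mon = allow *\n')
```

The restore in the `finally` block matters: os.umask is process-wide, so leaving it tightened would affect every file the process creates afterwards.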

 

Reposted from: https://my.oschina.net/banwh/blog/1519173
