ceph-deploy 2.0.2 documentation

CEPH-DEPLOY – DEPLOY CEPH WITH MINIMAL INFRASTRUCTURE

ceph-deploy is a way to deploy Ceph relying on just SSH access to the servers, sudo, and some Python. It runs fully on your workstation, requiring no servers, databases, or anything like that.

If you set up and tear down Ceph clusters a lot, and want minimal extra bureaucracy, this is for you.

WHAT THIS TOOL IS NOT

It is not a generic deployment system. It is only for Ceph, and is designed for users who want to quickly get Ceph running with sensible initial settings without the overhead of installing Chef, Puppet, or Juju.

It does not handle client configuration beyond pushing the Ceph config file. Users who want fine-grained control over security settings, partitions, or directory locations should use a tool such as Chef or Puppet.

INSTALLATION

Depending on how you intend to use ceph-deploy, you might want to look into the different ways to install it. For automation, you might want to bootstrap directly from the source tree. Regular users of ceph-deploy would probably install from the OS packages or from the Python Package Index.

PYTHON PACKAGE INDEX

If you are familiar with Python install tools (like pip and easy_install) you can easily install ceph-deploy like:

pip install ceph-deploy

It should grab all the dependencies for you and install into the current user’s environment.

We highly recommend using virtualenv and installing dependencies in a contained way.

DEB

All new releases of ceph-deploy are pushed to all ceph DEB release repos.

The DEB release repos are found at:

http://ceph.com/debian-{release}
http://ceph.com/debian-testing

This means, for example, that installing ceph-deploy from http://ceph.com/debian-giant will install the same version as from http://ceph.com/debian-firefly or http://ceph.com/debian-testing.

RPM

All new releases of ceph-deploy are pushed to all ceph RPM release repos.

The RPM release repos are found at:

http://ceph.com/rpm-{release}
http://ceph.com/rpm-testing

Make sure you add the proper one for your distribution (e.g. el7 vs rhel7).

This means, for example, that installing ceph-deploy from http://ceph.com/rpm-giant will install the same version as from http://ceph.com/rpm-firefly or http://ceph.com/rpm-testing.

BOOTSTRAPPING

To get the source tree ready for use, run this once:

./bootstrap

You can symlink the ceph-deploy script from the source tree to somewhere convenient (like ~/bin), add the source directory to PATH, or just always type the full path to ceph-deploy.
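
For example, assuming the source tree lives at ~/src/ceph-deploy (a hypothetical path):

mkdir -p ~/bin
ln -s ~/src/ceph-deploy/ceph-deploy ~/bin/ceph-deploy
# or, for the current shell session only:
export PATH="$HOME/src/ceph-deploy:$PATH"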

SSH AND REMOTE CONNECTIONS

ceph-deploy will attempt to connect via SSH to hosts when the hostnames do not match the current host’s hostname. For example, if you are connecting to host node1 it will attempt an SSH connection as long as the current host’s hostname is not node1.

ceph-deploy at a minimum requires that the machine from which the script is being run can ssh as root without password into each Ceph node.

To enable this, generate a new SSH keypair for the root user with no passphrase and place the public key (id_rsa.pub or id_dsa.pub) in:

/root/.ssh/authorized_keys

and ensure that the following lines are in the sshd config:

PermitRootLogin without-password
PubkeyAuthentication yes
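
A minimal sketch of that key setup from the admin workstation, assuming a node named node1 (repeat the second step for each node):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # new keypair with an empty passphrase
ssh-copy-id root@node1                     # appends the public key to /root/.ssh/authorized_keys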

The machine running ceph-deploy does not need to have the Ceph packages installed unless it needs to administer the cluster directly using the ceph command-line tool.

USERNAMES

When not specified, the connection will be made with the same username as the one executing ceph-deploy. This is useful if the same username is shared across all the nodes, but can be cumbersome otherwise.

A way to avoid this is to define the correct usernames to connect with in the SSH config, but you can also use the --username flag:

ceph-deploy --username ceph install node1

ceph-deploy would then use ceph@node1 to connect to that host.

The same applies to any action that requires a connection to a remote host.
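
The SSH config alternative would look like this (hostname and username are assumptions), placed in ~/.ssh/config on the machine running ceph-deploy:

Host node1
    User ceph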

MANAGING AN EXISTING CLUSTER

You can use ceph-deploy to provision nodes for an existing cluster. To grab a copy of the cluster configuration file (normally ceph.conf):

ceph-deploy config pull HOST

You will usually also want to gather the authentication keys used for that cluster:

ceph-deploy gatherkeys MONHOST

At this point you can skip the steps below that create a new cluster (you already have one) and optionally skip installation and/or monitor creation, depending on what you are trying to accomplish.
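
Put together, provisioning an additional node for an existing cluster might look like this (the hostnames mon1 and node4 and the device /dev/sdb are assumptions):

ceph-deploy config pull mon1                  # fetch the cluster's ceph.conf
ceph-deploy gatherkeys mon1                   # fetch the admin and bootstrap keyrings
ceph-deploy install node4                     # install the ceph packages on the new node
ceph-deploy osd create node4 --data /dev/sdb  # provision an OSD on it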

INSTALLING PACKAGES

For detailed installation instructions, refer to the install section.

PROXY OR FIREWALL INSTALLS

If attempting to install behind a firewall or through a proxy you can use the --no-adjust-repos flag, which tells ceph-deploy to skip any changes to the distro's repositories and go straight to package installation.

That allows an environment without internet access to point to its own repositories. Those repositories will need to be properly set up (and mirrored with all the necessary dependencies) before attempting an install.

Another alternative is to set the proxy environment variables (honored by wget) to point to the right hosts, for example:

http_proxy=http://host:port
ftp_proxy=http://host:port
https_proxy=http://host:port
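
For example, these could be set for a single invocation (the proxy host and port below are placeholders):

http_proxy=http://proxy.example.com:3128 \
https_proxy=http://proxy.example.com:3128 \
ceph-deploy install node1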

CREATING A NEW CONFIGURATION

To create a new configuration file and secret key, decide what hosts will run ceph-mon, and run:

ceph-deploy new MON [MON..]

For detailed information on the new subcommand, refer to the new section.

DEPLOYING MONITORS

To actually deploy ceph-mon to the hosts you chose, run:

ceph-deploy mon create HOST [HOST..]

Without explicit hosts listed, hosts in mon_initial_members in the config file are deployed. That is, the hosts you passed to ceph-deploy new are the default value here.

For detailed information on the mon subcommand, refer to the mon section.

GATHER KEYS

To gather authentication keys (for administering the cluster and bootstrapping new nodes) to the local directory, run:

ceph-deploy gatherkeys HOST [HOST...]

where HOST is one of the monitor hosts.

Once these keys are in the local directory, you can provision new OSDs etc.

For detailed information on the gatherkeys subcommand, refer to the gatherkeys section.

ADMIN HOSTS

To prepare a host with a ceph.conf and a ceph.client.admin.keyring so that it can administer the cluster, run:

ceph-deploy admin HOST [HOST ...]

Older versions of ceph-deploy automatically added the admin keyring to all mon nodes, making them admin nodes.

For detailed information on the admin subcommand, refer to the admin section.

DEPLOYING OSDS

To create an OSD on a remote node, run:

ceph-deploy osd create HOST --data /path/to/device

Alternatively, --data can accept a logical volume in the vg/lv format.

After that, the hosts will be running OSDs for the given data disks or logical volumes. Other OSD devices, like journals (when using --filestore) or block.db and block.wal, need to be logical volumes or GPT partitions.

Note

Partitions aren't created by this tool; they must be created beforehand.
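
Putting the preceding steps together, a minimal end-to-end sketch for a small cluster might look like this (the hostnames node1 through node3 and the data device /dev/sdb are assumptions):

ceph-deploy new node1 node2 node3             # write ceph.conf and ceph.mon.keyring
ceph-deploy install node1 node2 node3         # install the ceph packages on each host
ceph-deploy mon create-initial                # deploy monitors, wait for quorum, gather keys
ceph-deploy admin node1                       # let node1 administer the cluster
ceph-deploy osd create node1 --data /dev/sdb  # one OSD per host
ceph-deploy osd create node2 --data /dev/sdb
ceph-deploy osd create node3 --data /dev/sdb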

FORGET KEYS

The new and gatherkeys subcommands place some Ceph authentication keys in keyrings in the local directory. If you are worried about them being there for security reasons, run:

ceph-deploy forgetkeys

and they will be removed. If you need them again later to deploy additional nodes, simply re-run:

ceph-deploy gatherkeys HOST [HOST...]

and they will be retrieved from an existing monitor node.

MULTIPLE CLUSTERS

All of the above commands take a --cluster=NAME option, allowing you to manage multiple clusters conveniently from one workstation. For example:

ceph-deploy --cluster=us-west new
vi us-west.conf
ceph-deploy --cluster=us-west mon

FAQ

BEFORE ANYTHING

Make sure you have the latest version of ceph-deploy. It is actively developed and releases come out weekly (on average). The most recent versions of ceph-deploy have a --version flag you can use; otherwise check with your package manager and update if there is anything new.

WHY IS FEATURE X NOT IMPLEMENTED?

Usually, features are added when/if it is sensible for someone who wants to get started with ceph and said feature would make sense in that context. If you believe this is the case and you’ve read “what this tool is not” and still think feature X should exist in ceph-deploy, open a feature request in the ceph tracker: http://tracker.ceph.com/projects/ceph-deploy/issues

A COMMAND GAVE ME AN ERROR, WHAT IS GOING ON?

Most of the commands for ceph-deploy are meant to be run remotely on a host that you configured when creating the initial config. If a given command is not working as expected, try running the failed command on the remote host and verify the behavior there.

If the behavior on the remote host is the same, then it is probably not something wrong with ceph-deploy per se. Make sure you capture both the ceph-deploy output and the output of the command on the remote host.

ISSUES WITH MONITORS

If your monitors are not starting, make sure that the {hostname} you used when you ran ceph-deploy mon create {hostname} matches the output of hostname -s on the remote host.

Newer versions of ceph-deploy warn you if the results differ; a mismatch can prevent the monitors from reaching quorum.

NEW

This subcommand is used to generate a working ceph.conf file that will contain important information for provisioning nodes and/or adding them to a cluster.

SSH KEYS

Ideally, all nodes will be pre-configured for passwordless SSH access from the machine executing ceph-deploy, but the new subcommand can also detect and configure this automatically.

Once called, it will try to establish an SSH connection to each host passed to the new subcommand and determine whether it can connect without a password prompt.

If it can’t, it will try to copy existing keys to the remote host; if no keys exist, passwordless RSA keys will be generated for the current user and used.

This feature can be overridden in the new subcommand like:

ceph-deploy new --no-ssh-copykey

New in version 1.3.2.

CREATING A NEW CONFIGURATION

To create a new configuration file and secret key, decide what hosts will run ceph-mon, and run:

ceph-deploy new MON [MON..]

listing the hostnames of the monitors. Each MON can be

  • a simple hostname. It must be DNS resolvable without the fully qualified domain name.
  • a fully qualified domain name. The hostname is assumed to be the leading component, up to the first dot (.).
  • a HOST:FQDN pair, of both the hostname and a fully qualified domain name or IP address. For example, foo, foo.example.com, foo:something.example.com, and foo:1.2.3.4 are all valid. Note, however, that the hostname should match that configured on the host foo.

The above will create a ceph.conf and ceph.mon.keyring in your current directory.

EDIT INITIAL CLUSTER CONFIGURATION

Review the generated ceph.conf file and make sure that the mon_host setting contains the IP addresses you would like the monitors to bind to. These are the IPs that clients will initially contact to authenticate to the cluster, and they need to be reachable both by external client-facing hosts and internal cluster daemons.
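
For reference, a freshly generated ceph.conf typically contains entries along these lines (the fsid and the addresses shown are placeholders):

[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.10,192.168.1.11,192.168.1.12
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx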

--CLUSTER-NETWORK AND --PUBLIC-NETWORK

These flags are used to provide subnets so that nodes can communicate within those networks. If passed, validation will occur by looking at the remote IP addresses and making sure that at least one of those addresses is valid for the given subnet.

Those values will also be added to the generated ceph.conf. If IPs are not correct (or not in the subnets specified) an error will be raised.
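
For example (the subnets shown are assumptions):

ceph-deploy new --public-network 10.1.0.0/24 --cluster-network 10.2.0.0/24 node1 node2 node3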

New in version 1.5.13.

INSTALL

A few different distributions are supported, with flags to allow some customization when installing ceph on remote nodes.

Supported distributions:

  • Ubuntu
  • Debian
  • Fedora
  • RedHat
  • CentOS
  • Suse
  • Scientific Linux
  • Arch Linux

Before any action is taken, a platform detection call is made to make sure that the platform that will get ceph installed is supported. If the platform is not supported, no further actions will proceed and an error message will be displayed, similar to:

[ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: Mandriva

DISTRIBUTION NOTES

RPMS

On RPM-based distributions, yum-plugin-priorities is installed to make sure that upstream ceph.com repos have a higher priority than distro repos.

Because of packaging splits that are present in downstream repos that may not be present in ceph.com repos, ceph-deploy enables the check_obsoletes flag for the Yum priorities plugin.

Changed in version 1.5.22: Enable check_obsoletes by default

RHEL

When installing packages on systems running Red Hat Enterprise Linux (RHEL), ceph-deploy will not install the latest upstream release by default. On other distros, running ceph-deploy install without the --release flag will install the latest upstream release by default (i.e. firefly, giant, etc). On RHEL, the --release flag must be used if you wish to use the upstream packages hosted on http://ceph.com.

Changed in version 1.5.22: Require --release flag to get upstream packages on RHEL

SPECIFIC RELEASES

By default the latest release is assumed. This value changes when newer versions are available. If you are automating deployments it is better to specify exactly what release you need:

ceph-deploy install --release emperor {host}

Note that the --stable flag for specifying a Ceph release was deprecated in version 1.3.6 and should no longer be used.

New in version 1.4.0.

UNSTABLE RELEASES

If you need to test cutting-edge releases or a specific feature of ceph that has yet to make it into a stable release, ceph-deploy supports this as well with a couple of flags.

To get the latest development release:

ceph-deploy install --testing {host}

For a far more granular approach, you may want to specify a branch or a tag from the repository; if none is specified, it falls back to the latest commit in master:

ceph-deploy install --dev {branch or tag} {host}

BEHIND FIREWALL

For restrictive environments there are a couple of options for installing ceph.

If hosts have been customized with their own repositories and all that is needed is to proceed with an install of ceph, we can skip altering the source repositories like:

ceph-deploy install --no-adjust-repos {host}

Note that you will need to have working repositories that have all the dependencies that ceph needs. In some distributions, other repos (besides the ceph repos) will be added, like EPEL for CentOS.

However, if there is a ceph repo mirror already set up you can point to it before installation proceeds. For this specific action you will need two arguments passed in (or optionally use environment variables).

The repository URL and the GPG URL can be specified like this:

ceph-deploy install --repo-url {http mirror} --gpg-url {http gpg url} {host}

Optionally, you can use the following environment variables:

  • CEPH_DEPLOY_REPO_URL
  • CEPH_DEPLOY_GPG_URL

Those values will be used to write the ceph sources.list file (on Debian and Debian-based distros) or the ceph repo file (on RPM distros), and will skip trying to compose the right URL for the release being installed.
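
For example, the same installation using the environment variables instead of flags (the mirror URLs below are placeholders):

CEPH_DEPLOY_REPO_URL=http://mirror.example.com/ceph \
CEPH_DEPLOY_GPG_URL=http://mirror.example.com/keys/release.asc \
ceph-deploy install node1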

Note

It is currently not possible to specify what version/release is to be installed when --repo-url is used.

It is strongly suggested that both flags be provided. However, --gpg-url will default to the current key in the ceph repository:

https://download.ceph.com/keys/release.asc

New in version 1.3.3.

LOCAL MIRRORS

ceph-deploy supports local mirror installation by syncing a repository to remote servers and correctly configuring the remote hosts to install directly from those local paths (as opposed to going over the network).

The one requirement for this option to work is to have a release.asc at the top of the directory that holds the repository files.

That file is used by Ceph as the key for its signed packages and it is usually retrieved from:

https://download.ceph.com/keys/release.asc

This is how the process of getting Ceph installed from a local repository on an admin host would look:

$ ceph-deploy install --local-mirror ~/tmp/rpm-mirror/ceph.com/rpm-emperor/el6 node2
[ceph_deploy.cli][INFO  ] Invoked (1.4.1): /bin/ceph-deploy install --local-mirror /Users/alfredo/tmp/rpm-mirror/ceph.com/rpm-emperor/el6 node2
[ceph_deploy.install][DEBUG ] Installing stable version emperor on cluster ceph hosts node2
[ceph_deploy.install][DEBUG ] Detecting platform for host node2 ...
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final
[node2][INFO  ] installing ceph on node2
[node2][INFO  ] syncing file: noarch/ceph-deploy-1.3-0.noarch.rpm
[node2][INFO  ] syncing file: noarch/ceph-deploy-1.3.1-0.noarch.rpm
[node2][INFO  ] syncing file: noarch/ceph-deploy-1.3.2-0.noarch.rpm
[node2][INFO  ] syncing file: noarch/ceph-release-1-0.el6.noarch.rpm
[node2][INFO  ] syncing file: noarch/index.html
[node2][INFO  ] syncing file: noarch/index.html?C=D;O=A
[node2][INFO  ] syncing file: noarch/index.html?C=D;O=D
[node2][INFO  ] syncing file: noarch/index.html?C=M;O=A
...
[node2][DEBUG ]
[node2][DEBUG ] Installed:
[node2][DEBUG ]   ceph.x86_64 0:0.72.1-0.el6
[node2][DEBUG ]
[node2][DEBUG ] Complete!
[node2][INFO  ] Running command: sudo ceph --version
[node2][DEBUG ] ceph version 0.72.1 (4d923861868f6a15dcb33fef7f50f674997322de)

New in version 1.5.0.

REPO FILE ONLY

The install command has a flag that offers flexibility for installing “repo files” only, avoiding installation of ceph and its dependencies.

These “repo files” are the configuration files for package managers (“yum” or “apt” for example) that point to the right repository information so that certain packages become available.

For APT these files would be list files and for YUM they would be repo files. Regardless of the package manager, ceph-deploy is able to install this file correctly so that the Ceph packages become available. This is useful in a situation where a massive upgrade is needed and ceph-deploy would be too slow to install sequentially on every host.

Repositories are specified in the cephdeploy.conf (or $HOME/.cephdeploy.conf) file. If a specific repository section is needed, it can be specified with the --release flag:

ceph-deploy install --repo --release firefly {HOSTS}

The above command would install the firefly repo file on every host specified.

If a repository section exists with the default = True flag, there is no need to specify anything else and the repo file can be installed simply by passing in the hosts:

ceph-deploy install --repo {HOSTS}

New in version 1.5.10.

MON

The mon subcommand provides an interface to interact with a cluster’s monitors. The tool makes a few assumptions that are needed to implement the most common scenarios. Monitors are usually very particular about what they need to work correctly.

Note

Before version 1.5.33, ceph-deploy relied upon ceph-create-keys. Using ceph-create-keys had the side effect of deploying all bootstrap keys on the mon nodes, thereby making all mon nodes admin nodes. This behavior can be recreated by running the admin command on all mon nodes; see the admin section.

CREATE-INITIAL

This deploys the monitors defined in mon initial members, waits until they form quorum, and then gathers the keys, reporting monitor status along the way. If the monitors don’t form quorum the command will eventually time out.

This is the preferred way of initially deploying monitors, since it combines several of the needed steps while checking for possible issues along the way.

ceph-deploy mon create-initial

CREATE

Deploy monitors by specifying them directly:

ceph-deploy mon create node1 node2 node3

If no hosts are passed, it will default to using the mon initial members defined in the configuration.

Please note that if this is an initial monitor deployment, the preferred way is to use create-initial.

ADD

Add a monitor to an existing cluster:

ceph-deploy mon add node1

Since monitor hosts can have different network interfaces, this command allows you to specify the interface IP in a few different ways.

--address: this will explicitly override any configured address for that host. Usage:

ceph-deploy mon add node1 --address 192.168.1.10

ceph.conf: used if a section for the node being added exists and defines a mon addr key. For example:

[mon.node1]
mon addr = 192.168.1.10

resolving/dns: if the monitor address is not defined in the configuration file nor overridden on the command line, it will fall back to resolving the address of the provided host.

Warning

If the monitor host has multiple addresses you should specify the address directly to ensure the right IP is used. Please note, only one node can be added at a time.

New in version 1.4.0.

DESTROY

Completely remove monitors on a remote host. Requires hostname(s) as arguments:

ceph-deploy mon destroy node1 node2 node3

--KEYRINGS

Both create and create-initial subcommands can be used with the --keyrings flag that accepts a path to search for keyring files.

When this flag is used, ceph-deploy will look in the given path for files ending in .keyring, concatenate them in memory, and seed them to the monitor being created on the remote node.

This is useful when several different keyring files are needed at initial setup; normally, ceph-deploy will only use the $cluster.mon.keyring file for initial seeding.

To keep things in order, create a directory and use it to store all the keyring files that are needed. This is how the commands would look for a directory called keyrings:

ceph-deploy mon --keyrings keyrings create-initial

Or for the create sub-command:

ceph-deploy mon --keyrings keyrings create {nodes}

RGW

The rgw subcommand provides an interface to interact with a cluster’s RADOS Gateway instances.

CREATE

Deploy RGW instances by specifying them directly:

ceph-deploy rgw create node1 node2 node3

This will create an instance of RGW on the given node(s) and start the corresponding service. The daemon will listen on the default port of 7480.

The RGW instances will default to a name corresponding to the hostname where they run, for example rgw.node1.

If a custom name is desired for the RGW daemon, it can be specified like:

ceph-deploy rgw create node1:foo

Custom names are automatically prefixed with “rgw.”, so the resulting daemon name would be “rgw.foo”.

Note

If an error is presented about the bootstrap-rgw keyring not being found, that is because the bootstrap-rgw keyring is only auto-created on new clusters starting with the Hammer release.

New in version 1.5.23.

Note

Removing RGW instances is not yet supported.

Note

Changing the port on which RGW will listen at deployment time is not yet supported.

MDS

The mds subcommand provides an interface to interact with a cluster’s CephFS Metadata servers.

CREATE

Deploy MDS instances by specifying them directly:

ceph-deploy mds create node1 node2 node3

This will create an MDS on the given node(s) and start the corresponding service.

The MDS instances will default to a name corresponding to the hostname where they run, for example mds.node1.

Note

Removing MDS instances is not yet supported.

CEPH DEPLOY CONFIGURATION

Starting with version 1.4, ceph-deploy uses a configuration file that can be one of:

  • cephdeploy.conf (in the current directory)
  • $HOME/.cephdeploy.conf (hidden in the user’s home directory)

This configuration file allows for setting ceph-deploy behavior that would be difficult or cumbersome to set on the command line.

The file follows the INI style of configuration, which means it consists of sections (in brackets) that may contain any number of key/value pairs.

If a configuration file is not found in the current working directory or in the user’s home directory, ceph-deploy will create one in the home directory.

This is how a default configuration file would look:

#
# ceph-deploy configuration file
#
 
[ceph-deploy-global]
# Overrides for some of ceph-deploy's global flags, like verbosity or cluster
# name
 
[ceph-deploy-install]
# Overrides for some of ceph-deploy's install flags, like version of ceph to
# install
 
 
#
# Repositories section
#
 
# yum repos:
# [myrepo]
# baseurl = https://user:pass@example.org/rhel6
# gpgurl = https://example.org/keys/release.asc
# default = True
# extra-repos = cephrepo  # will install the cephrepo file too
#
# [cephrepo]
# name=ceph repo noarch packages
# baseurl=http://ceph.com/rpm-emperor/el6/noarch
# enabled=1
# gpgcheck=1
# type=rpm-md
# gpgkey=https://download.ceph.com/keys/release.asc
 
# apt repos:
# [myrepo]
# baseurl = https://user:pass@example.org/
# gpgurl = https://example.org/keys/release.asc
# default = True
# extra-repos = cephrepo  # will install the cephrepo file too
#
# [cephrepo]
# baseurl=http://ceph.com/rpm-emperor/el6/noarch
# gpgkey=https://download.ceph.com/keys/release.asc

SECTIONS

To work with ceph-deploy configurations, it is important to note that all sections that relate to ceph-deploy’s flags and state are prefixed with ceph-deploy- followed by the subcommand or by global if it is something that belongs to the global flags.

Any other section that is not prefixed with ceph-deploy- is considered a repository.

Repositories can be very complex to describe and most of the time (especially for yum repositories) they can be very verbose too.

SETTING DEFAULT FLAGS OR VALUES

Because the configuration loading allows specifying the same flags as on the CLI, it is possible to set defaults. For example, suppose a user always wants to install Ceph the following way (which doesn’t create/modify remote repo files):

ceph-deploy install --no-adjust-repos {nodes}

This can be made the default behavior by setting it in the right section of the configuration file, which should look like this:

[ceph-deploy-install]
adjust_repos = False

The default for adjust_repos is True, but because we change it to False here, the CLI now has this behavior without the need to pass any flag.
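
The same pattern should apply to global flags. As a hedged sketch only, the key below assumes the same flag-to-key mapping shown above (flag names with dashes becoming keys with underscores):

[ceph-deploy-global]
# assumed equivalent of passing --cluster=us-west on every invocation;
# verify the key name against your ceph-deploy version
cluster = us-west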

REPOSITORY SECTIONS

Keys depend on the type of package manager that will use them. Certain keys for yum are required (like baseurl) and some others like gpgcheck are optional.

For both yum and apt these are the required keys in a repository section:

  • baseurl
  • gpgkey

If a required key is not present, ceph-deploy will abort the installation process with an error identifying the section and key that was missing.

In yum the repository name is taken from the section, so if the section is [foo], then the name of the repository will be foo repo and the file written to /etc/yum.repos.d/ will be foo.repo.

For apt, the same happens except the directory location changes to: /etc/apt/sources.list.d/ and the file becomes foo.list.
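
As an illustrative sketch (the URLs below are placeholders), a cephdeploy.conf section like:

[foo]
baseurl = http://example.org/rpm/el7
gpgkey = https://example.org/keys/release.asc

would be written to /etc/yum.repos.d/foo.repo, filled in with the defaults described in the next section:

[foo]
name=foo repo
baseurl=http://example.org/rpm/el7
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://example.org/keys/release.asc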

OPTIONAL VALUES FOR YUM

name: A descriptive name for the repository. If not provided, {repo section} repo is used.

enabled: Defaults to 1

gpgcheck: Defaults to 1

type: Defaults to rpm-md

DEFAULT REPOSITORY

For installations where a default repository is needed, a key can be added to that section to indicate it is the default one:

[myrepo]
default = true

When a default repository is detected, it is mentioned in the log output, and ceph will be installed from that one repository at the end.

EXTRA REPOSITORIES

If other repositories need to be installed aside from the main one, an extra-repos key should be added with a comma-separated list of the section names of the other repositories (just like the example configuration file demonstrates):

[myrepo]
baseurl = https://user:pass@example.org/rhel6
gpgurl = https://example.org/keys/release.asc
default = True
extra-repos = cephrepo  # will install the cephrepo file too
 
[cephrepo]
name=ceph repo noarch packages
baseurl=http://ceph.com/rpm-emperor/el6/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

In this case, the repository called myrepo defines the extra-repos key with just one extra repository: cephrepo.

That extra repository must exist as a section in the configuration file. After the main repository is added, all the extra ones defined will follow. Installation of Ceph will only happen from the main repository.

PKG

Provides a simple interface to install or remove packages on a remote host (or a number of remote hosts).

Packages to install or remove must be comma separated when more than one package is passed in the argument.

Note

This feature only supports hosts running the same distribution. You cannot install a given package on different distributions at the same time.

--INSTALL

This flag will install the package (or packages) passed in using the distribution package manager in a non-interactive way. Package managers that tend to ask for confirmation will not prompt.

An example call to install a few packages on 2 hosts (with hostnames like node1 and node2) would look like:

ceph-deploy pkg --install vim,zsh node1 node2
[ceph_deploy.cli][INFO  ] Invoked (1.3.3): /bin/ceph-deploy pkg --install vim,zsh node1 node2
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.pkg][INFO  ] Distro info: Ubuntu 12.04 precise
[node1][INFO  ] installing packages on node1
[node1][INFO  ] Running command: sudo env DEBIAN_FRONTEND=noninteractive apt-get -q install --assume-yes vim zsh
...

--REMOVE

This flag will remove the package (or packages) passed in using the distribution package manager in a non-interactive way. Package managers that tend to ask for confirmation will not prompt.

An example call to remove a few packages on 2 hosts (with hostnames like node1 and node2) would look like:

ceph-deploy pkg --remove vim,zsh node1 node2
[ceph_deploy.cli][INFO  ] Invoked (1.3.3): /bin/ceph-deploy pkg --remove vim,zsh node1 node2
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.pkg][INFO  ] Distro info: Ubuntu 12.04 precise
[node1][INFO  ] removing packages from node1
[node1][INFO  ] Running command: sudo apt-get -q remove -f -y --force-yes -- vim zsh
...

REPO

Provides a simple interface for installing or removing new Apt or RPM repo files.

Apt repo files are added in /etc/apt/sources.list.d, while RPM repo files are added in /etc/yum.repos.d.

INSTALLING REPOS

Repos can be defined through CLI arguments, or they can be defined in cephdeploy.conf and referenced by name.

The general format for adding a repo is:

ceph-deploy repo --repo-url <repo_url> --gpg-url <optional URL to GPG key> <repo-name> <host> [host [host ...]]

As an example of adding the Ceph rpm-hammer repo for EL7:

ceph-deploy repo --repo-url http://ceph.com/rpm-hammer/el7/x86_64/ --gpg-url 'https://download.ceph.com/keys/release.asc' ceph HOST1

In this example, the repo-name is ceph, and the file /etc/yum.repos.d/ceph.repo will be created. Because --gpg-url was passed, the repo will have gpgcheck=1 and will reference the given GPG key.

For APT, the equivalent example would be:

ceph-deploy repo --repo-url http://ceph.com/debian-hammer --gpg-url 'https://download.ceph.com/keys/release.asc' ceph HOST1

If a repo was defined in cephdeploy.conf, like the following:

[ceph-mon]
name=Ceph-MON
baseurl=https://cephmirror.com/hammer/el7/x86_64
gpgkey=https://cephmirror.com/release.asc
gpgcheck=1
proxy=_none_

This could be installed with this command:

ceph-deploy repo ceph-mon HOST1

ceph-deploy repo will always check to see if a matching repo name exists in cephdeploy.conf first.

It is possible that repos may be password protected, with a URL structured like so:

https://<user>:<password>@host.com/...

In this case, Apt repositories will be created with mode 0600 to make sure the password is not world-readable. You can also use the CEPH_DEPLOY_REPO_URL and CEPH_DEPLOY_GPG_URL environment variables in lieu of --repo-url and --gpg-url to avoid placing sensitive credentials on the command line (and thus visible in the process table).
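
For example (the credentials and URLs below are placeholders):

CEPH_DEPLOY_REPO_URL=https://user:secret@mirror.example.com/rpm-hammer/el7 \
CEPH_DEPLOY_GPG_URL=https://mirror.example.com/keys/release.asc \
ceph-deploy repo ceph HOST1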

Note

The writing of a repo file with mode 0600 when a password is present is currently only done for Apt repos.

REMOVING

Repos are simply removed by name. The general format for removing a repo is:

ceph-deploy repo --remove <repo-name> <host> [host [host...]]

To remove a repo at /etc/yum.repos.d/ceph.repo, do:

ceph-deploy repo --remove ceph HOST1

New in version 1.5.27.

ADMIN

The admin subcommand provides an interface for setting up admin nodes for the cluster.

EXAMPLE

To make a node an admin node, run:

ceph-deploy admin ADMIN [ADMIN..]

This places the cluster configuration file and the admin keyring on the remote nodes.

ADMIN NODE DEFINITION

An admin node is one that has both the cluster configuration file and the admin keyring. Both of these files are stored in the directory /etc/ceph and their prefix is that of the cluster name.

The default ceph cluster name is “ceph”. So, for a cluster with the default name, the admin keyring is named /etc/ceph/ceph.client.admin.keyring while the cluster configuration file is named /etc/ceph/ceph.conf.

GATHERKEYS

The gatherkeys subcommand provides an interface to retrieve a cluster’s cephx bootstrap keys.

KEYRINGS

The gatherkeys subcommand retrieves the following keyrings:

CEPH.MON.KEYRING

This keyring is used by all mon nodes to communicate with other mon nodes.

CEPH.CLIENT.ADMIN.KEYRING

This keyring is used by ceph client commands by default to administer the ceph cluster.

CEPH.BOOTSTRAP-OSD.KEYRING

This keyring is used to generate cephx keyrings for OSD instances.

CEPH.BOOTSTRAP-MDS.KEYRING

This keyring is used to generate cephx keyrings for MDS instances.

CEPH.BOOTSTRAP-RGW.KEYRING

This keyring is used to generate cephx keyrings for RGW instances.

EXAMPLE

The gatherkeys subcommand contacts the mon and creates or retrieves existing keyrings from the mon internal store. To run:

ceph-deploy gatherkeys MON [MON..]

You can optionally add as many mon nodes to the command line as desired. The gatherkeys subcommand will succeed on the first mon that responds successfully with all the keyrings.

BACKING UP OLD KEYRINGS

If old keyrings exist in the current working directory that do not match the retrieved keyrings, the old keyrings will be renamed with a timestamp extension so you will not lose valuable keyrings.

Note

Before version 1.5.33, ceph-deploy relied upon ceph-create-keys and did not back up existing keys. Using ceph-create-keys had the side effect of deploying all bootstrap keys on the mon nodes, thereby making all mon nodes admin nodes.
