Installing Ceph with ceph-deploy

INSTALLATION (CEPH-DEPLOY)

 

STEP 1: PREFLIGHT

A Ceph Client and a Ceph Node may require some basic configuration work prior to deploying a Ceph Storage Cluster. You can also avail yourself of help by getting involved in the Ceph community.

STEP 2: STORAGE CLUSTER

Once you have completed your preflight checklist, you should be able to begin deploying a Ceph Storage Cluster.

STEP 3: CEPH CLIENT(S)

Most Ceph users don’t store objects directly in the Ceph Storage Cluster. They typically use at least one of Ceph Block Devices, the Ceph Filesystem, and Ceph Object Storage.

PREFLIGHT CHECKLIST

The ceph-deploy tool operates out of a directory on an admin node. Any host with network connectivity, a modern Python environment, and SSH (such as Linux) should work.

In the descriptions below, Node refers to a single machine.

 

CEPH-DEPLOY SETUP

Add Ceph repositories to the ceph-deploy admin node. Then, install ceph-deploy.

DEBIAN/UBUNTU

For Debian and Ubuntu distributions, perform the following steps:

  1. Add the release key:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

  2. Add the Ceph packages to your repository. Use the command below and replace {ceph-stable-release} with a stable Ceph release (e.g., luminous). For example:

echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

  3. Update your repository and install ceph-deploy:

sudo apt update
sudo apt install ceph-deploy

Note

 

You can also use the EU mirror eu.ceph.com for downloading your packages by replacing https://ceph.com/ with http://eu.ceph.com/

RHEL/CENTOS

For CentOS 7, perform the following steps:

  1. On Red Hat Enterprise Linux 7, register the target machine with subscription-manager, verify your subscriptions, and enable the “Extras” repository for package dependencies. For example:

sudo subscription-manager repos --enable=rhel-7-server-extras-rpms

  2. Install and enable the Extra Packages for Enterprise Linux (EPEL) repository:

sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Please see the EPEL wiki page for more information.

  3. Add the Ceph repository to your yum configuration file at /etc/yum.repos.d/ceph.repo with the following command. Replace {ceph-stable-release} with a stable Ceph release (e.g., luminous). For example:

cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM

  4. Update your repository and install ceph-deploy:

sudo yum update
sudo yum install ceph-deploy

Note

 

You can also use the EU mirror eu.ceph.com for downloading your packages by replacing https://ceph.com/ with http://eu.ceph.com/

OPENSUSE

The Ceph project does not currently publish release RPMs for openSUSE, but a stable version of Ceph is included in the default update repository, so installing it is just a matter of:

sudo zypper install ceph
sudo zypper install ceph-deploy

If the distro version is out-of-date, open a bug at https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of the following repositories:

  1. Hammer:

https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph

  2. Jewel:

https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph

CEPH NODE SETUP

The admin node must have password-less SSH access to Ceph nodes. When ceph-deploy logs in to a Ceph node as a user, that particular user must have passwordless sudo privileges.

INSTALL NTP

We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to prevent issues arising from clock drift. See Clock for details.

On CentOS / RHEL, execute:

sudo yum install ntp ntpdate ntp-doc

On Debian / Ubuntu, execute:

sudo apt install ntp

Ensure that you enable the NTP service. Ensure that each Ceph Node uses the same NTP time server. See NTP for details.
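
Enabling the service so that it starts at boot is distribution-specific; a minimal sketch for a systemd-based node follows. The unit name is an assumption here: it is commonly ntpd on CentOS/RHEL and ntp on Debian/Ubuntu, and chrony is a frequent alternative.

sudo systemctl enable ntpd
sudo systemctl start ntpd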

INSTALL SSH SERVER

For ALL Ceph Nodes perform the following steps:

  1. Install an SSH server (if necessary) on each Ceph Node:

sudo apt install openssh-server

or:

sudo yum install openssh-server

  2. Ensure the SSH server is running on ALL Ceph Nodes (a quick check is sketched below).
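
A quick way to confirm the daemon is enabled and running on a systemd-based node is sketched below; the unit name is an assumption (typically sshd on CentOS/RHEL and ssh on Debian/Ubuntu):

sudo systemctl enable sshd
sudo systemctl status sshd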

CREATE A CEPH DEPLOY USER

The ceph-deploy utility must login to a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.

Recent versions of ceph-deploy support a --username option so you can specify any user that has password-less sudo (including root, although this is NOT recommended). To use ceph-deploy --username {username}, the user you specify must have password-less SSH access to the Ceph node, as ceph-deploy will not prompt you for a password.

We recommend creating a specific user for ceph-deploy on ALL Ceph nodes in the cluster. Please do NOT use “ceph” as the user name. A uniform user name across the cluster may improve ease of use (not required), but you should avoid obvious user names, because hackers typically use them with brute force hacks (e.g., root, admin, {productname}). The following procedure, substituting {username} for the user name you define, describes how to create a user with passwordless sudo.

Note

 

Starting with the Infernalis release the “ceph” user name is reserved for the Ceph daemons. If the “ceph” user already exists on the Ceph nodes, removing the user must be done before attempting an upgrade.

  1. Create a new user on each Ceph Node.

ssh user@ceph-server
sudo useradd -d /home/{username} -m {username}
sudo passwd {username}

  2. For the new user you added to each Ceph node, ensure that the user has sudo privileges.

echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}

ENABLE PASSWORD-LESS SSH

Since ceph-deploy will not prompt for a password, you must generate SSH keys on the admin node and distribute the public key to each Ceph node. ceph-deploy will attempt to generate the SSH keys for initial monitors.

  1. Generate the SSH keys, but do not use sudo or the root user. Leave the passphrase empty:

ssh-keygen

Generating public/private key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.

  2. Copy the key to each Ceph Node, replacing {username} with the user name you created with Create a Ceph Deploy User.

ssh-copy-id {username}@node1
ssh-copy-id {username}@node2
ssh-copy-id {username}@node3

  3. (Recommended) Modify the ~/.ssh/config file of your ceph-deploy admin node so that ceph-deploy can log in to Ceph nodes as the user you created without requiring you to specify --username {username} each time you execute ceph-deploy. This has the added benefit of streamlining ssh and scp usage. Replace {username} with the user name you created:

Host node1
   Hostname node1
   User {username}
Host node2
   Hostname node2
   User {username}
Host node3
   Hostname node3
   User {username}

ENABLE NETWORKING ON BOOTUP

Ceph OSDs peer with each other and report to Ceph Monitors over the network. If networking is off by default, the Ceph cluster cannot come online during bootup until you enable networking.

The default configuration on some distributions (e.g., CentOS) has the networking interface(s) off by default. Ensure that, during boot up, your network interface(s) turn(s) on so that your Ceph daemons can communicate over the network. For example, on Red Hat and CentOS, navigate to /etc/sysconfig/network-scripts and ensure that the ifcfg-{iface} file has ONBOOT set to yes.
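
For illustration, a minimal ifcfg file with networking enabled at boot might look like the sketch below; the interface name eth0 and the DHCP addressing are assumptions, so adapt them to your environment:

# /etc/sysconfig/network-scripts/ifcfg-eth0 (sketch)
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes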

ENSURE CONNECTIVITY

Ensure connectivity using ping with short hostnames (hostname -s). Address hostname resolution issues as necessary.

Note

 

Hostnames should resolve to a network IP address, not to the loopback IP address (e.g., hostnames should resolve to an IP address other than 127.0.0.1). If you use your admin node as a Ceph node, you should also ensure that it resolves to its hostname and IP address (i.e., not its loopback IP address).
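
If you rely on /etc/hosts rather than DNS, entries along the following lines keep short hostnames resolving to network IP addresses instead of 127.0.0.1 (the addresses shown are placeholders); re-test afterwards with ping node2, ping node3, and so on:

192.168.0.11   node1
192.168.0.12   node2
192.168.0.13   node3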

OPEN REQUIRED PORTS

Ceph Monitors communicate using port 6789 by default. Ceph OSDs communicate in a port range of 6800:7300 by default. See the Network Configuration Reference for details. Ceph OSDs can use multiple network connections to communicate with clients, monitors, other OSDs for replication, and other OSDs for heartbeats.

On some distributions (e.g., RHEL), the default firewall configuration is fairly strict. You may need to adjust your firewall settings to allow inbound requests so that clients in your network can communicate with daemons on your Ceph nodes.

For firewalld on RHEL 7, add the ceph-mon service for Ceph Monitor nodes and the ceph service for Ceph OSDs and MDSs to the public zone and ensure that you make the settings permanent so that they are enabled on reboot.

For example, on monitors:

sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent

and on OSDs and MDSs:

sudo firewall-cmd --zone=public --add-service=ceph --permanent

Once you have finished configuring firewalld with the --permanent flag, you can make the changes live immediately without rebooting:

sudo firewall-cmd --reload

For iptables, add port 6789 for Ceph Monitors and ports 6800:7300 for Ceph OSDs. For example:

sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
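
A similar rule, using the same placeholders, is a sketch for opening the OSD/MDS port range mentioned above:

sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6800:7300 -j ACCEPT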

Once you have finished configuring iptables, ensure that you make the changes persistent on each node so that they will be in effect when your nodes reboot. For example:

/sbin/service iptables save

TTY

On CentOS and RHEL, you may receive an error while trying to execute ceph-deploy commands. If requiretty is set by default on your Ceph nodes, disable it by executing sudo visudo and locate the Defaults requiretty setting. Change it to Defaults:ceph !requiretty or comment it out to ensure that ceph-deploy can connect using the user you created with Create a Ceph Deploy User.

Note

 

If editing /etc/sudoers, ensure that you use sudo visudo rather than a text editor.

SELINUX

On CentOS and RHEL, SELinux is set to Enforcing by default. To streamline your installation, we recommend setting SELinux to Permissive or disabling it entirely and ensuring that your installation and cluster are working properly before hardening your configuration. To set SELinux to Permissive, execute the following:

sudo setenforce 0

To configure SELinux persistently (recommended if SELinux is an issue), modify the configuration file at /etc/selinux/config.
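
Inside that file, the relevant setting is the SELINUX line; setting it as below takes effect on the next boot:

SELINUX=permissive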

PRIORITIES/PREFERENCES

Ensure that your package manager has priority/preferences packages installed and enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to enable optional repositories.

sudo yum install yum-plugin-priorities

For example, on RHEL 7 server, execute the following to install yum-plugin-priorities and enable the rhel-7-server-optional-rpms repository:

sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms

SUMMARY

This completes the Quick Start Preflight. Proceed to the Storage Cluster Quick Start.

STORAGE CLUSTER QUICK START

If you haven’t completed your Preflight Checklist, do that first. This Quick Start sets up a Ceph Storage Cluster using ceph-deploy on your admin node. Create a three Ceph Node cluster so you can explore Ceph functionality.

 

As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and three Ceph OSD Daemons. Once the cluster reaches an active + clean state, expand it by adding a fourth Ceph OSD Daemon, a Metadata Server and two more Ceph Monitors. For best results, create a directory on your admin node for maintaining the configuration files and keys that ceph-deploy generates for your cluster.

mkdir my-cluster
cd my-cluster

The ceph-deploy utility will output files to the current directory. Ensure you are in this directory when executing ceph-deploy.

Important

 

Do not call ceph-deploy with sudo or run it as root if you are logged in as a different user, because it will not issue sudo commands needed on the remote host.

STARTING OVER

If at any point you run into trouble and you want to start over, execute the following to purge the Ceph packages, and erase all its data and configuration:

ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*

If you execute purge, you must re-install Ceph. The last rm command removes any files that were written out by ceph-deploy locally during a previous installation.

CREATE A CLUSTER

On your admin node from the directory you created for holding your configuration details, perform the following steps using ceph-deploy.

  1. Create the cluster.

ceph-deploy new {initial-monitor-node(s)}

Specify node(s) as hostname, fqdn or hostname:fqdn. For example:

ceph-deploy new node1

Check the output of ceph-deploy with ls and cat in the current directory. You should see a Ceph configuration file (ceph.conf), a monitor secret keyring (ceph.mon.keyring), and a log file for the new cluster. See ceph-deploy new -h for additional details.

  2. If you have more than one network interface, add the public network setting under the [global] section of your Ceph configuration file. See the Network Configuration Reference for details.

public network = {ip-address}/{bits}

For example:

public network = 10.1.2.0/24

to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0) network.

  3. If you are deploying in an IPv6 environment, add the following to ceph.conf in the local directory:

echo ms bind ipv6 = true >> ceph.conf

  4. Install Ceph packages:

ceph-deploy install {ceph-node} [...]

For example:

ceph-deploy install node1 node2 node3

The ceph-deploy utility will install Ceph on each node.

  5. Deploy the initial monitor(s) and gather the keys:

ceph-deploy mon create-initial

Once you complete the process, your local directory should have the following keyrings:

    • ceph.client.admin.keyring
    • ceph.bootstrap-mgr.keyring
    • ceph.bootstrap-osd.keyring
    • ceph.bootstrap-mds.keyring
    • ceph.bootstrap-rgw.keyring
    • ceph.bootstrap-rbd.keyring

Note

 

If this process fails with a message similar to “Unable to find /etc/ceph/ceph.client.admin.keyring”, please ensure that the IP listed for the monitor node in ceph.conf is the Public IP, not the Private IP.

  6. Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

ceph-deploy admin {ceph-node(s)}

For example:

ceph-deploy admin node1 node2 node3

  7. Deploy a manager daemon (required only for luminous+ builds, i.e., >= 12.x):

ceph-deploy mgr create node1

  8. Add three OSDs. For the purposes of these instructions, we assume you have an unused disk in each node called /dev/vdb. Be sure that the device is not currently in use and does not contain any important data.

ceph-deploy osd create --data {device} {ceph-node}

For example:

ceph-deploy osd create --data /dev/vdb node1
ceph-deploy osd create --data /dev/vdb node2
ceph-deploy osd create --data /dev/vdb node3

  9. Check your cluster’s health.

ssh node1 sudo ceph health

Your cluster should report HEALTH_OK. You can view a more complete cluster status with:

ssh node1 sudo ceph -s

EXPANDING YOUR CLUSTER

Once you have a basic cluster up and running, the next step is to expand the cluster. Add a Ceph Metadata Server to node1. Then add a Ceph Monitor and Ceph Manager to node2 and node3 to improve reliability and availability.

 

ADD A METADATA SERVER

To use CephFS, you need at least one metadata server. Execute the following to create a metadata server:

ceph-deploy mds create {ceph-node}

For example:

ceph-deploy mds create node1

ADDING MONITORS

A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph Manager to run. For high availability, Ceph Storage Clusters typically run multiple Ceph Monitors so that the failure of a single Ceph Monitor will not bring down the Ceph Storage Cluster. Ceph uses the Paxos algorithm, which requires a majority of monitors (i.e., greater than N/2, where N is the number of monitors) to form a quorum; with three monitors, for example, at least two must be reachable. Odd numbers of monitors tend to be better, although this is not required.

Add two Ceph Monitors to your cluster:

ceph-deploy mon add {ceph-nodes}

For example:

ceph-deploy mon add node2 node3

Once you have added your new Ceph Monitors, Ceph will begin synchronizing the monitors and form a quorum. You can check the quorum status by executing the following:

ceph quorum_status --format json-pretty

Tip

 

When you run Ceph with multiple monitors, you SHOULD install and configure NTP on each monitor host. Ensure that the monitors are NTP peers.

ADDING MANAGERS

The Ceph Manager daemons operate in an active/standby pattern. Deploying additional manager daemons ensures that if one daemon or host fails, another one can take over without interrupting service.

To deploy additional manager daemons:

ceph-deploy mgr create node2 node3

You should see the standby managers in the output from:

ssh node1 sudo ceph -s

ADD AN RGW INSTANCE

To use the Ceph Object Gateway component of Ceph, you must deploy an instance of RGW. Execute the following to create a new instance of RGW:

ceph-deploy rgw create {gateway-node}

For example:

ceph-deploy rgw create node1

By default, the RGW instance will listen on port 7480. This can be changed by editing ceph.conf on the node running the RGW as follows:

[client]
rgw frontends = civetweb port=80

To use an IPv6 address, use:

[client]
rgw frontends = civetweb port=[::]:80

STORING/RETRIEVING OBJECT DATA

To store object data in the Ceph Storage Cluster, a Ceph client must:

  1. Set an object name
  2. Specify a pool

The Ceph Client retrieves the latest cluster map and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to a Ceph OSD Daemon dynamically. To find the object location, all you need is the object name and the pool name. For example:

ceph osd map {poolname} {object-name}

Exercise: Locate an Object

As an exercise, let's create an object. Specify an object name, a path to a test file containing some object data, and a pool name using the rados put command on the command line. For example:

echo {Test-data} > testfile.txt
ceph osd pool create mytest 8
rados put {object-name} {file-path} --pool=mytest
rados put test-object-1 testfile.txt --pool=mytest

To verify that the Ceph Storage Cluster stored the object, execute the following:

rados -p mytest ls

Now, identify the object location:

ceph osd map {pool-name} {object-name}
ceph osd map mytest test-object-1

Ceph should output the object’s location. For example:

osdmap e537 pool 'mytest' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]

To remove the test object, simply delete it using the rados rm command.

For example:

rados rm test-object-1 --pool=mytest

To delete the mytest pool:

ceph osd pool rm mytest

(For safety reasons you will need to supply additional arguments as prompted; deleting pools destroys data.)
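
On recent releases the full invocation looks roughly like the sketch below: the pool name is given twice as a confirmation, and the monitors may also need mon_allow_pool_delete = true in their configuration before the command is accepted:

ceph osd pool rm mytest mytest --yes-i-really-really-mean-it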

As the cluster evolves, the object location may change dynamically. One benefit of Ceph’s dynamic rebalancing is that Ceph relieves you from having to perform data migration or balancing manually.

BLOCK DEVICE QUICK START

To use this guide, you must have executed the procedures in the Storage Cluster Quick Start guide first. Ensure your Ceph Storage Cluster is in an active + clean state before working with the Ceph Block Device.

Note

 

The Ceph Block Device is also known as RBD or RADOS Block Device.

 

You may use a virtual machine for your ceph-client node, but do not execute the following procedures on the same physical node as your Ceph Storage Cluster nodes (unless you use a VM). See FAQ for details.

INSTALL CEPH

  1. Verify that you have an appropriate version of the Linux kernel. See OS Recommendations for details.

lsb_release -a
uname -r

  2. On the admin node, use ceph-deploy to install Ceph on your ceph-client node.

ceph-deploy install ceph-client

  3. On the admin node, use ceph-deploy to copy the Ceph configuration file and the ceph.client.admin.keyring to the ceph-client.

ceph-deploy admin ceph-client

The ceph-deploy utility copies the keyring to the /etc/ceph directory. Ensure that the keyring file has appropriate read permissions (e.g., sudo chmod +r /etc/ceph/ceph.client.admin.keyring).

CREATE A BLOCK DEVICE POOL

  1. On the admin node, use the ceph tool to create a pool (we recommend the name ‘rbd’); a sketch of the command follows this list.
  2. On the admin node, use the rbd tool to initialize the pool for use by RBD:

rbd pool init <pool-name>
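
The pool-creation command referenced in step 1 is not shown above; a minimal sketch, assuming the recommended pool name rbd and a small placement-group count suitable only for a test cluster, is:

ceph osd pool create rbd 8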

CONFIGURE A BLOCK DEVICE

  1. On the ceph-client node, create a block device image.

rbd create foo --size 4096 --image-feature layering [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]

  2. On the ceph-client node, map the image to a block device.

sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]

  3. Use the block device by creating a file system on the ceph-client node.

sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo

This may take a few moments.

  4. Mount the file system on the ceph-client node.

sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
cd /mnt/ceph-block-device

  5. Optionally configure the block device to be automatically mapped and mounted at boot (and unmounted/unmapped at shutdown) - see the rbdmap manpage and the sketch below.
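
A rough sketch of boot-time mapping with the rbdmap service follows, reusing the image and mount point from the steps above. The file locations, fstab handling, and unit name reflect the stock packaging and may differ on your distribution, so treat this as an outline and consult the rbdmap manpage:

/etc/ceph/rbdmap entry (image to map at boot):
rbd/foo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

/etc/fstab entry (noauto, so it is mounted only once the device is mapped):
/dev/rbd/rbd/foo  /mnt/ceph-block-device  ext4  defaults,noauto  0 0

sudo systemctl enable rbdmap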

See block devices for additional details.

CEPHFS QUICK START

To use the CephFS Quick Start guide, you must have executed the procedures in the Storage Cluster Quick Start guide first. Execute this quick start on the Admin Host.

PREREQUISITES

  1. Verify that you have an appropriate version of the Linux kernel. See OS Recommendations for details.

lsb_release -a
uname -r

  2. On the admin node, use ceph-deploy to install Ceph on your ceph-client node.

ceph-deploy install ceph-client

  3. Ensure that the Ceph Storage Cluster is running and in an active + clean state. Also, ensure that you have at least one Ceph Metadata Server running.

ceph -s [-m {monitor-ip-address}] [-k {path/to/ceph.client.admin.keyring}]

CREATE A FILESYSTEM

You have already created an MDS (Storage Cluster Quick Start) but it will not become active until you create some pools and a filesystem. See Create a Ceph filesystem.

ceph osd pool create cephfs_data <pg_num>
ceph osd pool create cephfs_metadata <pg_num>
ceph fs new <fs_name> cephfs_metadata cephfs_data

CREATE A SECRET FILE

The Ceph Storage Cluster runs with authentication turned on by default. You should have a file containing the secret key (i.e., not the keyring itself). To obtain the secret key for a particular user, perform the following procedure:

  1. Identify a key for a user within a keyring file. For example:

cat ceph.client.admin.keyring

  2. Copy the key of the user who will be using the mounted CephFS filesystem. It should look something like this:

[client.admin]
   key = AQCj2YpRiAe6CxAA7/ETt7Hcl9IyxyYciVs47w==

  3. Open a text editor.
  4. Paste the key into an empty file. It should look something like this:

AQCj2YpRiAe6CxAA7/ETt7Hcl9IyxyYciVs47w==

  5. Save the file with the user name as an attribute (e.g., admin.secret).
  6. Ensure the file permissions are appropriate for the user, but not visible to other users. (A one-command alternative is sketched after this list.)
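
Alternatively, if the admin keyring is available to the ceph CLI on the host where you run this, the key can be written out and locked down in one step; this is a sketch for the client.admin user:

ceph auth get-key client.admin > admin.secret
chmod 600 admin.secret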

KERNEL DRIVER

Mount CephFS as a kernel driver.

sudo mkdir /mnt/mycephfs
sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs

The Ceph Storage Cluster uses authentication by default. Specify a user name and the secretfile you created in the Create a Secret File section. For example:

sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=admin,secretfile=admin.secret

Note

 

Mount the CephFS filesystem on the admin node, not the server node. See FAQ for details.

FILESYSTEM IN USER SPACE (FUSE)

Mount CephFS as a Filesystem in User Space (FUSE).

sudo mkdir ~/mycephfs
sudo ceph-fuse -m {ip-address-of-monitor}:6789 ~/mycephfs

The Ceph Storage Cluster uses authentication by default. Specify a keyring if it is not in the default location (i.e., /etc/ceph):

sudo ceph-fuse -k ./ceph.client.admin.keyring -m 192.168.0.1:6789 ~/mycephfs

ADDITIONAL INFORMATION

See CephFS for additional information. CephFS is not quite as stable as the Ceph Block Device and Ceph Object Storage. See Troubleshooting if you encounter trouble.

CEPH OBJECT GATEWAY QUICK START

As of firefly (v0.80), Ceph Storage dramatically simplifies installing and configuring a Ceph Object Gateway. The Gateway daemon embeds Civetweb, so you do not have to install a web server or configure FastCGI. Additionally, ceph-deploy can install the gateway package, generate a key, configure a data directory and create a gateway instance for you.

Tip

 

Civetweb uses port 7480 by default. You must either open port 7480, or set the port to a preferred port (e.g., port 80) in your Ceph configuration file.

To start a Ceph Object Gateway, follow the steps below:

INSTALLING CEPH OBJECT GATEWAY

  1. Execute the pre-installation steps on your client-node. If you intend to use Civetweb’s default port 7480, you must open it using either firewall-cmd or iptables. See Preflight Checklist for more information.
  2. From the working directory of your administration server, install the Ceph Object Gateway package on the client-node node. For example:

ceph-deploy install --rgw <client-node> [<client-node> ...]

CREATING THE CEPH OBJECT GATEWAY INSTANCE

From the working directory of your administration server, create an instance of the Ceph Object Gateway on the client-node. For example:

ceph-deploy rgw create <client-node>

Once the gateway is running, you should be able to access it on port 7480. (e.g., http://client-node:7480).

CONFIGURING THE CEPH OBJECT GATEWAY INSTANCE

  1. To change the default port (e.g., to port 80), modify your Ceph configuration file. Add a section entitled [client.rgw.<client-node>], replacing <client-node> with the short node name of your Ceph client node (i.e., hostname -s). For example, if your node name is client-node, add a section like this after the [global] section:

[client.rgw.client-node]
rgw_frontends = "civetweb port=80"

Note

 

Ensure that you leave no whitespace between port=<port-number> in the rgw_frontends key/value pair.

Important

 

If you intend to use port 80, make sure that the Apache server is not running, otherwise it will conflict with Civetweb. We recommend removing Apache in this case.
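
A quick way to check on a systemd-based node is sketched below; the unit name is an assumption (commonly httpd on RHEL/CentOS and apache2 on Debian/Ubuntu):

sudo systemctl status httpd
sudo systemctl stop httpd
sudo systemctl disable httpd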

  2. To make the new port setting take effect, restart the Ceph Object Gateway. On Red Hat Enterprise Linux 7 and Fedora, run the following command:

sudo systemctl restart ceph-radosgw.service

On Red Hat Enterprise Linux 6 and Ubuntu, run the following command:

sudo service radosgw restart id=rgw.<short-hostname>

  3. Finally, check to ensure that the port you selected is open on the node’s firewall (e.g., port 80). If it is not open, add the port and reload the firewall configuration. For example:

sudo firewall-cmd --list-all
sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
sudo firewall-cmd --reload

See Preflight Checklist for more information on configuring firewall with firewall-cmd or iptables.

You should be able to make an unauthenticated request, and receive a response. For example, a request with no parameters like this:

http://<client-node>:80

Should result in a response like this:

<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Owner>
    <ID>anonymous</ID>
    <DisplayName></DisplayName>
  </Owner>
    <Buckets>
  </Buckets>
</ListAllMyBucketsResult>

See the Configuring Ceph Object Gateway guide for additional administration and API details.

 
