How to Install GlusterFS

Prerequisites

  • At least two machines, each with at least 1 GB of memory.
  • At least two disks per machine, one for the base OS and one to use as a Gluster “brick”.

Installing Gluster

Notice: The following steps need to be run on each node.

1. Download the packages (modify URL for your specific release and architecture).

 http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.5/RHEL/epel-6.5/x86_64/

2. Download the xfsprogs dependency package:

 ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/opensuse/repositories/home:/billcavalieri:/QEMU/RedHat_RHEL-6/x86_64/xfsprogs-3.1.1-4.1.x86_64.rpm

3. Install the packages

 # rpm -e openssl-1.0.0-27.el6_4.2.x86_64 --nodeps
 # rpm -ivh openssl-1.0.1e-16.el6_5.7.x86_64.rpm --nodeps
 # yum install xfsprogs-3.1.1-4.1.x86_64.rpm
 # yum install glusterfs-libs-3.5.2-1.el6.x86_64.rpm glusterfs-cli-3.5.2-1.el6.x86_64.rpm
 # yum install glusterfs-api-3.5.2-1.el6.x86_64.rpm glusterfs-3.5.2-1.el6.x86_64.rpm glusterfs-fuse-3.5.2-1.el6.x86_64.rpm glusterfs-server-3.5.2-1.el6.x86_64.rpm
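
To confirm that all of the Gluster packages installed cleanly, you can list them afterwards (a quick sanity check):

 # rpm -qa | grep glusterfs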

4. Start the glusterd service and set it to start at boot:

 # /etc/init.d/glusterd start
 # chkconfig glusterd on
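
You can also confirm that the daemon is actually running (output is illustrative):

 # /etc/init.d/glusterd status
 glusterd (pid  1234) is running...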

Pre-Configuring

Notice: the following steps need to be run on both nodes.

1. Check the current mode of SELinux.

 # getenforce

If the current mode is “enforcing”, change it to either “permissive” or “disabled”. Otherwise, skip this step.

 # setenforce  0

To make the SELinux change permanent, edit “/etc/selinux/config” and set “SELINUX=permissive” or “SELINUX=disabled”.
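
As one way to make that edit, the mode can be switched with a single sed command (a sketch that assumes the file currently contains "SELINUX=enforcing"):

 # sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config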

2. Make sure the servers can resolve each other's hostnames and can communicate with each other, as sketched below.
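
One possible approach (the IP addresses and hostnames below are placeholders, and the port ranges assume the GlusterFS 3.4/3.5 defaults; adjust both for your environment):

 # echo "192.168.1.11 node01" >> /etc/hosts
 # echo "192.168.1.12 node02" >> /etc/hosts
 # iptables -I INPUT -p tcp --dport 24007:24008 -j ACCEPT
 # iptables -I INPUT -p tcp --dport 49152:49160 -j ACCEPT
 # service iptables save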

3. Partition the disk, assuming that your brick device is /dev/vdb.

3.1. Run fdisk /dev/vdb and create a single partition:

 # fdisk /dev/vdb
 Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
 Building a new DOS disklabel with disk identifier 0xa0399863.
 Changes will remain in memory only, until you decide to write them.
 After that, of course, the previous content won't be recoverable.
 Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
 WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
 switch off the mode (command 'c') and change display units to sectors (command 'u').
 Command (m for help):

3.2. Press n, then press Enter. The following information is displayed:

 Command action
    e   extended
    p   primary partition (1-4)

3.3. Press p and press Enter. The following information is displayed:

  Partition number (1-4):

3.4. Press 1 and press Enter. The following information is displayed:

 First cylinder (1-2080, default 1):

3.5. Press Enter and continue. The following information is displayed:

 Using default value 1
 Last cylinder, +cylinders or +size{K,M,G} (1-2080, default 2080):

3.6. Press Enter. The following information is displayed:

 Using default value 2080
 Command (m for help):

3.7. Press p to check and then press w to save.

 Command (m for help): p
 Disk /dev/vdb: 1073 MB, 1073741824 bytes
 16 heads, 63 sectors/track, 2080 cylinders
 Units = cylinders of 1008 * 512 = 516096 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disk identifier: 0xa0399863
 Device Boot      Start         End      Blocks   Id  System
 /dev/vdb1               1        2080     1048288+  83  Linux
 Command (m for help): w
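
If you prefer to script this step instead of answering the interactive fdisk prompts, the same single partition can be created non-interactively with parted (a sketch; it assumes /dev/vdb is the brick disk and is safe to relabel):

 # parted -s /dev/vdb mklabel msdos
 # parted -s /dev/vdb mkpart primary xfs 1MiB 100%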

4. Format the partition:

 # mkfs.xfs -f -i size=512 -n size=8192 -d su=64k,sw=4 /dev/vdb1

5. Mount the partition:

 # mkdir -p /export/disk1 && mount -t xfs  -o  noatime,inode64 /dev/vdb1  /export/disk1 && mkdir -p /export/disk1/brick

6. Add an entry to /etc/fstab:

 # echo "/dev/vdb1 /export/disk1 xfs defaults,noatime,inode64 0 0"  >> /etc/fstab
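
To check that the new entry is valid, you can let mount process the whole fstab and then confirm the filesystem is mounted (assuming no other entry in /etc/fstab conflicts):

 # mount -a
 # df -h /export/disk1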

Configuring Gluster Server

Choose a server to be your "primary" node. The following steps run only on the "primary" node.

1. Configure the trusted pool. Remember that "trusted pool" is the term Gluster uses for a cluster of nodes.

 # gluster peer probe <hostname_of_node2>
   peer probe: success.
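
You can confirm that the second node has joined the pool (output is illustrative):

 # gluster peer status
 Number of Peers: 1

 Hostname: <hostname_of_node2>
 Uuid: <uuid_of_node2>
 State: Peer in Cluster (Connected)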

2. Set up a replicated volume:

 # gluster volume create gv0 replica 2 <hostname_of_node1>:/export/disk1/brick <hostname_of_node2>:/export/disk1/brick

3. Check the volume you just created:

 # gluster volume info

And you should see results similar to the following:

 Volume Name: gv0
 Type: Replicate
 Volume ID: 8bc3e96b-a1b6-457d-8f7a-a91d1d4dc019
 Status: Created
 Number of Bricks: 1 x 2 = 2
 Transport-type: tcp
 Bricks:
 Brick1: node01:/export/disk1/brick
 Brick2: node02:/export/disk1/brick

4. Start the volume

 # gluster volume start gv0
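
After starting, the Status field reported by "gluster volume info" changes from "Created" to "Started". You can also check that the brick processes are up on both nodes:

 # gluster volume status gv0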

5. Set volume options (optional):

 # gluster volume set gv0 performance.cache-size 16GB
 # gluster volume set gv0 performance.io-thread-count 64
 # gluster volume set gv0 performance.cache-refresh-timeout 60
 # gluster volume set gv0 performance.client-io-threads on
 # gluster volume set gv0 eager-lock on

Configuring Gluster Client

Notice: the following steps need to be run on both nodes.

1. Load the FUSE loadable kernel module (LKM) into the Linux kernel:

 # modprobe fuse

2. Verify that the FUSE module is loaded:

 # dmesg |grep fuse
 fuse init (API version 7.13)

3. Create the mount directory:

 # mkdir /bmsccontents

4. Mount the GlusterFS volume:

 # mount -t glusterfs <hostname_node_1>:/gv0 /bmsccontents
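
To verify that replication is working, you can create a file on the mount point of one node and check that it appears in the brick directory on both nodes (the file name is just an example):

 # touch /bmsccontents/testfile
 # ls /export/disk1/brick
 testfile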

FAQ

1. Gluster cannot start because old configuration data exists. Remove the stale data and restart glusterd:

 # cd /var/lib/glusterd/
 # rm -rf glusterd.info
 # rm -rf vols/*
 # cd /var/lib/glusterd/glustershd
 # rm -rf glustershd-server.vol
 # /etc/init.d/glusterd restart

2. Stop & delete volume

 # gluster volume stop gv0 force
 # gluster volume delete gv0

3. Expanding Volumes

1. On the first server in the cluster, probe the server to which you want to add the new brick using the following command:

 # gluster peer probe HOSTNAME

2. Add the brick using the following command (for a replicated volume such as gv0, bricks must be added in multiples of the replica count):

 # gluster volume add-brick gv0  <hostname_of_node>:/export/disk1/brick

3. Check the volume information using the following command:

 # gluster volume info

4. Re-balance the volume to ensure that all files are visible at the mount point.

 # gluster volume rebalance gv0 start
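
The rebalance runs in the background; you can check its progress with:

 # gluster volume rebalance gv0 status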

4. "Path or a prefix of it is already part of a volume"

1. Remove the Gluster extended attributes and metadata from the directory (and any parent directories) that was formerly part of a volume. Don't worry if setfattr reports that an attribute does not exist; as long as it is gone, you're in good shape.

 # setfattr -x trusted.glusterfs.volume-id $brick_path
 # setfattr -x trusted.gfid $brick_path
 # rm -rf $brick_path/.glusterfs

2. Finally, restart glusterd to ensure it's not "remembering" the old bricks.
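
 # /etc/init.d/glusterd restart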
