Configure Two Node Highly-Available Cluster Using KVM Fencing on RHEL7

https://www.ibm.com/developerworks/community/blogs/mhhaque/entry/configure_two_node_highly_available_cluster_using_kvm_fencing_on_rhel7?lang=en

This article walks through the steps required to build a highly available Apache (httpd) cluster on RHEL 7. In Red Hat Enterprise Linux 7, the cluster stack has moved to Pacemaker/Corosync, with a new command-line tool to manage the cluster (pcs, replacing commands such as ccs and clusvcadm in earlier releases).

 

Click the link below for details of the Red Hat High Availability Add-On underlying technology changes:

https://www.ibm.com/developerworks/community/blogs/mhhaque/entry/red_hat_high_availability_add_on_underlying_technology_changes1?lang=en


Our Cluster configuration scenario:

The cluster will be a two-node cluster comprising the nodes rhel7dns01 and rhel7dns02, with iSCSI shared storage presented from the node munshi as a 2GB LUN.

In our scenario, the munshi machine is the KVM host, and rhel7dns01 & rhel7dns02 are the VM guests.

 

To verify some basic configuration on both nodes:

[root@rhel7dns02 nodes]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

192.168.122.100 rhel7dns01.example.com rhel7dns01

192.168.122.200 rhel7dns02.example.com rhel7dns02

192.168.122.1 munshi.example.com munshi        # KVM HOST AND iSCSI TARGET

 

Some cross-checks before starting the cluster configuration:

[root@rhel7dns02 nodes]# ifconfig eth2

eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

inet 192.168.122.200 netmask 255.255.255.0 broadcast 192.168.122.255

inet6 fe80::5054:ff:fe09:dc6a prefixlen 64 scopeid 0x20<link>

ether 52:54:00:09:dc:6a txqueuelen 1000 (Ethernet)

RX packets 26758 bytes 10359461 (9.8 MiB)

RX errors 0 dropped 0 overruns 0 frame 0

TX packets 16562 bytes 1892743 (1.8 MiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

 

[root@rhel7dns01 ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

192.168.122.100 rhel7dns01.example.com rhel7dns01

192.168.122.200 rhel7dns02.example.com rhel7dns02

192.168.122.1 munshi.example.com munshi

 

[root@rhel7dns01 ~]# ifconfig eth0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500

inet 192.168.122.100 netmask 255.255.255.0 broadcast 192.168.122.255

inet6 fe80::5054:ff:fe23:5378 prefixlen 64 scopeid 0x20<link>

ether 52:54:00:23:53:78 txqueuelen 1000 (Ethernet)

RX packets 27976 bytes 11457960 (10.9 MiB)

RX errors 0 dropped 0 overruns 0 frame 0

TX packets 20403 bytes 13574743 (12.9 MiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

 

 

To confirm the firewall is running under firewalld control on both nodes, and then allow the high-availability service through it:

[root@rhel7dns01 nodes]# firewall-cmd --state

running

[root@rhel7dns02 nodes]# firewall-cmd --state

running

 

[root@rhel7dns01 ~]# firewall-cmd --permanent --add-service=high-availability

[root@rhel7dns01 ~]# firewall-cmd --reload

 

[root@rhel7dns02 nodes]# firewall-cmd --permanent --add-service=high-availability

[root@rhel7dns02 nodes]# firewall-cmd --reload
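Optionally, confirm on each node that the high-availability service is now allowed in the active zone (a quick check; the full service list will vary with your setup):

[root@rhel7dns01 ~]# firewall-cmd --list-services

[root@rhel7dns02 nodes]# firewall-cmd --list-services

The output on each node should include high-availability.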

 

To install the appropriate packages for cluster setup (run on both nodes):

[root@rhel7dns01 ~]# yum -y install pacemaker corosync fence-agents-all fence-agents-virsh fence-virt pcs dlm lvm2-cluster gfs2-utils iscsi-initiator-utils httpd wget

 

To verify the installed packages on both nodes:

[root@rhel7dns02 ~]# rpm -qa|egrep "pacemaker|corosync|fence-agents-all|fence-agents-virsh|fence-virt|pcs|dlm|lvm2-cluster|gfs2-utils"

pacemaker-libs-1.1.12-22.el7.x86_64

fence-agents-all-4.0.11-10.el7.x86_64

gfs2-utils-3.1.7-6.el7.x86_64

corosynclib-2.3.4-4.el7.x86_64

pacemaker-cli-1.1.12-22.el7.x86_64

pacemaker-1.1.12-22.el7.x86_64

fence-virt-0.3.2-1.el7.x86_64

corosync-2.3.4-4.el7.x86_64

dlm-lib-4.0.2-5.el7.x86_64

lvm2-cluster-2.02.115-3.el7.x86_64

dlm-4.0.2-5.el7.x86_64

fence-agents-virsh-4.0.11-10.el7.x86_64

pacemaker-cluster-libs-1.1.12-22.el7.x86_64

pcs-0.9.137-13.el7.x86_64

 

How to configure iSCSI Target on RHEL7 or RHEL6:

Please see the links below for that configuration.

RHEL7: https://www.ibm.com/developerworks/community/blogs/mhhaque/entry/configure_iscsi_target_initiator_on_rhel7_or_powerlinux?lang=en

RHEL6: https://www.ibm.com/developerworks/community/blogs/mhhaque/entry/configuring_iscsi_target_on_rhel_6_4_powerlinux?lang=en

 

To configure the iSCSI initiator on both nodes for shared storage:

[root@rhel7dns01 ~]# cat /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.2008-09.com.example:server.target5

 

[root@rhel7dns01 ~]# systemctl start iscsi.service

[root@rhel7dns01 ~]# iscsiadm -m discovery -t st -p munshi.example.com

192.168.122.1:3260,1 iqn.2008-09.com.example:server.target5

 

[root@rhel7dns01 ~]# iscsiadm -m node -T iqn.2008-09.com.example:server.target5 -l

Logging in to [iface: default, target: iqn.2008-09.com.example:server.target5, portal: 192.168.122.1,3260] (multiple)

Login to [iface: default, target: iqn.2008-09.com.example:server.target5, portal: 192.168.122.1,3260] successful.
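So that the shared LUN comes back automatically after a reboot, you may also want to enable the iSCSI service and make the node login automatic on both nodes (a sketch; on many RHEL 7 installs node.startup already defaults to automatic):

[root@rhel7dns01 ~]# systemctl enable iscsi.service

[root@rhel7dns01 ~]# iscsiadm -m node -T iqn.2008-09.com.example:server.target5 -p 192.168.122.1 --op update -n node.startup -v automatic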

 

To Verify the Shared Disk on Both Nodes:

[root@rhel7dns01 ~]# fdisk -l

:::::::::::::::::::::::::::::::::::: CUT SOME OUTPUT ::::::::::::::::::::::::::::::::::::

Disk /dev/sdb: 1073 MB, 1073741824 bytes, 2097152 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

 

Disk /dev/sda: 2147 MB, 2147483648 bytes, 4194304 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

 

Pre-activities for the cluster configuration:

To change or set a password for the hacluster user on both nodes:

[root@rhel7dns01 ~]# grep hacluster /etc/passwd

hacluster:x:189:189:cluster user:/home/hacluster:/sbin/nologin

 

[root@rhel7dns01 ~]# echo password|passwd --stdin hacluster

Changing password for user hacluster.

passwd: all authentication tokens updated successfully.

 

[root@rhel7dns02 ~]# grep hacluster /etc/passwd

hacluster:x:189:189:cluster user:/home/hacluster:/sbin/nologin

 

[root@rhel7dns02 ~]# echo password|passwd --stdin hacluster

Changing password for user hacluster.

passwd: all authentication tokens updated successfully.

 

To start pcsd.service and enable it to start at boot:

[root@rhel7dns01 ~]# systemctl enable pcsd.service

ln -s '/usr/lib/systemd/system/pcsd.service' '/etc/systemd/system/multi-user.target.wants/pcsd.service'

 

[root@rhel7dns01 ~]# systemctl start pcsd.service

[root@rhel7dns01 ~]# systemctl is-active pcsd.service

active

 

[root@rhel7dns02 ~]# systemctl enable pcsd.service

ln -s '/usr/lib/systemd/system/pcsd.service' '/etc/systemd/system/multi-user.target.wants/pcsd.service'

 

[root@rhel7dns02 ~]# systemctl start pcsd.service

[root@rhel7dns02 ~]# systemctl is-active pcsd.service

active

 

To authenticate pcs to pcsd on the specified nodes:

Syntax: pcs cluster auth [node] [...] [-u username] [-p password] [--local] [--force]

 

[root@rhel7dns01 ~]# pcs cluster auth rhel7dns01.example.com rhel7dns02.example.com

Username: hacluster

Password:

rhel7dns01.example.com: Authorized

rhel7dns02.example.com: Authorized
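The same authentication can also be done non-interactively with the -u and -p options shown in the syntax above, for example (substitute your own hacluster password):

[root@rhel7dns01 ~]# pcs cluster auth rhel7dns01.example.com rhel7dns02.example.com -u hacluster -p password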

 

To check, some files will be populated under the /var/lib/pcsd directory:

[root@rhel7dns01 ~]# ls -la /var/lib/pcsd/

total 24

drwx------. 2 root root 94 Apr 9 06:01 .

drwxr-xr-x. 55 root root 4096 Apr 9 05:43 ..

-rwx------. 1 root root 60 Apr 9 05:54 pcsd.cookiesecret

-rwx------. 1 root root 1224 Apr 9 05:54 pcsd.crt

-rwx------. 1 root root 1679 Apr 9 05:54 pcsd.key

-rw-r--r--. 1 root root 362 Apr 9 06:02 pcs_users.conf

-rw-------. 1 root root 132 Apr 9 06:02 tokens

[root@rhel7dns01 ~]# date

Thu Apr 9 06:03:53 EDT 2015

 

To Start the Cluster Configuration:

To create and configure the two-node cluster:

Syntax: pcs cluster setup [--start] [--local] [--enable] --name <cluster name> <node1[,node1-altaddr]> [node2[,node2-altaddr]] [...]
        [--transport <udpu|udp>] [--rrpmode active|passive]
        [--addr0 <addr/net> [[[--mcast0 <address>] [--mcastport0 <port>] [--ttl0 <ttl>]] | [--broadcast0]]
         [--addr1 <addr/net> [[[--mcast1 <address>] [--mcastport1 <port>] [--ttl1 <ttl>]] | [--broadcast1]]]]
        [--wait_for_all=<0|1>] [--auto_tie_breaker=<0|1>] [--last_man_standing=<0|1> [--last_man_standing_window=<time in ms>]]
        [--token <timeout>] [--join <timeout>] [--consensus <timeout>] [--miss_count_const <count>] [--fail_recv_const <failures>]

 

[root@rhel7dns01 ~]# pcs cluster setup --start --name webcluster01 rhel7dns01.example.com rhel7dns02.example.com

Shutting down pacemaker/corosync services...

Redirecting to /bin/systemctl stop pacemaker.service

Redirecting to /bin/systemctl stop corosync.service

Killing any remaining services...

Removing all cluster configuration files...

rhel7dns01.example.com: Succeeded

rhel7dns01.example.com: Starting Cluster...

rhel7dns02.example.com: Succeeded

rhel7dns02.example.com: Starting Cluster...
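At this point pcs has generated /etc/corosync/corosync.conf on both nodes. It is worth a quick look to confirm both nodes appear in the nodelist and that two-node quorum handling is enabled; the excerpt below is only an illustrative sketch of the typical layout, not the exact file:

[root@rhel7dns01 ~]# cat /etc/corosync/corosync.conf
totem {
    version: 2
    cluster_name: webcluster01
    transport: udpu
}

nodelist {
    node {
        ring0_addr: rhel7dns01.example.com
        nodeid: 1
    }

    node {
        ring0_addr: rhel7dns02.example.com
        nodeid: 2
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}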

 

To enable the cluster service to start automatically at boot:

Syntax: pcs cluster enable [--all] [node] [...]

 

[root@rhel7dns01 ~]# pcs cluster enable --all

rhel7dns01.example.com: Cluster Enabled

rhel7dns02.example.com: Cluster Enabled

 

[root@rhel7dns01 ~]# pcs cluster status

Cluster Status:

Last updated: Thu Apr 9 06:21:58 2015

Last change: Thu Apr 9 06:18:53 2015 via crmd on rhel7dns01.example.com

Stack: corosync

Current DC: rhel7dns01.example.com (2) - partition with quorum

Version: 1.1.10-29.el7-368c726

2 Nodes configured

0 Resources configured

 

PCSD Status:

rhel7dns01.example.com: Online

rhel7dns02.example.com: Online

 

Fencing Devices Configuration:

To configure the fencing device for this cluster:

We can get the detailed configuration of a particular fence agent using the command below:

 

1. fence_virsh:

[root@rhel7dns02 /]# pcs stonith describe fence_virsh

Stonith options for: fence_virsh

ipport: TCP/UDP port to use for connection with device

ipaddr (required): IP Address or Hostname

:::::::::::::::::::::::::::::::::::: CUT SOME OUTPUT ::::::::::::::::::::::::::::::::::::

pcmk_host_check: How to determine which machines are controlled by the device.

 

Here we are going to use the fence_virsh agent. First, verify the communication between the fence agent & the KVM host:

[root@rhel7dns01 ~]# fence_virsh -a 192.168.122.1 -l root -p Mhhaque0987 -n rhel7 -o status

Status: ON

 

To Configure fence_virsh:

Syntax: pcs stonith create <stonith id> <stonith device type> [stonith device options]

 

[root@rhel7dns01 ~]# pcs stonith create fence_dns01_xvm fence_virsh ipaddr="192.168.122.1" login="root" passwd="Mhhaque0987" port="rhel7" pcmk_host_list="rhel7dns01.example.com"

 

[root@rhel7dns01 ~]# pcs stonith create fence_dns02_xvm fence_virsh ipaddr="192.168.122.1" login="root" passwd="Mhhaque0987" port="rhel7-2" pcmk_host_list="rhel7dns02.example.com"
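To confirm both fencing devices are now part of the cluster configuration (a quick check; both fence_dns01_xvm and fence_dns02_xvm should be listed):

[root@rhel7dns01 ~]# pcs stonith show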

 

2. fence_xvm:

Some additional package installation & configuration is required for fence_xvm fencing. The link below will help with that:

https://www.ibm.com/developerworks/community/blogs/mhhaque/entry/how_to_configure_red_hat_cluster_with_fencing_of_two_kvm_guests_running_on_two_ibm_powerkvm_hosts?lang=en

 

After that, we need to verify & configure fence_xvm fencing for our cluster. Run the command below on the KVM host:

[root@munshi ~]# fence_xvm -o list

rhel7 eb40d852-e371-4dd5-a098-14fe701da2cd on

rhel7-2 650285dc-08cb-4fee-b84b-0bdcacf68091 on

:::::::::::::::::::::::::::::::::::: CUT SOME OUTPUT ::::::::::::::::::::::::::::::::::::

 

Run the command below on a VM guest machine:

[root@rhel7dns01 ~]# fence_xvm -o list

Id Name State

----------------------------------------------------

5 rhel7 running

6 rhel7-2 running

:::::::::::::::::::::::::::::::::::: CUT SOME OUTPUT ::::::::::::::::::::::::::::::::::::

rhel6.4 shut off

 

To verify the communication between fence agent & KVM host:

[root@rhel7dns01 ~]# fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H rhel7-2 -o status

Status: ON

[root@rhel7dns01 ~]# fence_xvm -a 225.0.0.12 -k /etc/cluster/fence_xvm.key -H rhel7 -o status

 

To configure fence_xvm:

[root@rhel7dns01 ~]# pcs stonith create fence_pcmk1_xvm fence_xvm port="rhel7" pcmk_host_list="rhel7dns01.example.com"

[root@rhel7dns01 ~]# pcs stonith create fence_pcmk2_xvm fence_xvm port="rhel7-2" pcmk_host_list="rhel7dns02.example.com"
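Whichever fencing approach you use, it is worth testing it once from the cluster itself before adding resources. Note that the command below (a sketch) will actually power-cycle the target node, so run it only when that is acceptable; because the cluster services are enabled at boot, the fenced node should rejoin automatically:

[root@rhel7dns01 ~]# pcs stonith fence rhel7dns02.example.com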

 

Cluster Resources & Resource Group:

1. FileSystem:

We are going to create a filesystem on the shared disk:

[root@rhel7dns02 /]# fdisk /dev/disk/by-id/wwn-0x60000000000000000e00000000010001

Welcome to fdisk (util-linux 2.23.2).

 

Changes will remain in memory only, until you decide to write them.

Be careful before using the write command.

 

Device does not contain a recognized partition table

Building a new DOS disklabel with disk identifier 0x7a795bc4.

 

Command (m for help): n

Partition type:

p primary (0 primary, 0 extended, 4 free)

e extended

Select (default p): p

:::::::::::::::::::::::::::::::::::: CUT SOME OUTPUT ::::::::::::::::::::::::::::::::::::

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

 

On the other node, run partprobe so that the new partition is visible without the need to reboot:

[root@rhel7dns01 ~]# partprobe

 

To create the filesystem:

[root@rhel7dns02 /]# ls -l /dev/disk/by-id/

:::::::::::::::::::::::::::::::::::: CUT SOME OUTPUT ::::::::::::::::::::::::::::::::::::

lrwxrwxrwx. 1 root root 9 Apr 9 07:35 wwn-0x60000000000000000e00000000010001 -> ../../sda

lrwxrwxrwx. 1 root root 10 Apr 9 07:35 wwn-0x60000000000000000e00000000010001-part1 -> ../../sda1

lrwxrwxrwx. 1 root root 9 Apr 10 05:15 /dev/disk/by-id/wwn-0x60000000000000000e00000000010002 -> ../../sdb

 

[root@rhel7dns02 /]# mkfs.ext4 /dev/disk/by-id/wwn-0x60000000000000000e00000000010001-part1

mke2fs 1.42.9 (28-Dec-2013)

:::::::::::::::::::::::::::::::::::: CUT SOME OUTPUT ::::::::::::::::::::::::::::::::::::

Allocating group tables: done

Writing inode tables: done

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done

 

Temporarily mount the new filesystem on one node and populate the /var/www directory for testing, remembering to unmount it once done:

 

Mount the filesystem and create the subdirectories required by the Apache service:

[root@rhel7dns02 /]# ls /var/www/

cgi-bin html error

 

[root@rhel7dns02 /]# mount /dev/disk/by-id/wwn-0x60000000000000000e00000000010001-part1 /var/www/

 

[root@rhel7dns02 /]# mkdir /var/www/{cgi-bin,html,error}

 

[root@rhel7dns02 /]# ls /var/www/

cgi-bin error html lost+found

 

Now we are going to create a cluster filesystem resource, wwwfs_rso, in our existing cluster, placing it in a new resource group named httpdgrp which will be used to group the resources together as one unit:

 

[root@rhel7dns02 /]# pcs resource create wwwfs_rso Filesystem device="/dev/disk/by-id/wwn-0x60000000000000000e00000000010001-part1" directory="/var/www" fstype="ext4" --group httpdgrp

 

[root@rhel7dns02 /]# pcs resource show

Resource Group: httpdgrp

wwwfs_rso (ocf::heartbeat:Filesystem): Started

 

[root@rhel7dns01 ~]# pcs resource show wwwfs_rso

Resource: wwwfs_rso (class=ocf provider=heartbeat type=Filesystem)

Attributes: device=/dev/disk/by-id/wwn-0x60000000000000000e00000000010001-part1 directory=/var/www fstype=ext4

Operations: start interval=0s timeout=60 (wwwfs_rso-start-timeout-60)

stop interval=0s timeout=60 (wwwfs_rso-stop-timeout-60)

monitor interval=20 timeout=40 (wwwfs_rso-monitor-interval-20)
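On whichever node the group is currently active, the cluster should now have the shared filesystem mounted (a quick check; usage figures will vary):

[root@rhel7dns01 ~]# df -h /var/www

[root@rhel7dns01 ~]# mount | grep /var/www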

 

Now we are going to configure a cluster virtual IP resource, vip_rso, in our existing cluster, adding it to the existing resource group httpdgrp.

 

2. Virtual IP or Service IP Address:

Add an IPaddr2 address resource; this will be the floating virtual IP address that fails over between the nodes.

 

[root@rhel7dns02 /]# pcs resource create vip_rso IPaddr2 ip=192.168.122.111 cidr_netmask=24 --group httpdgrp

 

[root@rhel7dns01 ~]# pcs resource show vip_rso

Resource: vip_rso (class=ocf provider=heartbeat type=IPaddr2)

Attributes: ip=192.168.122.111 cidr_netmask=24

Operations: start interval=0s timeout=20s (vip_rso-start-timeout-20s)

stop interval=0s timeout=20s (vip_rso-stop-timeout-20s)

monitor interval=10s timeout=20s (vip_rso-monitor-interval-10s)
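On the node where the group is active, the virtual IP should now be plumbed as a secondary address on the cluster interface (a quick check; eth0 is the interface used on rhel7dns01 in this scenario):

[root@rhel7dns01 ~]# ip addr show eth0 | grep 192.168.122.111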

 

3. Adding Application (Web Server)

The Apache cluster health check uses the Apache server-status handler, so add the following to the end of /etc/httpd/conf/httpd.conf on both nodes (an httpd 2.4 equivalent is sketched after the block):

[root@rhel7dns01 ~]# tail /etc/httpd/conf/httpd.conf

# Supplemental configuration

#

# Load config files in the "/etc/httpd/conf.d" directory, if any.

IncludeOptional conf.d/*.conf

<Location /server-status>

SetHandler server-status

Order deny,allow

Deny from all

Allow from 127.0.0.1

</Location>
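The Order/Deny/Allow directives above rely on mod_access_compat, which RHEL 7's httpd 2.4 still loads by default. If you prefer native 2.4 syntax, an equivalent block would be (a sketch):

<Location /server-status>
    SetHandler server-status
    Require local
</Location>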

 

The cluster will monitor that the Apache service is up & running by polling this server-status URL.

[root@rhel7dns02 /]# pcs resource create httpd_rso apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group httpdgrp

[root@rhel7dns02 /]# pcs resource show

Resource Group: httpdgrp

wwwfs_rso (ocf::heartbeat:Filesystem): Started

vip_rso (ocf::heartbeat:IPaddr2): Started

httpd_rso (ocf::heartbeat:apache): Started

 

Open the firewall on both nodes to allow HTTP access from the client:

[root@rhel7dns01 ~]# firewall-cmd --add-service=http --permanent

[root@rhel7dns01 ~]# firewall-cmd --reload
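For a quick end-to-end check, place a test page in the shared document root on the active node and fetch it from a client via the virtual IP (a sketch; here the KVM host munshi acts as the client, and wget was installed earlier):

[root@rhel7dns01 ~]# echo "webcluster01 test page" > /var/www/html/index.html

[root@munshi ~]# wget -qO- http://192.168.122.111/
webcluster01 test page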

 

HA Cluster Checking & Fail-over Testing:

[root@rhel7dns02 /]# pcs status

Cluster name: webcluster01

Last updated: Fri Apr 10 09:02:26 2015

Last change: Fri Apr 10 09:01:26 2015 via cibadmin on rhel7dns01.example.com

Stack: corosync

Current DC: rhel7dns01.example.com (1) - partition with quorum

Version: 1.1.12-a14efad

2 Nodes configured

5 Resources configured

 

Online: [ rhel7dns01.example.com rhel7dns02.example.com ]

Full list of resources:

 

Resource Group: httpdgrp

wwwfs_rso (ocf::heartbeat:Filesystem): Started rhel7dns01.example.com

vip_rso (ocf::heartbeat:IPaddr2): Started rhel7dns01.example.com

httpd_rso (ocf::heartbeat:apache): Started rhel7dns01.example.com

fence_dns01_xvm (stonith:fence_virsh): Started rhel7dns02.example.com

fence_dns02_xvm (stonith:fence_virsh): Started rhel7dns02.example.com

 

PCSD Status:

rhel7dns01.example.com: Online

rhel7dns02.example.com: Online

 

Daemon Status:

corosync: active/enabled

pacemaker: active/enabled

pcsd: active/enabled

 

To move the resource group to the other node:

[root@rhel7dns01 ~]# pcs resource move httpdgrp rhel7dns02.example.com

[root@rhel7dns01 ~]# pcs status

Cluster name: webcluster01

Last updated: Fri Apr 10 09:04:52 2015

Last change: Fri Apr 10 09:04:51 2015 via cibadmin on rhel7dns01.example.com

Stack: corosync

Current DC: rhel7dns01.example.com (1) - partition with quorum

Version: 1.1.12-a14efad

2 Nodes configured

5 Resources configured

 

Online: [ rhel7dns01.example.com rhel7dns02.example.com ]

Full list of resources:

 

Resource Group: httpdgrp

wwwfs_rso (ocf::heartbeat:Filesystem): Started rhel7dns01.example.com

vip_rso (ocf::heartbeat:IPaddr2): Started rhel7dns01.example.com

httpd_rso (ocf::heartbeat:apache): Started rhel7dns01.example.com

fence_dns01_xvm (stonith:fence_virsh): Stopped

fence_dns02_xvm (stonith:fence_virsh): Stopped

 

PCSD Status:

rhel7dns01.example.com: Online

rhel7dns02.example.com: Online

 

Daemon Status:

corosync: active/enabled

pacemaker: active/enabled

pcsd: active/enabled

 

[root@rhel7dns01 ~]# pcs status

Cluster name: webcluster01

Last updated: Fri Apr 10 09:04:55 2015

Last change: Fri Apr 10 09:04:51 2015 via cibadmin on rhel7dns01.example.com

Stack: corosync

Current DC: rhel7dns01.example.com (1) - partition with quorum

Version: 1.1.12-a14efad

2 Nodes configured

5 Resources configured

 

Online: [ rhel7dns01.example.com rhel7dns02.example.com ]

Full list of resources:

 

Resource Group: httpdgrp

wwwfs_rso (ocf::heartbeat:Filesystem): Started rhel7dns02.example.com

vip_rso (ocf::heartbeat:IPaddr2): Started rhel7dns02.example.com

httpd_rso (ocf::heartbeat:apache): Started rhel7dns02.example.com

fence_dns01_xvm (stonith:fence_virsh): Started rhel7dns01.example.com

fence_dns02_xvm (stonith:fence_virsh): Started rhel7dns01.example.com

 

PCSD Status:

rhel7dns01.example.com: Online

rhel7dns02.example.com: Online

 

Daemon Status:

corosync: active/enabled

pacemaker: active/enabled

pcsd: active/enabled
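Note that pcs resource move works by adding a location constraint that pins the group to the target node. Once the fail-over test is complete, you may want to remove that constraint so the cluster can again place the group on either node (a sketch; check the constraints first):

[root@rhel7dns01 ~]# pcs constraint

[root@rhel7dns01 ~]# pcs resource clear httpdgrp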

 

We hope this document helps you configure a two-node cluster for your application on KVM or PowerKVM. Please feel free to contact us with any further questions.

Questioning is the best way to learn, so you are always welcome to leave your comments.

