SUSE HA for SAP Scale-Up Performance Optimized Scenario: Installation and Configuration

1. Installing the SUSE Operating System

Download the SUSE Linux Enterprise Server for SAP Applications installation media from the official site, and during the OS installation select SUSE Linux Enterprise Server for SAP Applications as the operating system.

On the software selection screen, select the SAP HANA Server Base, SAP Application Server Base, and High Availability patterns as needed.

After the operating system is installed, the relevant SAP and HA packages can be seen:
# rpm -qa |grep pattern |grep sap
patterns-sles-sap_server-32bit-12-10.1.x86_64
patterns-sap-hana-12.3-6.11.1.x86_64
patterns-sles-sap_server-12-10.1.x86_64

# rpm -qa |grep -i sap
sap-locale-32bit-1.0-92.4.x86_64
yast2-sap-scp-1.0.3-11.2.noarch
patterns-sles-sap_server-32bit-12-10.1.x86_64
SLES_SAP-release-DVD-12.5-1.130.x86_64
patterns-sap-hana-12.3-6.11.1.x86_64
sap-locale-1.0-92.4.x86_64
yast2-saptune-1.3-3.4.2.noarch
sles4sap-white-papers-1.0-1.1.noarch
yast2-sap-ha-1.0.5-2.10.noarch
SLES_SAP-release-12.5-1.130.x86_64
saptune-2.0.1-3.3.1.x86_64
cyrus-sasl-gssapi-2.1.26-8.7.1.x86_64
patterns-sles-sap_server-12-10.1.x86_64
clamsap-0.99.25-1.8.x86_64
sap-netscape-link-0.1-1.2.noarch
saprouter-systemd-0.2-1.1.noarch
SAPHanaSR-0.153.2-3.8.2.noarch
sap-installation-wizard-3.1.81.20-3.15.1.x86_64
cyrus-sasl-gssapi-32bit-2.1.26-8.7.1.x86_64
yast2-sap-scp-prodlist-1.0.4-5.6.1.noarch
sapconf-4.1.14-40.56.3.noarch

# rpm -qa |grep -i cluster
yast2-cluster-3.4.1-9.8.noarch
cluster-md-kmp-default-4.12.14-120.1.x86_64
ha-cluster-bootstrap-0.5-3.6.2.noarch
cluster-glue-1.0.12+v1.git.1485976882.03d61cd-3.8.1.x86_64

# rpm -qa |grep -i ha

sle-ha-install-quick_en-12.4-1.3.noarch
nautilus-share-0.7.3-11.81.x86_64
hardlink-1.0-6.45.x86_64
yast2-hana-firewall-1.1.5-1.5.x86_64
libHalf11-2.1.0-2.14.x86_64
libenchant1-1.6.0-21.107.x86_64
perl-Tie-IxHash-1.23-3.19.noarch
patterns-sap-hana-12.3-6.11.1.x86_64
haveged-1.9.1-16.1.x86_64
libharfbuzz0-32bit-1.4.5-7.5.x86_64
libxcb-shape0-1.10-4.3.1.x86_64
HANA-Firewall-1.1.6-1.17.noarch
shared-mime-info-1.6-11.3.x86_64
libharfbuzz0-1.4.5-7.5.x86_64
patterns-ha-ha_sles-12-15.7.x86_64
yast2-sap-ha-1.0.5-2.10.noarch
gucharmap-3.18.2-3.4.x86_64
gucharmap-lang-3.18.2-3.4.noarch
perl-Crypt-SmbHash-0.12-156.12.x86_64
libthai-data-0.1.25-4.2.x86_64
sharutils-lang-4.11.1-14.64.x86_64
sharutils-4.11.1-14.64.x86_64
libthai0-32bit-0.1.25-4.2.x86_64
release-notes-ha-12.5.20191017-1.2.noarch
python-chardet-3.0.4-5.3.2.noarch
nautilus-share-lang-0.7.3-11.81.noarch
libthai0-0.1.25-4.2.x86_64
perl-Digest-SHA1-2.13-17.216.x86_64
ha-cluster-bootstrap-0.5-3.6.2.noarch
sle-ha-manuals_en-12.3-1.3.noarch
libgucharmap_2_90-7-3.18.2-3.4.x86_64
hawk2-2.1.0+git.1539075484.48179981-3.3.1.x86_64
yast2-metapackage-handler-3.1.4-3.3.noarch
libhavege1-1.9.1-16.1.x86_64
yast2-hardware-detection-3.1.8-1.39.x86_64
SAPHanaSR-0.153.2-3.8.2.noarch
libharfbuzz-icu0-1.4.5-7.5.x86_64
shadow-4.2.1-34.20.x86_64
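
The saptune package installed above can apply the recommended OS tuning for SAP HANA. A minimal sketch using saptune v2 (the solution name HANA is the one shipped with saptune; verify the exact commands against your saptune version):

# saptune daemon start
# saptune solution apply HANA
# saptune solution list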

2. Installing the HANA Database

Install the HANA database on the primary and secondary nodes respectively.

# ./hdbsetup

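As an alternative to the graphical hdbsetup, hdblcm can install HANA unattended. A minimal sketch, assuming SID HDB and instance number 00 as used throughout this article (the media path is an assumption, and in batch mode passwords must be supplied separately, for example via a configuration file):

# cd <media>/DATA_UNITS/HDB_SERVER_LINUX_X86_64
# ./hdblcm --action=install --batch --sid=HDB --number=00 --components=server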


3. Configuring HANA System Replication Between the Primary and Secondary Databases

1) Back up the primary database.
hdbadm@hanadb01:/usr/sap/HDB/HDB00> hdbsql -u SYSTEM -d SYSTEMDB -i 00 "BACKUP DATA FOR FULL SYSTEM USING FILE ('backup')"
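
To avoid typing the SYSTEM password interactively, the credentials can first be stored in the HANA secure user store (a sketch; the key name BACKUP and the SYSTEMDB SQL port 30013 for instance 00 are assumptions to adapt):

hdbadm@hanadb01:/usr/sap/HDB/HDB00> hdbuserstore SET BACKUP hanadb01:30013 SYSTEM <password>
hdbadm@hanadb01:/usr/sap/HDB/HDB00> hdbsql -U BACKUP "BACKUP DATA FOR FULL SYSTEM USING FILE ('backup')"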

2) Enable system replication on the primary node.
hdbnsutil -sr_enable --name=site1

nameserver is active, proceeding ...
successfully enabled system as system replication source site
done.

Check the replication configuration on the primary node.
hdbnsutil -sr_stateConfiguration --sapcontrol=1

SAPCONTROL-OK: <begin>
mode=primary
site id=1
site name=site1
SAPCONTROL-OK: <end>
done.

3) Register the secondary node.

Stop the secondary database.
hdbadm@hanadb02:/usr/sap/HDB/HDB00> HDB stop

In HANA 2.0, system replication runs encrypted, so the primary node's key files must be copied to the secondary node.
cd /usr/sap/<SID>/SYS/global/security/rsecssfs
rsync -va hanadb01:/usr/sap/<SID>/SYS/global/security/rsecssfs/data/SSFS_<SID>.DAT ./data/
rsync -va hanadb01:/usr/sap/<SID>/SYS/global/security/rsecssfs/key/SSFS_<SID>.KEY ./key/
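
After copying, it is worth confirming that the files on the secondary are identical to the primary's, for example:

md5sum data/SSFS_<SID>.DAT key/SSFS_<SID>.KEY
ssh hanadb01 "cd /usr/sap/<SID>/SYS/global/security/rsecssfs; md5sum data/SSFS_<SID>.DAT key/SSFS_<SID>.KEY"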

Edit the global.ini file (/hana/shared/<SID>/global/hdb/custom/config/global.ini) on both the primary and secondary hosts so that HANA uses a dedicated replication network segment for data replication.
[system_replication_hostname_resolution]
192.168.1.207 = hanadb01
192.168.1.205 = hanadb02

Register the secondary node.
hdbadm@hanadb02:/usr/sap/HDB/HDB00> hdbnsutil -sr_register --name=site2 --remoteHost=hanadb01 --remoteInstance=00 --replicationMode=sync --operationMode=delta_datashipping

adding site ...
collecting information ...
updating local ini files ...
done.

Start the secondary database.
hdbadm@hanadb02:/usr/sap/HDB/HDB00> HDB start

Check the system replication status.
hdbadm@hanadb02:/usr/sap/HDB/home> HDBSettings.sh systemReplicationStatus.py --sapcontrol=1
SAPCONTROL-OK: <begin>
site/2/REPLICATION_MODE=SYNC
site/2/SITE_NAME=site2
site/2/SOURCE_SITE_ID=1
site/2/PRIMARY_MASTERS=hanadb01
local_site_id=2
SAPCONTROL-OK: <end>

Check the replication state on the primary node.
hdbadm@hanadb01:/usr/sap/HDB/HDB00> hdbnsutil -sr_state

System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~

online: true

mode: primary
operation mode: primary
site id: 1
site name: site1

is source system: true
is secondary/consumer system: false
has secondaries/consumers attached: true
is a takeover active: false
is primary suspended: false

Host Mappings:
~~~~~~~~~~~~~~

hanadb01 -> [site2] hanadb02
hanadb01 -> [site1] hanadb01


Site Mappings:
~~~~~~~~~~~~~~
site1 (primary/primary)
    |---site2 (sync/delta_datashipping)

Tier of site1: 1
Tier of site2: 2

Replication mode of site1: primary
Replication mode of site2: sync

Operation mode of site1: primary
Operation mode of site2: delta_datashipping

Mapping: site1 -> site2

Hint based routing site: 
done.

Check the replication state on the secondary node.
hdbadm@hanadb02:/usr/sap/HDB/home> hdbnsutil -sr_state

System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~

online: true

mode: sync
operation mode: delta_datashipping
site id: 2
site name: site2

is source system: false
is secondary/consumer system: true
has secondaries/consumers attached: false
is a takeover active: false
is primary suspended: false
is timetravel enabled: false
replay mode: auto
active primary site: 1

primary masters: hanadb01

Host Mappings:
~~~~~~~~~~~~~~

hanadb02 -> [site2] hanadb02
hanadb02 -> [site1] hanadb01


Site Mappings:
~~~~~~~~~~~~~~
site1 (primary/primary)
    |---site2 (sync/delta_datashipping)

Tier of site1: 1
Tier of site2: 2

Replication mode of site1: primary
Replication mode of site2: sync

Operation mode of site1: primary
Operation mode of site2: delta_datashipping

Mapping: site1 -> site2

Hint based routing site: 
done.

Takeover test
Stop the primary database.
hdbadm@hanadb01:/usr/sap/HDB/HDB00> HDB stop

On the secondary node, take over as the primary database.
hdbadm@hanadb02:/usr/sap/HDB/home> hdbnsutil -sr_takeover
hdbadm@hanadb02:/usr/sap/HDB/home> hdbnsutil -sr_state

System Replication State
~~~~~~~~~~~~~~~~~~~~~~~~

online: true

mode: primary
operation mode: primary
site id: 2
site name: site2

is source system: true
is secondary/consumer system: false
has secondaries/consumers attached: false
is a takeover active: false
is primary suspended: false

Host Mappings:
~~~~~~~~~~~~~~

hanadb02 -> [site2] hanadb02


Site Mappings:
~~~~~~~~~~~~~~
site2 (primary/primary)

Tier of site2: 1

Replication mode of site2: primary

Operation mode of site2: primary


Hint based routing site: 
done.

Register the former primary node as the secondary database.
hdbadm@hanadb01:/usr/sap/HDB/HDB00> hdbnsutil -sr_register --name=site1 --remoteHost=hanadb02 --remoteInstance=00 --replicationMode=sync --operationMode=delta_datashipping

Start the database on the former primary node.
hdbadm@hanadb01:/usr/sap/HDB/HDB00> HDB start

Check the replication status.
hdbnsutil -sr_state

Repeat the same steps to switch the database on the original primary node back to primary and rebuild the original primary/secondary relationship.


4. Installing the SAP Host Agent

# SAPCAR -xvf SAPHOSTAGENT60_60-80004822.SAR
# ./saphostexec -install
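
To verify that the agent is installed and running, it can be queried through the standard host control binaries:

# /usr/sap/hostctrl/exe/saphostexec -status
# /usr/sap/hostctrl/exe/saphostctrl -function ListInstances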


Reference: Installing SAP Host Agent Manually


5. Configuring the HANA HA/DR Provider

This step is mandatory: it ensures the cluster is notified immediately if the secondary falls out of sync with the primary. SAP HANA calls this hook through the HA/DR provider interface at the point the secondary becomes out of sync, normally when the first pending commit has to be released, and calls it again once system replication recovers.

1) Edit the global.ini file (/hana/shared/<SID>/global/hdb/custom/config/global.ini) and add the following lines:

[ha_dr_provider_SAPHanaSR]
provider = SAPHanaSR
path = /usr/share/SAPHanaSR
execution_order = 1

[trace]
ha_dr_saphanasr = info

2) Edit the /etc/sudoers file to allow the <sid>adm user (with <sid> in lower case) to update the cluster attribute:
# SAPHanaSR-ScaleUp entries for writing srHook cluster attribute
<sid>adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_<sid>_site_srHook_*
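
After restarting the HANA instance, one way to confirm that the hook has been loaded is to search the nameserver trace for its messages (run as <sid>adm; cdtrace is the standard alias that changes into the instance trace directory):

hdbadm@hanadb01:/usr/sap/HDB/HDB00> cdtrace
hdbadm@hanadb01:/usr/sap/HDB/HDB00/hanadb01/trace> grep ha_dr_ nameserver_*.trc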


6. Configuring the Cluster

6.1. Configuring with the Graphical Interface

Note: When configuring through the graphical interface, a shared disk must be configured as the SBD fencing mechanism; otherwise, configure the cluster from the command line. In addition, the graphical setup also configures the primary/secondary relationship of the two databases, so system replication does not need to be set up beforehand.
# yast2

6.2. Configuring with the Command Line

1) Initialize the cluster on the primary node.

hanadb01:~ # ha-cluster-init

  Generating SSH key
  Configuring csync2
  Generating csync2 shared key (this may take a while)...done
  csync2 checking files...done
  
Configure Corosync:
  This will configure the cluster messaging layer.  You will need
  to specify a network address over which to communicate (default
  is em4's network, but you can use the network address of any
  active interface).

  Network address to bind to (e.g.: 192.168.1.0) [192.168.100.0]
  Multicast address (e.g.: 239.x.x.x) [239.205.185.119]
  Multicast port [5405]
  
Configure SBD:
  If you have shared storage, for example a SAN or iSCSI target,
  you can use it avoid split-brain scenarios by configuring SBD.
  This requires a 1 MB partition, accessible to all nodes in the
  cluster.  The device path must be persistent and consistent
  across all nodes in the cluster, so /dev/disk/by-id/* devices
  are a good choice.  Note that all data on the partition you
  specify here will be destroyed.

Do you wish to use SBD (y/n)? n
WARNING: Not configuring SBD - STONITH will be disabled.
  Hawk cluster interface is now running. To see cluster status, open:
    https://192.168.100.207:7630/
  Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
  Waiting for cluster........done
  Loading initial cluster configuration
  
Configure Administration IP Address:
  Optionally configure an administration virtual IP
  address. The purpose of this IP address is to
  provide a single IP that can be used to interact
  with the cluster, rather than using the IP address
  of any specific cluster node.

Do you wish to configure a virtual IP address (y/n)? n
  Done (log saved to /var/log/ha-cluster-bootstrap.log)

2) Join the secondary node to the cluster.
hanadb02:~ # ha-cluster-join -c hanadb01 -i eth3

  Retrieving SSH keys - This may prompt for root@hanadb01:
Password: 
  One new SSH key installed
  Configuring csync2...done
  Merging known_hosts
  Probing for new partitions...done
  Hawk cluster interface is now running. To see cluster status, open:
    https://192.168.100.205:7630/
  Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
  Waiting for cluster....done
  Reloading cluster configuration...done
  Done (log saved to /var/log/ha-cluster-bootstrap.log)

3) Check the status of the HA services and add a redundant communication ring to the cluster.
systemctl status pacemaker
yast2 cluster

Note: On SUSE 12 SP5, if one of the ring links is down when pacemaker starts, pacemaker fails to start and the following errors are written to the messages log (a corosync.conf sketch for a redundant ring follows the log):

2023-07-04T10:52:46.084460+08:00 hanadb02 corosync[42440]:   [TOTEM ] One of your ip addresses are now bound to localhost. Corosync would not work correctly.
2023-07-04T10:34:22.167023+08:00 hanadb02 corosync[47138]: Starting Corosync Cluster Engine (corosync): [FAILED]
2023-07-04T10:34:22.167434+08:00 hanadb02 systemd[1]: corosync.service: Control process exited, code=exited status=1
2023-07-04T10:34:22.168118+08:00 hanadb02 systemd[1]: Failed to start Corosync Cluster Engine.
2023-07-04T10:34:22.168403+08:00 hanadb02 systemd[1]: Dependency failed for Pacemaker High Availability Cluster Manager.
2023-07-04T10:34:22.168691+08:00 hanadb02 systemd[1]: pacemaker.service: Job pacemaker.service/start failed with result 'dependency'.
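
For reference, on SUSE 12 (corosync 2.x) a redundant ring corresponds roughly to the following excerpt of /etc/corosync/corosync.conf; this is a sketch, and using the replication network 192.168.1.0 as the second ring is an assumption:

totem {
        rrp_mode: passive
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.100.0
                mcastport: 5405
        }
        interface {
                ringnumber: 1
                bindnetaddr: 192.168.1.0
                mcastport: 5407
        }
}

The health of both rings can then be checked with:

# corosync-cfgtool -s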

4) Define the cluster bootstrap options and the resource and operation defaults.
# vi crm-bs.txt
property $id="cib-bootstrap-options" \
    stonith-enabled="true" \
    stonith-action="reboot" \
    stonith-timeout="150s"
rsc_defaults $id="rsc-options" \
    resource-stickiness="1000" \
    migration-threshold="5000"
op_defaults $id="op-options" \
    timeout="600"
# crm configure load update crm-bs.txt

5) Define IPMI as the fencing mechanism.
# vi ipmi.txt
primitive rsc_hanadb01_stonith_ipmi stonith:external/ipmi \
    params hostname=hanadb01 ipaddr=192.168.100.206 userid=root passwd=calvin interface=lanplus \
    op monitor interval=1800 timeout=30

primitive rsc_hanadb02_stonith_ipmi stonith:external/ipmi \
    params hostname=hanadb02 ipaddr=192.168.100.204 userid=root passwd=calvin interface=open \
    op monitor interval=1800 timeout=30

# crm configure load update ipmi.txt
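
Before relying on IPMI fencing, it can be tested out of band with ipmitool (using the addresses and credentials from ipmi.txt above) and, disruptively, through the cluster itself:

# ipmitool -I lanplus -H 192.168.100.206 -U root -P calvin power status
# crm node fence hanadb02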


6) Define the HANA topology resource.

# vi crm-saphanatop.txt

primitive rsc_SAPHanaTopology_HDB_HDB00 ocf:suse:SAPHanaTopology \
    op monitor interval="10" timeout="600" \
    op start interval="0" timeout="600" \
    op stop interval="0" timeout="300" \
    params SID="HDB" InstanceNumber="00"
clone cln_SAPHanaTopology_HDB_HDB00 rsc_SAPHanaTopology_HDB_HDB00 \
    meta clone-node-max="1" interleave="true"

# crm configure load update crm-saphanatop.txt

7) Define the HANA database resource.
# vi crm-saphana.txt

primitive rsc_SAPHana_HDB_HDB00 ocf:suse:SAPHana \
    op start interval="0" timeout="3600" \
    op stop interval="0" timeout="3600" \
    op promote interval="0" timeout="3600" \
    op monitor interval="60" role="Master" timeout="700" \
    op monitor interval="61" role="Slave" timeout="700" \
    params SID="HDB" InstanceNumber="00" PREFER_SITE_TAKEOVER="true" \
    DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"
ms msl_SAPHana_HDB_HDB00 rsc_SAPHana_HDB_HDB00 \
    meta clone-max="2" clone-node-max="1" interleave="true"

# crm configure load update crm-saphana.txt

8) Define the floating IP resource.
# vi crm-vip.txt

primitive rsc_ip_HDB_HDB00 ocf:heartbeat:IPaddr2 \
    op monitor interval="10s" timeout="20s" \
    params ip="192.168.100.203"

# crm configure load update crm-vip.txt

9) Define the colocation of the floating IP with the primary database, and the start order between the HANA topology and database resources.
# vi crm-cs.txt

colocation col_saphana_ip_HDB_HDB00 2000: rsc_ip_HDB_HDB00:Started \
    msl_SAPHana_HDB_HDB00:Master
order ord_SAPHana_HDB_HDB00 Optional: cln_SAPHanaTopology_HDB_HDB00 \
    msl_SAPHana_HDB_HDB00

# crm configure load update crm-cs.txt
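
After loading all the snippets, review the resulting configuration and the resource state:

# crm configure show
# crm status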


7. Database Switchover Tests

7.1 Switching the HANA Database with the HA Cluster

Perform the switchover on the primary node:
hanadb01:/hana/prop # crm resource move rsc_SAPHana_HDB_HDB00 force

INFO: Move constraint created for rsc_SAPHana_HDB_HDB00

hanadb01:/hana/prop # crm status

Stack: corosync
Current DC: hanadb01 (version 1.1.21+20190809.bf34b44fa-1.17-1.1.21+20190809.bf34b44fa) - partition with quorum
Last updated: Wed Jun 21 16:48:22 2023
Last change: Wed Jun 21 16:48:13 2023 by root via crm_resource on hanadb01

2 nodes configured
7 resources configured

Online: [ hanadb01 hanadb02 ]

Full list of resources:

 rsc_hanadb01_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 rsc_hanadb02_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 Clone Set: cln_SAPHanaTopology_HDB_HDB00 [rsc_SAPHanaTopology_HDB_HDB00]
     Started: [ hanadb01 hanadb02 ]
 Master/Slave Set: msl_SAPHana_HDB_HDB00 [rsc_SAPHana_HDB_HDB00]
     rsc_SAPHana_HDB_HDB00      (ocf::suse:SAPHana):    Stopping hanadb01
     Slaves: [ hanadb02 ]
 rsc_ip_HDB_HDB00       (ocf::heartbeat:IPaddr2):       Started hanadb02

hanadb01:/hana/prop # crm status

Stack: corosync
Current DC: hanadb01 (version 1.1.21+20190809.bf34b44fa-1.17-1.1.21+20190809.bf34b44fa) - partition with quorum
Last updated: Wed Jun 21 16:48:43 2023
Last change: Wed Jun 21 16:48:31 2023 by root via crm_attribute on hanadb02

2 nodes configured
7 resources configured

Online: [ hanadb01 hanadb02 ]

Full list of resources:

 rsc_hanadb01_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 rsc_hanadb02_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 Clone Set: cln_SAPHanaTopology_HDB_HDB00 [rsc_SAPHanaTopology_HDB_HDB00]
     Started: [ hanadb01 hanadb02 ]
 Master/Slave Set: msl_SAPHana_HDB_HDB00 [rsc_SAPHana_HDB_HDB00]
     rsc_SAPHana_HDB_HDB00      (ocf::suse:SAPHana):    Promoting hanadb02
     Stopped: [ hanadb01 ]
 rsc_ip_HDB_HDB00       (ocf::heartbeat:IPaddr2):       Started hanadb02

hanadb01:/hana/prop # crm status

Stack: corosync
Current DC: hanadb01 (version 1.1.21+20190809.bf34b44fa-1.17-1.1.21+20190809.bf34b44fa) - partition with quorum
Last updated: Wed Jun 21 16:50:19 2023
Last change: Wed Jun 21 16:49:20 2023 by root via crm_attribute on hanadb02

2 nodes configured
7 resources configured

Online: [ hanadb01 hanadb02 ]

Full list of resources:

 rsc_hanadb01_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 rsc_hanadb02_stonith_ipmi      (stonith:external/ipmi):        Started hanadb01
 Clone Set: cln_SAPHanaTopology_HDB_HDB00 [rsc_SAPHanaTopology_HDB_HDB00]
     Started: [ hanadb01 hanadb02 ]
 Master/Slave Set: msl_SAPHana_HDB_HDB00 [rsc_SAPHana_HDB_HDB00]
     Masters: [ hanadb02 ]
     Stopped: [ hanadb01 ]
 rsc_ip_HDB_HDB00       (ocf::heartbeat:IPaddr2):       Started hanadb02

On the new secondary node, re-establish replication with the new primary:
hdbnsutil -sr_register --name=site1 --remoteHost=hanadb02 --remoteInstance=00 --replicationMode=sync --operationMode=delta_datashipping

Clear the resource's move constraint; the cluster will then automatically start the database on the secondary node:
crm resource clear msl_SAPHana_HDB_HDB00

INFO: Removed migration constraints for msl_SAPHana_HDB_HDB00

Check the node roles:
# SAPHanaSR-showAttr --format=script | SAPHanaSR-filter --search='roles'

Fri Jun 30 16:03:02 2023; Hosts/hanadb01/roles=4:S:master1:master:worker:master
Fri Jun 30 16:03:02 2023; Hosts/hanadb02/roles=4:P:master1:master:worker:master

7.2. Switching the HANA Database with SAP Commands

Put the HANA database resource into maintenance mode:
crm resource maintenance msl_SAPHana_HDB_HDB00

Stop the HANA database on the primary node:
HDB stop

Take over the database on the secondary node:
hdbnsutil -sr_takeover

Re-establish the replication relationship on the former primary node:
hdbnsutil -sr_register --name=site1 --remoteHost=hanadb02 --remoteInstance=00 --replicationMode=sync --operationMode=delta_datashipping

Start the database on the former primary node:
HDB start

Have the cluster refresh the resource's state:
crm resource refresh msl_SAPHana_HDB_HDB00

Take the HANA database resource out of maintenance mode:
crm resource maintenance msl_SAPHana_HDB_HDB00 off
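
After the resource leaves maintenance mode, confirm that the cluster again reflects the real roles:

# crm status
# SAPHanaSR-showAttr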


8. Moving a Node Into and Out of Maintenance Mode

After a node enters maintenance mode, the HA cluster no longer starts or stops resources on that node automatically.

hanadb01:~ # crm node show

hanadb01(1084777679): member
        hana_ha1_vhost=hanadb01 hana_ha1_site=site1 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb02 lpa_ha1_lpt=10 hana_ha1_op_mode=delta_datashipping maintenance=off standby=off
hanadb02(1084777677): member
        hana_ha1_vhost=hanadb02 hana_ha1_site=site2 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb01 lpa_ha1_lpt=1688350881 hana_ha1_op_mode=delta_datashipping maintenance=off standby=off


hanadb01:~ # crm node maintenance hanadb01
hanadb01:~ # crm node show

hanadb01(1084777679): member
        hana_ha1_vhost=hanadb01 hana_ha1_site=site1 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb02 lpa_ha1_lpt=10 hana_ha1_op_mode=delta_datashipping maintenance=on standby=off
hanadb02(1084777677): member
        hana_ha1_vhost=hanadb02 hana_ha1_site=site2 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb01 lpa_ha1_lpt=1688350881 hana_ha1_op_mode=delta_datashipping maintenance=off standby=off


hanadb01:~ # crm node ready hanadb01
hanadb01:~ # crm node show

hanadb01(1084777679): member
        hana_ha1_vhost=hanadb01 hana_ha1_site=site1 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb02 lpa_ha1_lpt=10 hana_ha1_op_mode=delta_datashipping maintenance=off standby=off
hanadb02(1084777677): member
        hana_ha1_vhost=hanadb02 hana_ha1_site=site2 hana_ha1_srmode=sync hana_ha1_remoteHost=hanadb01 lpa_ha1_lpt=1688350881 hana_ha1_op_mode=delta_datashipping maintenance=off standby=off

9. Clearing Failed Resource States on the Secondary Node

# crm resource refresh rsc_SAPHana_HDB_HDB00 hanadb02

# crm resource cleanup rsc_SAPHana_HDB_HDB00 hanadb02
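
The failure count that makes such a cleanup necessary can also be inspected per node before clearing it, for example:

# crm resource failcount rsc_SAPHana_HDB_HDB00 show hanadb02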

Reference: SAP HANA System Replication Scale-Up Performance Optimized Scenario
