3. Notes on a local OushuDB installation — Installing OushuDB

This document walks through installing and configuring an OushuDB cluster in a multi-node environment, covering editing the configuration files, installing HAWQ, tuning system parameters, creating the directory layout, setting permissions, configuring the HDFS connection, and initializing the cluster, so that the OushuDB and Magma services run correctly.

1. First, on oushum1, edit /usr/local/hawq/etc/slaves and add the hostnames of all OushuDB segment nodes. In this installation, oushus1 and oushus2 should be written into slaves, so its contents are:

oushus1
oushus2
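For example, the file can be written in one step with a here-document (a minimal sketch, assuming the default install path /usr/local/hawq):

cat > /usr/local/hawq/etc/slaves <<EOF
oushus1
oushus2
EOF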

2. Install hawq on the other nodes:

hawq ssh -h oushum2 -e "yum install -y hawq"
hawq ssh -f slaves -e "yum install -y hawq"

3. On the oushum1 node, add the following to the configuration file /etc/sysctl.conf:

kernel.shmmax = 1000000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 200000
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 10000 65535
net.core.netdev_max_backlog = 200000
net.netfilter.nf_conntrack_max = 524288
fs.nr_open = 3000000
kernel.threads-max = 798720
kernel.pid_max = 798720
# increase network
net.core.rmem_max=2097152
net.core.wmem_max=2097152
net.core.somaxconn=4096

4. Copy /etc/sysctl.conf from oushum1 to all nodes:

hawq scp -r -f hostfile /etc/sysctl.conf =:/etc/

5. On oushum1, use "hawq ssh" to run the following command so that the system settings in /etc/sysctl.conf take effect on all nodes:

hawq ssh -f hostfile -e "sysctl -p"
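As an optional spot check (a sketch), confirm that one of the new values is active on every node, for example net.core.somaxconn:

hawq ssh -f hostfile -e "sysctl net.core.somaxconn"   # should print net.core.somaxconn = 4096 on each host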

6. On oushum1, create the file /etc/security/limits.d/gpadmin.conf with the following contents:

* soft nofile 1048576
* hard nofile 1048576
* soft nproc 131072
* hard nproc 131072

7. Copy /etc/security/limits.d/gpadmin.conf from oushum1 to all nodes:

hawq scp -r -f hostfile /etc/security/limits.d/gpadmin.conf =:/etc/security/limits.d
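As an optional check (a sketch, assuming the gpadmin user already exists on every node), open a fresh login session as gpadmin and confirm the new file-descriptor limit is applied:

hawq ssh -f hostfile -e "su - gpadmin -c 'ulimit -n'"   # should print 1048576 on each host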

8. On oushum1, create /hawq/default_filespace on Hadoop and grant gpadmin ownership:

sudo -u hdfs hdfs dfs -mkdir -p /hawq/default_filespace
sudo -u hdfs hdfs dfs -chown -R gpadmin /hawq
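Optionally verify the directory and its ownership (a sketch):

sudo -u hdfs hdfs dfs -ls /hawq   # default_filespace should be listed with owner gpadmin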

9. On oushum1, create mhostfile, which records the hostnames of the hawq master and standby master, similar to hostfile (a combined example for creating mhostfile and shostfile follows step 10):

touch mhostfile

mhostfile contents:

oushum1
oushum2

10. On oushum1, create shostfile, which records the hostnames of all hawq segments, similar to hostfile:

touch shostfile

shostfile contents:

oushus1
oushus2
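As a combined sketch for steps 9 and 10, both files can be filled in with here-documents (run from the same directory as hostfile):

cat > mhostfile <<EOF
oushum1
oushum2
EOF

cat > shostfile <<EOF
oushus1
oushus2
EOF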

11. On oushum1, use "hawq ssh" to create the master metadata directory and temporary directories on the master and standby nodes, and grant gpadmin ownership:

# Create the master metadata directory
hawq ssh -f mhostfile -e 'mkdir -p /data1/hawq/masterdd'

# Create the temporary directories
hawq ssh -f mhostfile -e 'mkdir -p /data1/hawq/tmp'
hawq ssh -f mhostfile -e 'mkdir -p /data2/hawq/tmp'

hawq ssh -f mhostfile -e 'chown -R gpadmin:gpadmin /data1/hawq'
hawq ssh -f mhostfile -e 'chown -R gpadmin:gpadmin /data2/hawq'

12. On oushum1, use "hawq ssh" to create the segment metadata directory and temporary directories on all segments, and grant gpadmin ownership:

hawq ssh -f shostfile -e 'mkdir -p /data1/hawq/segmentdd'

# Create the temporary directories
hawq ssh -f shostfile -e 'mkdir -p /data1/hawq/tmp'
hawq ssh -f shostfile -e 'mkdir -p /data2/hawq/tmp'

hawq ssh -f shostfile -e 'chown -R gpadmin:gpadmin /data1/hawq'
hawq ssh -f shostfile -e 'chown -R gpadmin:gpadmin /data2/hawq'
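As an optional check (a sketch) that the directories created in steps 11 and 12 exist and are owned by gpadmin:

hawq ssh -f mhostfile -e 'ls -ld /data1/hawq /data2/hawq'
hawq ssh -f shostfile -e 'ls -ld /data1/hawq /data2/hawq'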

13. On oushum1, switch to the gpadmin user (hawq-related configuration files must be edited with this user's permissions) and modify /usr/local/hawq/etc/hdfs-client.xml (as with the hdfs configuration, uncomment the HA-related section first):

<configuration>
 <property>
    <name>dfs.nameservices</name>
    <value>oushu</value>
 </property>
 <property>
    <name>dfs.ha.namenodes.oushu</name>
    <value>nn1,nn2</value>
 </property>
 <property>
    <name>dfs.namenode.rpc-address.oushu.nn1</name>
    <value>oushum2:9000</value>
 </property>
 <property>
    <name>dfs.namenode.rpc-address.oushu.nn2</name>
    <value>oushum1:9000</value>
 </property>
 <property>
    <name>dfs.namenode.http-address.oushu.nn1</name>
    <value>oushum2:50070</value>
 </property>
 <property>
    <name>dfs.namenode.http-address.oushu.nn2</name>
    <value>oushum1:50070</value>
 </property>
 ...
 <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/lib/hadoop-hdfs/dn_socket</value>
    <description>Optional. This is a path to a UNIX domain socket that will be used for communication between the DataNode and local HDFS clients. If the string "_PORT" is present in this path, it will be replaced by the TCP port of the DataNode.</description>
 </property>
 ...
</configuration>
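The directory containing dfs.domain.socket.path must already exist on every segment node (it is normally created by the HDFS datanode packages); a quick check (a sketch):

hawq ssh -f shostfile -e 'ls -ld /var/lib/hadoop-hdfs'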

14. On oushum1, modify /usr/local/hawq/etc/hawq-site.xml. Note: the "oushu" in hawq_dfs_url is the value of dfs.nameservices, which is configured in hdfs-site.xml; the value of magma_nodes_url is best taken from the first two lines of the /usr/local/hawq/etc/slaves file:

<configuration>
 <property>
    <name>hawq_master_address_host</name>
    <value>oushum1</value>
 </property>
 ...
 <property>
    <name>hawq_standby_address_host</name>
    <value>oushum2</value>
    <description>The host name of hawq standby master.</description>
 </property>
 ...
 <property>
    <name>hawq_dfs_url</name>
    <value>oushu/hawq/default_filespace</value>
    <description>URL for accessing HDFS.</description>
 </property>
 <property>
    <name>magma_nodes_url</name>
    <value>oushus1:6666,oushus2:6666</value>
    <description>urls for accessing magma.</description>
 </property>
 <property>
    <name>hawq_master_directory</name>
    <value>/data1/hawq/masterdd</value>
    <description>The directory of hawq master.</description>
 </property>
 <property>
    <name>hawq_segment_directory</name>
    <value>/data1/hawq/segmentdd</value>
    <description>The directory of hawq segment.</description>
 </property>
 <property>
    <name>hawq_master_temp_directory</name>
    <value>/data1/hawq/tmp,/data2/hawq/tmp</value>
    <description>The temporary directory reserved for hawq master. NOTE: please DO NOT add " " between directories. </description>
 </property>
 <property>
    <name>hawq_segment_temp_directory</name>
    <value>/data1/hawq/tmp,/data2/hawq/tmp</value>
    <description>The temporary directory reserved for hawq segment. NOTE: please DO NOT add " " between directories. </description>
 </property>
 <property>
    <name>default_table_format</name>
    <value>appendonly</value>
    <description>Sets the default tableformat when creating table</description>
 </property>
 <property>
    <name>hawq_init_with_hdfs</name>
    <value>true</value>
    <description>Choose whether init cluster with hdfs</description>
 </property>
 ...
 <property>
    <name>hawq_rm_yarn_address</name>
    <value>oushum1:8032</value>
    <description>The address of YARN resource manager server.</description>
 </property>
 <property>
    <name>hawq_rm_yarn_scheduler_address</name>
    <value>oushum1:8030</value>
    <description>The address of YARN scheduler server.</description>
 </property>
 ...
 <property>
    <name>hawq_rm_yarn_app_name</name>
    <value>hawq</value>
    <description>The application name to register hawq resource manager in YARN.</description>
 </property>
 ...
 <property>
    <name>hawq_re_cgroup_hierarchy_name</name>
    <value>hawq</value>
    <description>The name of the hierarchy to accommodate CGroup directories/files for resource enforcement. For example, /sys/fs/cgroup/cpu/hawq for the CPU sub-system.</description>
 </property>
 ...
</configuration>

15. OushuDB 4.0 adds separate configuration and start/stop control for Magma. When using the magma service, on oushum1 use "hawq ssh" to create the node data directories on all slave nodes and grant gpadmin ownership:

hawq ssh -f shostfile -e 'mkdir -p /data1/hawq/magma_segmentdd'
hawq ssh -f shostfile -e 'mkdir -p /data2/hawq/magma_segmentdd'

hawq ssh -f shostfile -e 'chown -R gpadmin:gpadmin /data1/hawq'
hawq ssh -f shostfile -e 'chown -R gpadmin:gpadmin /data2/hawq'

16. Then edit the configuration file /usr/local/hawq/etc/magma-site.xml:

<property>
    <name>nodes_file</name>
    <value>slaves</value>
    <description>The magma nodes file name at GPHOME/etc</description>
</property>
<property>
    <name>node_data_directory</name>
    <value>file:///data1/hawq/magma_segmentdd,file:///data2/hawq/magma_segmentdd</value>
    <description>The data directory for magma node</description>
</property>
<property>
    <name>node_log_directory</name>
    <value>~/hawq-data-directory/segmentdd/pg_log</value>
    <description>The log directory for magma node</description>
</property>
<property>
    <name>node_address_port</name>
    <value>6666</value>
    <description>The port magma node listening</description>
</property>
<property>
    <name>magma_range_number</name>
    <value>2</value>
</property>
<property>
    <name>magma_replica_number</name>
    <value>3</value>
</property>
<property>
    <name>magma_datadir_capacity</name>
    <value>3</value>
</property>

17. Copy the configuration files under /usr/local/hawq/etc on oushum1 to all nodes:

source /usr/local/hawq/greenplum_path.sh

hawq scp -r -f hostfile /usr/local/hawq/etc =:/usr/local/hawq
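As an optional consistency check (a sketch) that the key configuration files are now identical on all nodes:

hawq ssh -f hostfile -e 'md5sum /usr/local/hawq/etc/hawq-site.xml /usr/local/hawq/etc/hdfs-client.xml /usr/local/hawq/etc/magma-site.xml'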

18. On oushum1, switch to the gpadmin user and create hhostfile:

su - gpadmin
source /usr/local/hawq/greenplum_path.sh  # set up the hawq environment variables
touch hhostfile

hhostfile records the hostnames of all OushuDB nodes, as follows:

oushum1
oushum2
oushus1
oushus2

19. Log in to each machine as the root user and change the gpadmin user's password:

sudo echo 'password' | sudo passwd  --stdin gpadmin
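Alternatively, if passwordless root ssh between the nodes is already in place (as the earlier hawq scp / hawq ssh steps assume), the same change can be pushed to every node from oushum1 in one command (a sketch; 'password' is a placeholder to replace with your own):

hawq ssh -f hostfile -e "echo 'password' | passwd --stdin gpadmin"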

20. Exchange keys for the gpadmin user, entering the gpadmin password for each node when prompted:

su - gpadmin
source /usr/local/hawq/greenplum_path.sh  # set up the hawq environment variables
hawq ssh-exkeys -f hhostfile
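To confirm the key exchange worked, a simple check (a sketch) is to run a command on every node and verify that no password prompt appears:

hawq ssh -f hhostfile -e 'hostname'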

21. On oushum1, as the gpadmin user, initialize the OushuDB cluster. When prompted "Continue with HAWQ init", enter Y:

hawq init cluster   // OushuDB 4.0 does not start the magma service by default
hawq init cluster --with_magma   // New in OushuDB 4.0; not supported by 3.X versions

// The --with_magma option is new in OushuDB 4.0; only the hawq init|start|stop cluster commands accept it.

Note:

# When initializing the OushuDB cluster, make sure that masterdd and segmentdd under the /data*/hawq/ directories you created are empty, and that /hawq/default_filespace created on hadoop is also empty.
# If hawq init cluster fails, run the command below first to stop the hawq cluster, clear the directories, find the root cause, and then re-initialize.

hawq stop cluster

# On the OushuDB master node, based on this installation's configuration, use the commands below to empty all hawq data directories (the directories themselves are kept, only their contents are removed):

hawq ssh -f hhostfile -e 'rm -fr /data1/hawq/masterdd/*'
hawq ssh -f hhostfile -e 'rm -fr /data1/hawq/segmentdd/*'
hawq ssh -f hhostfile -e 'rm -fr /data1/hawq/magma_masterdd/*'
hawq ssh -f hhostfile -e 'rm -fr /data1/hawq/magma_segmentdd/*'
hawq ssh -f hhostfile -e 'rm -fr /data2/hawq/magma_segmentdd/*'

# On the HDFS namenode, use the command below to clear /hawq/default_filespace. If /hawq/default_filespace contains user data, back it up first to avoid losing it:

hdfs dfs -rm -f -r /hawq/default_filespace/*
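After clearing, you can optionally confirm that the local directories and the HDFS filespace list no files before re-running hawq init (a sketch):

hawq ssh -f hhostfile -e 'ls -A /data1/hawq/masterdd /data1/hawq/segmentdd 2>/dev/null'
hdfs dfs -ls /hawq/default_filespace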

You should also check that the HDFS parameters are configured correctly, preferably as the gpadmin user. If the parameters are wrong, HDFS may sometimes start normally but will run into errors under heavy load.

su - gpadmin

source /usr/local/hawq/greenplum_path.sh

hawq check -f hostfile --hadoop /usr/hdp/current/hadoop-client/ --hdfs-ha

22. Check that OushuDB is running properly:

su - gpadmin
source /usr/local/hawq/greenplum_path.sh
psql -d postgres
select * from gp_segment_configuration;  # confirm all nodes are in the 'up' state

create table t(i int);
insert into t select generate_series(1,1000);
select count(*) from t;