Greenplum Learning 10 -- Adding Nodes with gpexpand ① (Adding One Segment per Host) (Generating the Node Configuration File)

1. First, write the target database into the configuration file

    [gpadmin@master ~]$ cat .bashrc
    # .bashrc
    source /usr/local/greenplum-db/greenplum_path.sh
    export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
    export PGDATABASE=xx          # set the default database, e.g. export PGDATABASE=xx; the database name must be lowercase
                                  # (in psql, \l lists the existing databases)
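
After editing .bashrc, reload it and confirm the variable took effect; a minimal check (psql with no arguments connects to $PGDATABASE by default):

    [gpadmin@master ~]$ source ~/.bashrc
    [gpadmin@master ~]$ echo $PGDATABASE
    xx
    [gpadmin@master ~]$ psql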

2. Create a new file listing the hosts where nodes will be added

    [gpadmin@master ~]$ cat /gp/new_hosts
    segment1
    segment2

3. Run the expansion command gpexpand -f /gp/new_hosts

    (If the database was not written into the configuration file, pass it on the command line with -D, e.g. gpexpand -f /gp/new_hosts -D xx.)
    [gpadmin@master ~]$ gpexpand -f /gp/new_hosts
    Would you like to initiate a new System Expansion Yy|Nn (default=N):
    > y
    What type of mirroring strategy would you like?
     spread|grouped (default=grouped):
    > spread
    How many new primary segments per host do you want to add? (default=0):
    > 1
    Enter new primary data directory 1:      # data directory for the new primary segment
    > /data/primary
    Enter new mirror data directory 1:       # data directory for the new mirror segment
    > /data/mirror
    Input configuration files were written to 'gpexpand_inputfile_20160727_105950' and 'None'.

4. Review the distribution information for the new segments

    [gpadmin@master ~]$ cat gpexpand_inputfile_20160727_105950
    segment1:segment1:40001:/data/primary/gpseg2:7:2:p:41001
    segment2:segment2:50001:/data/mirror/gpseg2:10:2:m:51001
    segment2:segment2:40001:/data/primary/gpseg3:8:3:p:41001
    segment1:segment1:50001:/data/mirror/gpseg3:9:3:m:51001
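
For reference, each line of the input file appears to be colon-separated as follows (field names here are descriptive; they match what gp_segment_configuration shows in step 7):

    # hostname:address:port:datadir:dbid:content:preferred_role:replication_port
    # e.g. segment1:segment1:40001:/data/primary/gpseg2:7:2:p:41001
    #      -> host segment1, primary on port 40001, data dir /data/primary/gpseg2,
    #         dbid 7, content ID 2, role p, replication port 41001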

5. Run gpexpand -i gpexpand_inputfile_20160727_105950

    (If the database was not written into the configuration file, pass it with -D, e.g. gpexpand -i gpexpand_inputfile_20160727_105950 -D xx.)
    [gpadmin@master ~]$ gpexpand -i gpexpand_inputfile_20160727_105950
    20160727:11:05:23:003960 gpexpand:master:gpadmin-[INFO]:-rerun gpexpand
    20160727:11:05:23:003960 gpexpand:master:gpadmin-[INFO]:-*************
    20160727:11:05:23:003960 gpexpand:master:gpadmin-[INFO]:-Exiting...

6. Start the table redistribution (-d 60:00:00 caps the redistribution run at 60 hours)

    [gpadmin@master ~]$ gpexpand -d 60:00:00
    [gpadmin@master ~]$ psql
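
While the redistribution runs, progress can be watched from psql through the objects gpexpand creates in its own schema; a minimal sketch (object names as documented for the gpexpand schema):

    xx=# select * from gpexpand.expansion_progress;
    xx=# select status, count(*) from gpexpand.status_detail group by 1;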

7. Check the current segment configuration (run the query below and make sure every segment is up)

    xx=# SELECT * from gp_segment_configuration;
     dbid | content | role | preferred_role | mode | status | port  | hostname | address  | replication_port | san_mounts
    ------+---------+------+----------------+------+--------+-------+----------+----------+------------------+------------
        1 |      -1 | p    | p              | s    | u      |  5432 | master   | master   |                  |
        2 |       0 | p    | p              | s    | u      | 40000 | segment1 | segment1 |            41000 |
        3 |       1 | p    | p              | s    | u      | 40000 | segment2 | segment2 |            41000 |
        4 |       0 | m    | m              | s    | u      | 50000 | segment2 | segment2 |            51000 |
        5 |       1 | m    | m              | s    | u      | 50000 | segment1 | segment1 |            51000 |
        6 |      -1 | m    | m              | s    | u      |  5432 | standby  | standby  |                  |
        7 |       2 | p    | p              | s    | u      | 40001 | segment1 | segment1 |            41001 |
       10 |       2 | m    | m              | s    | u      | 50001 | segment2 | segment2 |            51001 |
        8 |       3 | p    | p              | s    | u      | 40001 | segment2 | segment2 |            41001 |
        9 |       3 | m    | m              | s    | u      | 50001 | segment1 | segment1 |            51001 |
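
A quick way to confirm every segment is up ('u' = up, 'd' = down):

    xx=# select status, count(*) from gp_segment_configuration group by 1;
     status | count
    --------+-------
     u      |    10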

8. Spot-check how a table's rows are distributed

    xx=# select gp_segment_id, count(*) from table_name group by 1;
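
If no suitable table is at hand, a throwaway table makes the check concrete (t_expand_check is a hypothetical name; roughly equal counts across gp_segment_id 0-3 show the new segments are receiving data):

    xx=# create table t_expand_check (id int) distributed by (id);
    xx=# insert into t_expand_check select generate_series(1, 100000);
    xx=# select gp_segment_id, count(*) from t_expand_check group by 1 order by 1;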

9. Running gpexpand -i creates an additional schema named gpexpand

    [gpadmin@master ~]$ gpexpand -c      # remove the gpexpand schema
    Do you want to dump the gpexpand.status_detail table to file? Yy|Nn (default=Y):
    > y
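
To verify the schema is gone afterwards, list the schemas in psql:

    xx=# \dn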

10. gpstop may occasionally fail with the following error:

    Unable to clean shared memory (can't start new thread)

Restarting the master and all segment hosts resolves it.
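
A minimal restart sequence for that case (-a skips the confirmation prompts):

    [gpadmin@master ~]$ gpstop -a
    [gpadmin@master ~]$ gpstart -a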


