Trafodion Server --- Server-Side Installation

If you have any questions, feel free to contact me. QQ: 327398329


Preparation:

1. We are installing Trafodion 2.0.1; if you use the CDH platform, you must use CDH 5.4. (Installing CDH 5.4 is covered in the previous post.)

2. Sudo privileges for the users involved. Configure this in the /etc/sudoers file.
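Below is a minimal sketch of the kind of /etc/sudoers entry involved; the user name trafodion and the requiretty line are my assumptions, not taken from the installer documentation. Always edit with visudo rather than editing the file directly:

    # Hypothetical entry: grant the trafodion user passwordless sudo on all hosts
    trafodion ALL=(ALL) NOPASSWD: ALL
    # If "Defaults requiretty" is set, non-interactive sudo over ssh can fail;
    # relaxing it for this user is a common workaround (verify for your distro):
    Defaults:trafodion !requiretty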


Trafodion setup.


1. Download the Trafodion server and installer packages from http://trafodion.incubator.apache.org/download.html.


2. Place both files in a directory under the root user's home directory: /root/trafodion-installer


3. Extract the installer: tar xvfz installer.tar.gz


4. Change into the extracted directory, i.e. the installer directory.


5. trafodion_install is the installation script; do not run it just yet. (If your Linux system can reach the Internet, you can in fact just run ./trafodion_install directly.)


If you have no Internet access, then before running ./trafodion_install, look at the traf_package_setup script first. It installs a number of packages, which it needs to download from the network and then install via rpm or yum.

First, on each node run the command: rpm -qa | grep package_name | wc -l, where package_name is each of the package names listed below; check every one of them to see whether it is already installed. Any result other than 0 means the package is installed. (rpm -qa lists all installed packages, grep filters the output by package name, and wc -l counts how many lines matched; 0 means not installed. A small script that checks the whole list in one go is sketched after the note below.)

Here is the package list (if you cannot get online, download the rpm packages elsewhere first, then install each one with rpm -ivh <package-file>):

1. epel   2. pdsh   3. apr   4. apr-util   5. sqlite   6. expect   7. perl-DBD-SQLite*   8. protobuf

9. xerces-c   10. perl-Params-Validate   11. perl-Time-HiRes   12. gzip   13. lzo   14. lzop   15. unzip

All of the above packages must be installed on every node of the cluster.


Note: if any of these packages is missing, the later Trafodion installation will fail with all sorts of errors. Make sure every one of them is installed.
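For convenience, here is a minimal sketch that loops the same rpm -qa check over the whole list; the names below are taken from the list above and may need adjusting to your distribution's exact rpm names (e.g. epel is typically packaged as epel-release):

    #!/bin/bash
    # Check on this node which of the required packages are already installed.
    # grep does substring matching, so "epel" also matches "epel-release", etc.
    for pkg in epel pdsh apr apr-util sqlite expect perl-DBD-SQLite protobuf \
               xerces-c perl-Params-Validate perl-Time-HiRes gzip lzo lzop unzip; do
        if [ "$(rpm -qa | grep "$pkg" | wc -l)" -eq 0 ]; then
            echo "MISSING:   $pkg"
        else
            echo "installed: $pkg"
        fi
    done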


6. The traf_getHadoopNodes script obtains the list of Hadoop nodes. If you are installing on CDH, the HADOOP_PATH and HADOOP_BIN_PATH variables below must be set to the actual Hadoop installation location.

(I point this out because the first time I opened the script, the value of HADOOP_PATH was not correct for my cluster, so it is worth checking that it is right for yours.)

    if [ -d /opt/cloudera/parcels/CDH ]; then
        export HADOOP_PATH="/opt/cloudera/parcels/CDH/lib/hadoop"
        export HADOOP_BIN_PATH="/opt/cloudera/parcels/CDH/bin"
    fi

7. Once all of the above is done, move on to the installation itself: run ./trafodion_install.


The output below is from my own run of the command; use it as a reference.

(If the installation succeeds, you can switch to the trafodion user and run sqlci, which enters an interactive command mode much like the mysql command line.)
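As a minimal smoke test, assuming the install succeeded, a first sqlci session might look like this (get schemas is a standard sqlci statement; which schemas you see will depend on your instance):

    su - trafodion
    sqlci
    >> get schemas;
    >> exit;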


[root@hadoop installer]# ./trafodion_install 


******************************
 TRAFODION INSTALLATION START
******************************


***INFO: testing sudo access
***INFO: Log file located at /var/log/trafodion/trafodion_install_2016-09-23-20-01-26.log
***INFO: Config directory: /etc/trafodion
***INFO: Working directory: /usr/lib/trafodion


************************************
 Trafodion Configuration File Setup
************************************


***INFO: Please press [Enter] to select defaults.


Is this a cloud environment (Y/N), default is [N]: 
Enter trafodion password, default is [traf123]: 
Enter list of data nodes (blank separated), default [ hadoop.master hadoop.slave1 hadoop.slave2 hadoop.slave3]: 
Do you have a set of management nodes (Y/N), default is N: 
Specify location of Java 1.7.0_65 or higher (JDK), default is [/usr/java/jdk1.7.0_79]: 
Enter full path (including .tar or .tar.gz) of trafodion tar file [/root/trafodion-instarller/apache-trafodion_server-2.0.1-incubating.tar.gz]: 
Enter Backup/Restore username (can be Trafodion), default is [trafodion]: 
Specify the Hadoop distribution installed (1: Cloudera, 2: Hortonworks, 3: Other): 1
Enter Hadoop admin username, default is [admin]: 
Enter Hadoop admin password, default is [admin]: 
Enter full Hadoop external network URL:port (include 'http://' or 'https://), default is [http://192.168.226.17:7180]: 
Enter HDFS username or username running HDFS, default is [hdfs]: 
Enter HBase username or username running HBase, default is [hbase]: 
Enter HBase group, default is [hbase]: 
Enter Zookeeper username or username running Zookeeper, default is [zookeeper]: 
Enter directory to install trafodion to, default is [/home/trafodion/apache-trafodion_server-2.0.1-incubating]: 
Start Trafodion after install (Y/N), default is Y: 
Total number of client connections per cluster, default [32]: 
Enter the node of primary DcsMaster, default [hadoop.master]: 
Enable High Availability (Y/N), default is N: 
Enable simple LDAP security (Y/N), default is N: 
***INFO: Trafodion configuration setup complete
***INFO: Trafodion Configuration File Check


***INFO: Testing sudo access on node hadoop.master
***INFO: Testing sudo access on node hadoop.slave1
***INFO: Testing sudo access on node hadoop.slave2
***INFO: Testing sudo access on node hadoop.slave3
***INFO: Testing ssh on hadoop.master
***INFO: Testing ssh on hadoop.slave1
***INFO: Testing ssh on hadoop.slave2
***INFO: Testing ssh on hadoop.slave3
#!/bin/bash
#
# @@@ START COPYRIGHT @@@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
#
# @@@ END COPYRIGHT @@@
#


# Install feature file ($MY_SQROOT/conf/install_features)
#
# This file allows specific Trafodion core builds to signal to
# the installer the presence of a new feature that requires
# configuration work during install time.
#
# This file allows a single installer to install many different
# versions of Trafodion core as opposed to having many versions
# of the installer.  This allows the installer to get additional
# features in ahead of time before the Trafodion core code 
# is available.
#
# The installer will source this file and perform additional
# configuration work based upon the mutually agreed settings
# of the various environment variables in this file.  
#
# It must be coordinated between the Trafodion core feature developer
# and installer developers as to the specifics (i.e. name & value)
# of the environment variable used.
#
# ===========================================================
# Example:
# A new feature requires installer to modify HBase settings in a 
# different way that are not compatible with previous versions of
# Trafodion core. The following is added to this file:
#
#         # support for setting blah-blah in HBase
#         export NEW_HBASE_FEATURE="1"
#
# Logic is added to the installer to test for this env var and if
# there then do the new HBase settings and if not, set the settings
# to whatever they were previously.
# ===========================================================
#


# Trafodion core only works with CDH 5.4 [and HDP 2.3 not yet]
# This env var will signal that to the installer which will
# verify the hadoop distro versions are correct as well as 
# perform some additional support for this.
export CDH_5_3_HDP_2_2_SUPPORT="N"
export HDP_2_3_SUPPORT="Y"
export CDH_5_4_SUPPORT="Y"
export APACHE_1_0_X_SUPPORT="Y"
***INFO: Getting list of all cloudera nodes
***INFO: HADOOP_PATH=/opt/cloudera/parcels/CDH/lib/hadoop
***INFO: HADOOP_BIN_PATH=/opt/cloudera/parcels/CDH/bin
***INFO: cloudera list of nodes:  hadoop.master hadoop.slave1 hadoop.slave2 hadoop.slave3
***INFO: cloudera list of HDFS nodes:  hadoop.master hadoop.slave1 hadoop.slave2 hadoop.slave3
***INFO: cloudera list of HBASE nodes:  hadoop.master hadoop.slave1 hadoop.slave2 hadoop.slave3
***INFO: Testing ssh on hadoop.master
***INFO: Testing ssh on hadoop.slave1
***INFO: Testing ssh on hadoop.slave2
***INFO: Testing ssh on hadoop.slave3
***INFO: Testing sudo access on hadoop.master
***INFO: Testing sudo access on hadoop.slave1
***INFO: Testing sudo access on hadoop.slave2
***INFO: Testing sudo access on hadoop.slave3
***INFO: Checking cloudera Version
***INFO: nameOfVersion=cdh5.4.3


******************************
 TRAFODION SETUP
******************************


***INFO: Installing required RPM packages
***INFO: Starting Trafodion Package Setup (2016-09-23-20-02-52)
***INFO: Installing required packages
***INFO: Log file located in /var/log/trafodion
***INFO: ... pdsh on node hadoop.master
***INFO: ... pdsh on node hadoop.slave1
***INFO: ... pdsh on node hadoop.slave2
***INFO: ... pdsh on node hadoop.slave3
***INFO: Checking if apr is installed ...
***INFO: Checking if apr-util is installed ...
***INFO: Checking if sqlite is installed ...
***INFO: Checking if expect is installed ...
***INFO: Checking if perl-DBD-SQLite* is installed ...
***INFO: Checking if protobuf is installed ...
***INFO: Checking if xerces-c is installed ...
***INFO: Checking if perl-Params-Validate is installed ...
***INFO: Checking if perl-Time-HiRes is installed ...
***INFO: Checking if gzip is installed ...
***INFO: Checking if lzo is installed ...
***INFO: Checking if lzop is installed ...
***INFO: Checking if unzip is installed ...
***INFO: creating sqconfig file
***INFO: Reserving DCS ports
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key


***INFO: Creating trafodion sudo access file




******************************
 TRAFODION MODS
******************************


***INFO: Cloudera installed will run traf_cloudera_mods
***INFO: copying hbase-trx-cdh5_4-*.jar to all nodes
***INFO: hbase-trx-cdh5_4-*.jar copied correctly! Huzzah.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
152  1187    0  1187    0   487  14170   5813 --:--:-- --:--:-- --:--:--  8433
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3521    0  1887  102  1634  42154  36502 --:--:-- --:--:-- --:--:-- 42886
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
199   227    0   227    0   171   3058   2304 --:--:-- --:--:-- --:--:--   767
***INFO: restarting Hadoop to pickup Trafodion transaction jar
***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
101   101    0   101    0     0   1259      0 --:--:-- --:--:-- --:--:--  1278
{ "id" : 190, "name" : "Restart", "startTime" : "2016-09-23T12:04:48.530Z", "active" : true }
***DEBUG: Cloudera command_id=190
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
130   260    0   260    0     0   8670      0 --:--:-- --:--:-- --:--:--  8965
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "active" : true,
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "active" : true
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
102   514    0   514    0     0  24445      0 --:--:-- --:--:-- --:--:-- 25700
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "active" : true,
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "endTime" : "2016-09-23T12:05:25.375Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully stopped."
    }, {
      "id" : 221,
      "name" : "Start",
      "startTime" : "2016-09-23T12:05:25.394Z",
      "active" : true
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
102   514    0   514    0     0  24200      0 --:--:-- --:--:-- --:--:-- 25700
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "active" : true,
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "endTime" : "2016-09-23T12:05:25.375Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully stopped."
    }, {
      "id" : 221,
      "name" : "Start",
      "startTime" : "2016-09-23T12:05:25.394Z",
      "active" : true
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
102   514    0   514    0     0  35193      0 --:--:-- --:--:-- --:--:-- 36714
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "active" : true,
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "endTime" : "2016-09-23T12:05:25.375Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully stopped."
    }, {
      "id" : 221,
      "name" : "Start",
      "startTime" : "2016-09-23T12:05:25.394Z",
      "active" : true
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
102   514    0   514    0     0  24778      0 --:--:-- --:--:-- --:--:-- 25700
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "active" : true,
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "endTime" : "2016-09-23T12:05:25.375Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully stopped."
    }, {
      "id" : 221,
      "name" : "Start",
      "startTime" : "2016-09-23T12:05:25.394Z",
      "active" : true
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
110   770    0   770    0     0  17686      0 --:--:-- --:--:-- --:--:-- 17906
{
  "id" : 190,
  "name" : "Restart",
  "startTime" : "2016-09-23T12:04:48.530Z",
  "endTime" : "2016-09-23T12:07:19.484Z",
  "active" : false,
  "success" : true,
  "resultMessage" : "All services successfully restarted.",
  "children" : {
    "items" : [ {
      "id" : 191,
      "name" : "Stop",
      "startTime" : "2016-09-23T12:04:48.602Z",
      "endTime" : "2016-09-23T12:05:25.375Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully stopped."
    }, {
      "id" : 221,
      "name" : "Start",
      "startTime" : "2016-09-23T12:05:25.394Z",
      "endTime" : "2016-09-23T12:07:19.484Z",
      "active" : false,
      "success" : true,
      "resultMessage" : "All services successfully started."
    } ]
  }
}***INFO: ...polling every 30 seconds until restart is completed.
***INFO: Hadoop restart completed successfully
***INFO: waiting for HDFS to exit safemode
Safe mode is OFF
***INFO: Setting HDFS ACLs for snapshot scan support
***INFO: Trafodion Mods ran successfully.


******************************
 TRAFODION CONFIGURATION
******************************


/usr/lib/trafodion/installer/..
/home/trafodion/apache-trafodion_server-2.0.1-incubating
***INFO: untarring file  to /home/trafodion/apache-trafodion_server-2.0.1-incubating
***INFO: modifying .bashrc to set Trafodion environment variables
***INFO: copying .bashrc file to all nodes
***INFO: copying sqconfig file (/home/trafodion/sqconfig) to /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts/sqconfig
***INFO: Creating /home/trafodion/apache-trafodion_server-2.0.1-incubating directory on all nodes
***INFO: Start of DCS install
***INFO: DCS Install Directory: /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1
***INFO: modifying /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/dcs-env.sh
***INFO: modifying /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/dcs-site.xml
***INFO: creating /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/conf/servers file
***INFO: End of DCS install.
***INFO: Start of REST Server install
***INFO: Rest Install Directory: /home/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1
***INFO: modifying /home/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1/conf/rest-site.xml
***INFO: End of REST Server install.
***INFO: starting sqgen
hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3


Creating directories on cluster nodes
/usr/bin/pdsh -R exec -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master ssh -q -n %h mkdir -p /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc 
/usr/bin/pdsh -R exec -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master ssh -q -n %h mkdir -p /home/trafodion/apache-trafodion_server-2.0.1-incubating/logs 
/usr/bin/pdsh -R exec -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master ssh -q -n %h mkdir -p /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp 
/usr/bin/pdsh -R exec -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master ssh -q -n %h mkdir -p /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts 


Generating SQ environment variable file: /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/ms.env


Note: Using cluster.conf format type 2.


Generating SeaMonster environment variable file: /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/seamonster.env




Generated SQ startup script file: ./gomon.cold
Generated SQ startup script file: ./gomon.warm
Generated SQ cluster config file: /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp/cluster.conf
Generated SQ Shell          file: sqshell
Generated RMS Startup       file: rmsstart
Generated RMS Stop          file: rmsstop
Generated RMS Check         file: rmscheck.sql
Generated SSMP Startup      file: ssmpstart
Generated SSMP Stop         file: ssmpstop
Generated SSCP Startup      file: sscpstart
Generated SSCP Stop         file: sscpstop




Copying the generated files to all the nodes in the cluster


Copying /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp/cluster.conf to /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp of all the nodes
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp/cluster.conf /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp


Copying /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/ms.env to /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc of all the nodes
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/ms.env   /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc 
Copying /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/traf_coprocessor.properties to /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc of all the nodes
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/traf_coprocessor.properties   /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc 


Copying /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/seamonster.env to /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc of all the nodes
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc/seamonster.env   /home/trafodion/apache-trafodion_server-2.0.1-incubating/etc 


Copying rest of the generated files to /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master sqconfig sqshell gomon.cold gomon.warm rmsstart rmsstop rmscheck.sql ssmpstart ssmpstop sscpstart sscpstop /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts
/usr/bin/pdcp -R ssh -w hadoop.master,hadoop.slave1,hadoop.slave2,hadoop.slave3 -x hadoop.master sqconfig sqconfig.db /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts 




******* Generate public/private certificates *******


 Cluster Name : hadoop
Generating Self Signed Certificate....
***********************************************************
 Certificate file :server.crt
 Private key file :server.key
 Certificate/Private key created in directory :/home/trafodion/sqcert
***********************************************************


***********************************************************
 Updating Authentication Configuration
***********************************************************
Creating folders for storing certificates


***INFO: copying /home/trafodion/sqcert directory to all nodes
***INFO: copying install to all nodes
***INFO: starting Trafodion instance
Checking orphan processes.
Removing old mpijob* files from /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp


Removing old monitor.port* files from /home/trafodion/apache-trafodion_server-2.0.1-incubating/tmp


Executing sqipcrm (output to sqipcrm.out)
Starting the SQ Environment (Executing /home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts/gomon.cold)
Background SQ Startup job (pid: 48930)


# of SQ processes: 23 .
SQ Startup script (/home/trafodion/apache-trafodion_server-2.0.1-incubating/sql/scripts/gomon.cold) ran successfully. Performing further checks...
Checking if processes are up.
Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.


The SQ environment is up!




Process     Configured  Actual  Down
---------   ----------  ------  ----
DTM         4           4
RMS         8           8
DcsMaster   1           0       1
DcsServer   4           0       4
mxosrvr     32          0       32


Fri Sep 23 20:10:07 CST 2016
Checking if processes are up.
Checking attempt: 1; user specified max: 1. Execution time in seconds: 0.


The SQ environment is up!




Process     Configured  Actual  Down
---------   ----------  ------  ----
DTM         4           4
RMS         8           8
DcsMaster   1           0       1
DcsServer   4           0       4
mxosrvr     32          0       32


Starting the DCS environment now
starting master, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-1-master-hadoop.master.out
hadoop.master: starting server, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-1-server-hadoop.master.out
hadoop.slave3: starting server, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-4-server-hadoop.slave3.out
hadoop.slave1: starting server, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-2-server-hadoop.slave1.out
hadoop.slave2: starting server, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/dcs-2.0.1/bin/../logs/dcs-trafodion-3-server-hadoop.slave2.out
Checking if processes are up.
Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.


The SQ environment is up!




Process     Configured  Actual  Down
---------   ----------  ------  ----
DTM         4           4
RMS         8           8
DcsMaster   1           1
DcsServer   4           4
mxosrvr     32          0       32


Checking if processes are up.
Checking attempt: 1; user specified max: 1. Execution time in seconds: 0.


The SQ environment is up!




Process     Configured  Actual  Down
---------   ----------  ------  ----
DTM         4           4
RMS         8           8
DcsMaster   1           1
DcsServer   4           4
mxosrvr     32          0       32


Starting the REST environment now
starting rest, logging to /home/trafodion/apache-trafodion_server-2.0.1-incubating/rest-2.0.1/bin/../logs/rest-trafodion-1-rest-hadoop.master.out






Zookeeper listen port: 2181
DcsMaster listen port: 23400


Configured Primary DcsMaster: "hadoop.master"
Active DcsMaster            : "hadoop.master"


Process     Configured  Actual  Down
---------   ----------  ------  ----
DcsMaster   1           1
DcsServer   4           4
mxosrvr     32          27      5




You can monitor the SQ shell log file : /home/trafodion/apache-trafodion_server-2.0.1-incubating/logs/sqmon.log




Startup time  0 hour(s) 2 minute(s) 29 second(s)
Apache Trafodion Conversational Interface 2.0.1
Copyright (c) 2015-2016 Apache Software Foundation
>>Metadata Upgrade: started


Version Check: started


Metadata Upgrade: done




*** ERROR[1393] Trafodion is not initialized on this system. Do 'initialize trafodion' to initialize it.


--- SQL operation failed with errors.
>>


End of MXCI Session


Apache Trafodion Conversational Interface 2.0.1
Copyright (c) 2015-2016 Apache Software Foundation
>>initialize trafodion


;


--- SQL operation complete.
>>


End of MXCI Session


***INFO: Installation setup completed successfully.


******************************
 TRAFODION INSTALLATION END
******************************


[root@hadoop installer]# su trafodion
[trafodion@hadoop ~]$ sqcheck
Checking if processes are up.
Checking attempt: 1; user specified max: 2. Execution time in seconds: 0.


The SQ environment is up!




Process     Configured  Actual  Down
---------   ----------  ------  ----
DTM         4           4
RMS         8           8
DcsMaster   1           1
DcsServer   4           4
mxosrvr     32          32
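One thing worth noting in the log above: the ERROR[1393] ("Trafodion is not initialized on this system") during the metadata-upgrade step is expected on a fresh install. As the log shows, the installer follows it with an initialize trafodion statement, which completes successfully.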





Summary: after a machine reboot, the services are started as follows:

1. Start CDH: make sure the Cloudera Manager server and agents are started, and start MySQL.

2. Start the Trafodion server side.

The commands are as follows:

(1) Start the Cloudera Manager server, the agents, and MySQL:

    /opt/cm-5.4.3/etc/init.d/cloudera-scm-server start
    /opt/cm-5.4.3/etc/init.d/cloudera-scm-agent start
    service mysql start

(2) Switch to the trafodion user and start Trafodion:

    su trafodion
    cds
    sqstart
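After sqstart finishes, you can run sqcheck as the trafodion user (as shown earlier) to confirm that DTM, RMS, DcsMaster, DcsServer, and mxosrvr all report their configured counts.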

