Deploying a Bigtop 3.2.0 Cluster with Ambari

Bigtop cluster deployment guide

Deployment environment
Hostname   IP               OS
bigtop01 192.168.130.171 CentOS-7.9
bigtop02 192.168.130.172 CentOS-7.9
bigtop03 192.168.130.173 CentOS-7.9
bigtop04 192.168.130.174 CentOS-7.9
bigtop05 192.168.130.175 CentOS-7.9
bigtop06 192.168.130.176 CentOS-7.9

Base environment preparation (run these steps on every node, setting each node's own hostname)
[root@localhost ~]# hostnamectl set-hostname bigtop01
[root@localhost ~]# bash
[root@bigtop01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.130.171 bigtop01
192.168.130.172 bigtop02
192.168.130.173 bigtop03
192.168.130.174 bigtop04
192.168.130.175 bigtop05
192.168.130.176 bigtop06

localectl set-locale LANG=en_US.UTF-8

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

cp: overwrite '/etc/localtime'? y
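Alternatively, timedatectl sets the zone in one non-interactive step (an equivalent on CentOS 7, not part of the original transcript):

timedatectl set-timezone Asia/Shanghai
timedatectl status | grep "Time zone"   # verify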

[root@bigtop01 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2023-09-12 13:40:46 CST; 4h 47min ago
Docs: man:firewalld(1)
Main PID: 6399 (firewalld)
CGroup: /system.slice/firewalld.service
└─6399 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid

Sep 12 13:40:45 localhost.localdomain systemd[1]: Starting firewalld - dynamic firewall daemon…
Sep 12 13:40:46 localhost.localdomain systemd[1]: Started firewalld - dynamic firewall daemon.
[root@bigtop01 ~]# systemctl stop firewalld
[root@bigtop01 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@bigtop01 ~]# setenforce 0
[root@bigtop01 ~]# echo "SELINUX=disabled" > /etc/selinux/config
[root@bigtop01 ~]# echo "SELINUXTYPE=targeted" >> /etc/selinux/config
[root@bigtop01 ~]# echo "* hard nproc 65535" > /etc/security/limits.conf
[root@bigtop01 ~]# echo "* soft nproc 65535" >> /etc/security/limits.conf
[root@bigtop01 ~]# echo "* hard nofile 65535" >> /etc/security/limits.conf
[root@bigtop01 ~]# echo "* soft nofile 65535" >> /etc/security/limits.conf
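Note that limits.conf changes take effect only for new login sessions; a quick check after logging in again (a small verification sketch):

ulimit -n   # open files, expect 65535
ulimit -u   # max user processes, expect 65535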

[root@bigtop01 ~]# sysctl vm.swappiness=0
vm.swappiness = 0
[root@bigtop01 ~]# echo "vm.swappiness=0" >> /etc/sysctl.conf
[root@bigtop01 ~]# echo umask 0022 >> /etc/profile
[root@bigtop01 ~]# source /etc/profile

On bigtop01 (the server node), set up passwordless SSH to all nodes
[root@bigtop01 ~]# ssh-keygen -f ~/.ssh/id_rsa -t rsa -N ''
Generating public/private rsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:bW2qE2AD9bbK2SJvK8Po0KnBWvehP0vGBPx9JmKhhsk root@bigtop01
(RSA 2048 key randomart omitted)
[root@bigtop01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@bigtop01
[root@bigtop01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@bigtop02
[root@bigtop01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@bigtop03
[root@bigtop01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@bigtop04
[root@bigtop01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@bigtop05
[root@bigtop01 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@bigtop06
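With passwordless SSH in place, the hosts file and the tuning above can be pushed from bigtop01 to the remaining nodes in one loop (a hedged sketch using the hostnames defined in /etc/hosts; setenforce may report an error on hosts where SELinux is already disabled):

for h in bigtop02 bigtop03 bigtop04 bigtop05 bigtop06; do
  scp /etc/hosts /etc/sysctl.conf root@$h:/etc/
  scp /etc/selinux/config root@$h:/etc/selinux/config
  scp /etc/security/limits.conf root@$h:/etc/security/limits.conf
  ssh root@$h "hostnamectl set-hostname $h && sysctl -p && setenforce 0"
done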

Install on all nodes
yum -y groupinstall "Infrastructure Server" --setopt=group_package_types=mandatory,default,optional

yum -y groupinstall "Development Tools" --setopt=group_package_types=mandatory,default,optional

Time synchronization
bigtop01 acts as the time source and the remaining nodes synchronize against it; for a reference implementation see https://blog.csdn.net/qq_43417559/article/details/126431023. A minimal chrony layout is sketched below.
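The original defers to the linked article; a chrony layout consistent with it might look like this (an assumption, not taken from the source):

# on bigtop01 (the time source)
yum -y install chrony
echo "allow 192.168.130.0/24" >> /etc/chrony.conf
echo "local stratum 10" >> /etc/chrony.conf
systemctl enable --now chronyd

# on bigtop02..bigtop06 (clients)
yum -y install chrony
sed -i 's/^server /#server /' /etc/chrony.conf     # comment out the default pool servers
echo "server bigtop01 iburst" >> /etc/chrony.conf
systemctl enable --now chronyd
chronyc sources -v                                  # should show bigtop01 as the source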

Install the database on bigtop01
yum -y install mariadb-server mysql-connector-java

[root@bigtop01 ~]# systemctl start mariadb
[root@bigtop01 ~]# systemctl status mariadb
● mariadb.service - MariaDB database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2023-09-13 14:08:24 CST; 4s ago
Process: 12851 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
Process: 12767 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
Main PID: 12850 (mysqld_safe)
CGroup: /system.slice/mariadb.service
├─12850 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
└─13015 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/m…

Sep 13 14:08:19 bigtop01 mariadb-prepare-db-dir[12767]: MySQL manual for more instructions.
Sep 13 14:08:19 bigtop01 mariadb-prepare-db-dir[12767]: Please report any problems at http://mariadb.org/jira
Sep 13 14:08:19 bigtop01 mariadb-prepare-db-dir[12767]: The latest information about MariaDB is available at http://mariadb.org/.
Sep 13 14:08:19 bigtop01 mariadb-prepare-db-dir[12767]: You can find additional information about the MySQL part at:
Sep 13 14:08:19 bigtop01 mariadb-prepare-db-dir[12767]: http://dev.mysql.com
Sep 13 14:08:19 bigtop01 mariadb-prepare-db-dir[12767]: Consider joining MariaDB's strong and vibrant community:
Sep 13 14:08:19 bigtop01 mariadb-prepare-db-dir[12767]: https://mariadb.org/get-involved/
Sep 13 14:08:19 bigtop01 mysqld_safe[12850]: 230913 14:08:19 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
Sep 13 14:08:19 bigtop01 mysqld_safe[12850]: 230913 14:08:19 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Sep 13 14:08:24 bigtop01 systemd[1]: Started MariaDB database server.
[root@bigtop01 ~]# grep 'temporary password' /var/log/mysqld.log
grep: /var/log/mysqld.log: No such file or directory
[root@bigtop01 ~]# grep 'temporary password' /var/log/mariadb/mariadb.log
[root@bigtop01 ~]#

No match: unlike MySQL 5.7+, MariaDB does not generate a temporary root password, so set one directly and drop the anonymous accounts:

[root@bigtop01 ~]# mysql -uroot -e "set password = password('abcd1234');flush privileges;"
[root@bigtop01 ~]# mysql -uroot -pabcd1234 -e "use mysql;delete from user where User='';flush privileges;"

[root@bigtop01 ~]# mysql -uroot -pabcd1234 -e "create database ambari;create database hive;"
[root@bigtop01 ~]# mysql -uroot -pabcd1234 -e "use ambari; create user 'ambari'@'%' identified by 'ambari123'; GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%' IDENTIFIED BY 'ambari123'; FLUSH PRIVILEGES;"
[root@bigtop01 ~]# mysql -uroot -pabcd1234 -e "use hive; create user 'hive'@'%' identified by 'ambari123'; GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' IDENTIFIED BY 'ambari123'; FLUSH PRIVILEGES;"
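Before pointing Ambari and Hive at these accounts, it is worth confirming they work over the network (a verification step, not in the original):

[root@bigtop02 ~]# mysql -h bigtop01 -uambari -pambari123 -e "show databases;"
[root@bigtop02 ~]# mysql -h bigtop01 -uhive -pambari123 -e "show databases;"

A connection error here usually means the grant or the '%' host pattern is wrong.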

Setting up the yum repositories

  1. Keep the default CentOS yum repositories.
  2. Serve the Bigtop repository from an HTTP file server (a sketch follows).
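A minimal way to serve the Bigtop 3.2.0 RPMs over HTTP (a sketch under assumptions: the RPMs live under /var/www/html/bigdatarepo, matching the repository URL used later in the install wizard):

yum -y install httpd createrepo
mkdir -p /var/www/html/bigdatarepo
# copy the downloaded Bigtop 3.2.0 RPMs into /var/www/html/bigdatarepo, then:
createrepo /var/www/html/bigdatarepo
systemctl enable --now httpd

# on every node, a repo file such as:
cat > /etc/yum.repos.d/bigtop.repo <<'EOF'
[bigtop-3.2.0]
name=Bigtop 3.2.0
baseurl=http://192.168.103.14/bigdatarepo/
gpgcheck=0
enabled=1
EOF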

Deploy the ambari-server service on bigtop01
[root@bigtop01 yum.repos.d]# yum -y install ambari-server
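Optionally, the MySQL JDBC driver can be registered with Ambari up front instead of answering the jar prompt during interactive setup (a standard ambari-server option, shown here as an optional extra step):

[root@bigtop01 yum.repos.d]# ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
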
[root@bigtop01 yum.repos.d]# ambari-server setup
Using python /usr/bin/python
Setup ambari-server
Checking SELinux…
SELinux status is 'enabled'
SELinux mode is 'permissive'
WARNING: SELinux is set to 'permissive' mode and temporarily disabled.
OK to continue [y/n] (y)? y
Customize user account for ambari-server daemon [y/n] (n)? y
Enter user account for ambari-server daemon (root):root
Adjusting ambari-server permissions and ownership…
Checking firewall status…
Checking JDK…
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Custom JDK

Enter choice (1): 2
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If you plan to use Kerberos, please make sure JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
Path to JAVA_HOME: /etc/alternatives/java_sdk_1.8.0
Validating JDK on Ambari Server…done.
Check JDK version for Ambari Server…
JDK version found: 8
Minimum JDK version is 8 for Ambari. Skipping to setup different JDK for Ambari Server.
Checking GPL software agreement…
GPL License for LZO: https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
Enable Ambari Server to download and install GPL Licensed LZO packages [y/n] (n)? y
Completing setup…
Configuring database…
Enter advanced database configuration [y/n] (n)? y
Configuring database…

Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL / MariaDB
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
[7] - BDB

Enter choice (1): 3
Hostname (localhost): 192.168.130.171
Port (3306): 3306
Database name (ambari):
Username (ambari):
Enter Database Password (bigdata):
Re-enter password:
Configuring ambari database…
Should ambari use existing default jdbc /usr/share/java/mysql-connector-java.jar [y/n] (y)?
Configuring remote database connection properties…
WARNING: Before starting Ambari Server, you must run the following DDL directly from the database shell to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
Proceed with configuring remote database connection properties [y/n] (y)?
Extracting system views…
ambari-admin-2.8.0.0.0.jar

Ambari repo file doesn't contain latest json url, skipping repoinfos modification
Adjusting ambari-server permissions and ownership…
Ambari Server 'setup' completed successfully.

[root@bigtop01 yum.repos.d]# mysql -uroot -pabcd1234 -e "use ambari;source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;show tables;"
+-------------------------------+
| Tables_in_ambari              |
+-------------------------------+
| ClusterHostMapping |
| QRTZ_BLOB_TRIGGERS |
| QRTZ_CALENDARS |
| QRTZ_CRON_TRIGGERS |
| QRTZ_FIRED_TRIGGERS |
| QRTZ_JOB_DETAILS |
| QRTZ_LOCKS |
| QRTZ_PAUSED_TRIGGER_GRPS |
| QRTZ_SCHEDULER_STATE |
| QRTZ_SIMPLE_TRIGGERS |
| QRTZ_SIMPROP_TRIGGERS |
| QRTZ_TRIGGERS |
| adminpermission |
| adminprincipal |
| adminprincipaltype |
| adminprivilege |
| adminresource |
| adminresourcetype |
| alert_current |
| alert_definition |
| alert_group |
| alert_group_target |
| alert_grouping |
| alert_history |
| alert_notice |
| alert_target |
| alert_target_states |
| ambari_configuration |
| ambari_operation_history |
| ambari_sequences |
| artifact |
| blueprint |
| blueprint_configuration |
| blueprint_setting |
| clusterconfig |
| clusters |
| clusterservices |
| clusterstate |
| confgroupclusterconfigmapping |
| configgroup |
| configgrouphostmapping |
| execution_command |
| extension |
| extensionlink |
| groups |
| host_role_command |
| host_version |
| hostcomponentdesiredstate |
| hostcomponentstate |
| hostconfigmapping |
| hostgroup |
| hostgroup_component |
| hostgroup_configuration |
| hosts |
| hoststate |
| kerberos_descriptor |
| kerberos_keytab |
| kerberos_keytab_principal |
| kerberos_principal |
| key_value_store |
| kkp_mapping_service |
| members |
| metainfo |
| mpacks |
| permission_roleauthorization |
| registries |
| remoteambaricluster |
| remoteambariclusterservice |
| repo_applicable_services |
| repo_definition |
| repo_os |
| repo_tags |
| repo_version |
| request |
| requestoperationlevel |
| requestresourcefilter |
| requestschedule |
| requestschedulebatchrequest |
| role_success_criteria |
| roleauthorization |
| servicecomponent_version |
| servicecomponentdesiredstate |
| serviceconfig |
| serviceconfighosts |
| serviceconfigmapping |
| servicedesiredstate |
| setting |
| stack |
| stage |
| topology_host_info |
| topology_host_request |
| topology_host_task |
| topology_hostgroup |
| topology_logical_request |
| topology_logical_task |
| topology_request |
| upgrade |
| upgrade_group |
| upgrade_history |
| upgrade_item |
| user_authentication |
| users |
| viewentity |
| viewinstance |
| viewinstancedata |
| viewinstanceproperty |
| viewmain |
| viewparameter |
| viewresource |
| viewurl |
| widget |
| widget_layout |
| widget_layout_user_widget |
+-------------------------------+
[root@bigtop01 yum.repos.d]#

[root@bigtop01 ~]# ambari-server start
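If startup succeeds, the server listens on port 8080 (the Ambari default); a quick check (a small verification sketch):

[root@bigtop01 ~]# ambari-server status
[root@bigtop01 ~]# curl -s -o /dev/null -w "%{http_code}\n" http://192.168.130.171:8080   # expect 200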

Log in to the Ambari deployment UI with admin / admin.

When registering hosts in the cluster install wizard, paste in the root private key generated earlier:

[root@bigtop01 ~]# cat ~/.ssh/id_rsa

Admin Name : admin
Cluster Name : SirunBigdata
Total Hosts : 6 (6 new)
Repositories:
• redhat7 (BIGTOP-3.2.0):
http://192.168.103.14/bigdatarepo/
Services:
• HDFS
o DataNode : 6 hosts
o NameNode : bigtop01
o SNameNode : bigtop02
• YARN
o NodeManager : 6 hosts
o ResourceManager : bigtop02
• MapReduce2
o History Server : bigtop02
• Tez
o Clients : 6 hosts
• Hive
o Metastore : bigtop02
o HiveServer2 : bigtop03
o WebHCat Server : bigtop01
o Database : Existing MySQL / MariaDB Database
• HBase
o Master : 3 hosts
o RegionServer : 6 hosts
• ZooKeeper
o Server : 6 hosts
• Ambari Metrics
o Metrics Collector : bigtop04
o Grafana : bigtop01
• Kafka
o Broker : 6 hosts
• Spark
o History Server : bigtop01
o Thrift Server : 6 hosts
• Zeppelin
o Server : bigtop01
• Flink
o History Server : bigtop01
• Solr
o Server : bigtop01

Troubleshooting

stderr:
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 355, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/BIGTOP/3.2.0/services/SPARK/package/scripts/spark_thrift_server.py", line 53, in start
spark_service('sparkthriftserver', upgrade_type=upgrade_type, action='start')
File "/var/lib/ambari-agent/cache/stacks/BIGTOP/3.2.0/services/SPARK/package/scripts/spark_service.py", line 157, in spark_service
raise ComponentIsNotRunning("Something goes wrong, STS connection was not created but STS process still alive. "
ComponentIsNotRunning: Something goes wrong, STS connection was not created but STS process still alive. Potential problems: Hive/YARN doesn't work correctly or too slow. For more information check STS logs.
stdout:
2023-09-14 15:31:39,346 - Stack Feature Version Info: Cluster Stack=3.2.0, Command Stack=None, Command Version=3.2.0 -> 3.2.0
2023-09-14 15:31:39,352 - Using hadoop conf dir: /etc/hadoop/conf
2023-09-14 15:31:39,687 - Stack Feature Version Info: Cluster Stack=3.2.0, Command Stack=None, Command Version=3.2.0 -> 3.2.0
2023-09-14 15:31:39,688 - Using hadoop conf dir: /etc/hadoop/conf
2023-09-14 15:31:39,690 - Group[‘flink’] {}
2023-09-14 15:31:39,692 - Group[‘spark’] {}
2023-09-14 15:31:39,692 - Group[‘hdfs’] {}
2023-09-14 15:31:39,692 - Group[‘zeppelin’] {}
2023-09-14 15:31:39,693 - Group[‘hadoop’] {}
2023-09-14 15:31:39,693 - User[‘hive’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,695 - User[‘zookeeper’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,696 - User[‘ams’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,697 - User[‘tez’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,699 - User[‘zeppelin’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘zeppelin’, ‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,700 - User[‘flink’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘flink’, ‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,701 - User[‘spark’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘spark’, ‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,702 - User[‘ambari-qa’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,704 - User[‘solr’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,705 - User[‘kafka’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,706 - User[‘hdfs’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hdfs’, ‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,707 - User[‘yarn’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,709 - User[‘mapred’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,710 - User[‘hbase’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,712 - User[‘hcat’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:31:39,713 - File[‘/var/lib/ambari-agent/tmp/changeUid.sh’] {‘content’: StaticFile(‘changeToSecureUid.sh’), ‘mode’: 0555}
2023-09-14 15:31:39,715 - Execute[‘/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0’] {‘not_if’: ‘(test $(id -u ambari-qa) -gt 1000) || (false)’}
2023-09-14 15:31:39,724 - Skipping Execute[‘/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0’] due to not_if
2023-09-14 15:31:39,725 - Directory[‘/tmp/hbase-hbase’] {‘owner’: ‘hbase’, ‘create_parents’: True, ‘mode’: 0775, ‘cd_access’: ‘a’}
2023-09-14 15:31:39,726 - File[‘/var/lib/ambari-agent/tmp/changeUid.sh’] {‘content’: StaticFile(‘changeToSecureUid.sh’), ‘mode’: 0555}
2023-09-14 15:31:39,728 - File[‘/var/lib/ambari-agent/tmp/changeUid.sh’] {‘content’: StaticFile(‘changeToSecureUid.sh’), ‘mode’: 0555}
2023-09-14 15:31:39,729 - call[‘/var/lib/ambari-agent/tmp/changeUid.sh hbase’] {}
2023-09-14 15:31:39,743 - call returned (0, ‘1013’)
2023-09-14 15:31:39,744 - Execute[‘/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1013’] {‘not_if’: ‘(test $(id -u hbase) -gt 1000) || (false)’}
2023-09-14 15:31:39,756 - Skipping Execute[‘/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1013’] due to not_if
2023-09-14 15:31:39,757 - Group[‘hdfs’] {}
2023-09-14 15:31:39,757 - User[‘hdfs’] {‘fetch_nonlocal_groups’: True, ‘groups’: [‘hdfs’, ‘hadoop’, ‘hdfs’]}
2023-09-14 15:31:39,758 - FS Type: HDFS
2023-09-14 15:31:39,758 - Directory[‘/etc/hadoop’] {‘mode’: 0755}
2023-09-14 15:31:39,783 - File[‘/etc/hadoop/conf/hadoop-env.sh’] {‘content’: InlineTemplate(…), ‘owner’: ‘hdfs’, ‘group’: ‘hadoop’}
2023-09-14 15:31:39,784 - Directory[‘/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir’] {‘owner’: ‘hdfs’, ‘group’: ‘hadoop’, ‘mode’: 01777}
2023-09-14 15:31:39,802 - Execute[(‘setenforce’, ‘0’)] {‘not_if’: ‘(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)’, ‘sudo’: True, ‘only_if’: ‘test -f /selinux/enforce’}
2023-09-14 15:31:39,817 - Skipping Execute[(‘setenforce’, ‘0’)] due to not_if
2023-09-14 15:31:39,818 - Directory[‘/var/log/hadoop’] {‘owner’: ‘root’, ‘create_parents’: True, ‘group’: ‘hadoop’, ‘mode’: 0775, ‘cd_access’: ‘a’}
2023-09-14 15:31:39,823 - Directory[‘/var/run/hadoop’] {‘owner’: ‘root’, ‘create_parents’: True, ‘group’: ‘root’, ‘cd_access’: ‘a’}
2023-09-14 15:31:39,825 - Directory[‘/var/run/hadoop/hdfs’] {‘owner’: ‘hdfs’, ‘cd_access’: ‘a’}
2023-09-14 15:31:39,826 - Directory[‘/tmp/hadoop-hdfs’] {‘owner’: ‘hdfs’, ‘create_parents’: True, ‘cd_access’: ‘a’}
2023-09-14 15:31:39,838 - File[‘/etc/hadoop/conf/commons-logging.properties’] {‘content’: Template(‘commons-logging.properties.j2’), ‘owner’: ‘hdfs’}
2023-09-14 15:31:39,843 - File[‘/etc/hadoop/conf/health_check’] {‘content’: Template(‘health_check.j2’), ‘owner’: ‘hdfs’}
2023-09-14 15:31:39,857 - File[‘/etc/hadoop/conf/log4j.properties’] {‘content’: InlineTemplate(…), ‘owner’: ‘hdfs’, ‘group’: ‘hadoop’, ‘mode’: 0644}
2023-09-14 15:31:39,859 - File[‘/var/lib/ambari-agent/lib/fast-hdfs-resource.jar’] {‘content’: StaticFile(‘fast-hdfs-resource.jar’), ‘mode’: 0644}
2023-09-14 15:31:39,939 - File[‘/etc/hadoop/conf/hadoop-metrics2.properties’] {‘content’: Template(‘hadoop-metrics2.properties.j2’), ‘owner’: ‘hdfs’, ‘group’: ‘hadoop’}
2023-09-14 15:31:39,941 - File[‘/etc/hadoop/conf/task-log4j.properties’] {‘content’: StaticFile(‘task-log4j.properties’), ‘mode’: 0755}
2023-09-14 15:31:39,943 - File[‘/etc/hadoop/conf/configuration.xsl’] {‘owner’: ‘hdfs’, ‘group’: ‘hadoop’}
2023-09-14 15:31:39,954 - File[‘/etc/hadoop/conf/topology_mappings.data’] {‘owner’: ‘hdfs’, ‘content’: Template(‘topology_mappings.data.j2’), ‘only_if’: ‘test -d /etc/hadoop/conf’, ‘group’: ‘hadoop’, ‘mode’: 0644}
2023-09-14 15:31:39,964 - File[‘/etc/hadoop/conf/topology_script.py’] {‘content’: StaticFile(‘topology_script.py’), ‘only_if’: ‘test -d /etc/hadoop/conf’, ‘mode’: 0755}
2023-09-14 15:31:39,972 - Skipping unlimited key JCE policy check and setup since the Java VM is not managed by Ambari
2023-09-14 15:31:42,720 - Using hadoop conf dir: /etc/hadoop/conf
2023-09-14 15:31:42,724 - Directory[‘/var/run/spark’] {‘owner’: ‘spark’, ‘create_parents’: True, ‘group’: ‘hadoop’, ‘mode’: 0775}
2023-09-14 15:31:42,726 - Directory[‘/var/log/spark’] {‘owner’: ‘spark’, ‘group’: ‘hadoop’, ‘create_parents’: True, ‘mode’: 0775}
2023-09-14 15:31:42,726 - Directory[‘/var/lib/spark’] {‘owner’: ‘spark’, ‘group’: ‘hadoop’, ‘create_parents’: True, ‘mode’: 0775}
2023-09-14 15:31:42,727 - Directory[‘/var/lib/spark/shs_db’] {‘owner’: ‘spark’, ‘group’: ‘hadoop’, ‘create_parents’: True, ‘mode’: 0775}
2023-09-14 15:31:42,728 - HdfsResource[‘/user/spark’] {‘security_enabled’: False, ‘hadoop_bin_dir’: ‘/usr/bigtop/3.2.0/usr/bin’, ‘keytab’: [EMPTY], ‘dfs_type’: ‘HDFS’, ‘default_fs’: ‘hdfs://bigtop01:8020’, ‘hdfs_resource_ignore_file’: ‘/var/lib/ambari-agent/data/.hdfs_resource_ignore’, ‘hdfs_site’: …, ‘kinit_path_local’: ‘kinit’, ‘principal_name’: [EMPTY], ‘user’: ‘hdfs’, ‘owner’: ‘spark’, ‘hadoop_conf_dir’: ‘/etc/hadoop/conf’, ‘type’: ‘directory’, ‘action’: [‘create_on_execute’], ‘immutable_paths’: [u’/mr-history/done’, u’/warehouse/tablespace/managed/hive’, u’/warehouse/tablespace/external/hive’, u’/app-logs’, u’/tmp’], ‘mode’: 0775}
2023-09-14 15:31:42,731 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '”‘"’%{http_code}‘"’“’ -X GET -d '”‘"’‘"’“’ -H '”‘“‘Content-Length: 0’”’“’ '”‘“‘http://bigtop01:50070/webhdfs/v1/user/spark?op=GETFILESTATUS&user.name=hdfs’”’“’ 1>/tmp/tmpYO164I 2>/tmp/tmpznJHwc’‘] {‘logoutput’: None, ‘quiet’: False}
2023-09-14 15:31:42,910 - call returned (0, ‘’)
2023-09-14 15:31:42,911 - get_user_call_output returned (0, u’{“FileStatus”:{“accessTime”:0,“blockSize”:0,“childrenNum”:1,“fileId”:17147,“group”:“hdfs”,“length”:0,“modificationTime”:1694676656725,“owner”:“spark”,“pathSuffix”:”“,“permission”:“775”,“replication”:0,“storagePolicy”:0,“type”:“DIRECTORY”}}200’, u’')
2023-09-14 15:31:42,914 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '”‘"’%{http_code}‘"’“’ -X GET -d '”‘"’‘"’“’ -H '”‘“‘Content-Length: 0’”’“’ '”‘“‘http://bigtop01:50070/webhdfs/v1/user/spark?op=GETFILESTATUS&user.name=hdfs’”’“’ 1>/tmp/tmpEr3d0C 2>/tmp/tmpr9DMdK’‘] {‘logoutput’: None, ‘quiet’: False}
2023-09-14 15:31:43,097 - call returned (0, ‘’)
2023-09-14 15:31:43,098 - get_user_call_output returned (0, u’{“FileStatus”:{“accessTime”:0,“blockSize”:0,“childrenNum”:1,“fileId”:17147,“group”:“hdfs”,“length”:0,“modificationTime”:1694676656725,“owner”:“spark”,“pathSuffix”:”“,“permission”:“775”,“replication”:0,“storagePolicy”:0,“type”:“DIRECTORY”}}200’, u’‘)
2023-09-14 15:31:43,100 - HdfsResource[’/warehouse/tablespace/managed/hive’] {‘security_enabled’: False, ‘hadoop_bin_dir’: ‘/usr/bigtop/3.2.0/usr/bin’, ‘keytab’: [EMPTY], ‘dfs_type’: ‘HDFS’, ‘default_fs’: ‘hdfs://bigtop01:8020’, ‘hdfs_resource_ignore_file’: ‘/var/lib/ambari-agent/data/.hdfs_resource_ignore’, ‘hdfs_site’: …, ‘kinit_path_local’: ‘kinit’, ‘principal_name’: [EMPTY], ‘user’: ‘hdfs’, ‘owner’: ‘spark’, ‘hadoop_conf_dir’: ‘/etc/hadoop/conf’, ‘type’: ‘directory’, ‘action’: [‘create_on_execute’], ‘immutable_paths’: [u’/mr-history/done’, u’/warehouse/tablespace/managed/hive’, u’/warehouse/tablespace/external/hive’, u’/app-logs’, u’/tmp’], ‘mode’: 0777}
2023-09-14 15:31:43,103 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '”‘"’%{http_code}‘"’“’ -X GET -d '”‘"’‘"’“’ -H '”‘“‘Content-Length: 0’”’“’ '”‘“‘http://bigtop01:50070/webhdfs/v1/warehouse/tablespace/managed/hive?op=GETFILESTATUS&user.name=hdfs’”’“’ 1>/tmp/tmpcprK5l 2>/tmp/tmpXK1GjG’‘] {‘logoutput’: None, ‘quiet’: False}
2023-09-14 15:31:43,282 - call returned (0, ‘’)
2023-09-14 15:31:43,282 - get_user_call_output returned (0, u’{“FileStatus”:{“accessTime”:0,“aclBit”:true,“blockSize”:0,“childrenNum”:2,“fileId”:17017,“group”:“hadoop”,“length”:0,“modificationTime”:1694676587583,“owner”:“hive”,“pathSuffix”:”",“permission”:“770”,“replication”:0,“storagePolicy”:0,“type”:“DIRECTORY”}}200’, u’‘)
2023-09-14 15:31:43,284 - Skipping the operation for not managed DFS directory /warehouse/tablespace/managed/hive since immutable_paths contains it.
2023-09-14 15:31:43,285 - HdfsResource[None] {‘security_enabled’: False, ‘hadoop_bin_dir’: ‘/usr/bigtop/3.2.0/usr/bin’, ‘keytab’: [EMPTY], ‘dfs_type’: ‘HDFS’, ‘default_fs’: ‘hdfs://bigtop01:8020’, ‘hdfs_resource_ignore_file’: ‘/var/lib/ambari-agent/data/.hdfs_resource_ignore’, ‘hdfs_site’: …, ‘kinit_path_local’: ‘kinit’, ‘principal_name’: [EMPTY], ‘user’: ‘hdfs’, ‘action’: [‘execute’], ‘hadoop_conf_dir’: ‘/etc/hadoop/conf’, ‘immutable_paths’: [u’/mr-history/done’, u’/warehouse/tablespace/managed/hive’, u’/warehouse/tablespace/external/hive’, u’/app-logs’, u’/tmp’]}
2023-09-14 15:31:43,301 - Directory[‘/usr/lib/ambari-logsearch-logfeeder/conf’] {‘create_parents’: True, ‘mode’: 0755, ‘cd_access’: ‘a’}
2023-09-14 15:31:43,302 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-.json
2023-09-14 15:31:43,303 - File[‘/usr/lib/ambari-logsearch-logfeeder/conf/input.config-.json’] {‘content’: Template(‘input.config-spark.json.j2’), ‘mode’: 0644}
2023-09-14 15:31:43,305 - PropertiesFile[‘/etc/spark/conf/spark-defaults.conf’] {‘owner’: ‘spark’, ‘key_value_delimiter’: ’ ‘, ‘group’: ‘spark’, ‘mode’: 0644, ‘properties’: …}
2023-09-14 15:31:43,318 - Generating properties file: /etc/spark/conf/spark-defaults.conf
2023-09-14 15:31:43,319 - File[’/etc/spark/conf/spark-defaults.conf’] {‘owner’: ‘spark’, ‘content’: InlineTemplate(…), ‘group’: ‘spark’, ‘mode’: 0644, ‘encoding’: ‘UTF-8’}
2023-09-14 15:31:43,371 - Writing File[‘/etc/spark/conf/spark-defaults.conf’] because contents don’t match
2023-09-14 15:31:43,372 - Changing owner for /tmp/tmp1694676703.37_854 from 0 to spark
2023-09-14 15:31:43,372 - Changing group for /tmp/tmp1694676703.37_854 from 0 to spark
2023-09-14 15:31:43,372 - Moving /tmp/tmp1694676703.37_854 to /etc/spark/conf/spark-defaults.conf
2023-09-14 15:31:43,395 - File[‘/etc/spark/conf/spark-env.sh’] {‘content’: InlineTemplate(…), ‘owner’: ‘spark’, ‘group’: ‘spark’, ‘mode’: 0644}
2023-09-14 15:31:43,396 - Writing File[‘/etc/spark/conf/spark-env.sh’] because contents don’t match
2023-09-14 15:31:43,397 - Changing owner for /tmp/tmp1694676703.4_221 from 0 to spark
2023-09-14 15:31:43,397 - Changing group for /tmp/tmp1694676703.4_221 from 0 to spark
2023-09-14 15:31:43,398 - Moving /tmp/tmp1694676703.4_221 to /etc/spark/conf/spark-env.sh
2023-09-14 15:31:43,408 - File[‘/etc/spark/conf/log4j.properties’] {‘content’: …, ‘owner’: ‘spark’, ‘group’: ‘spark’, ‘mode’: 0644}
2023-09-14 15:31:43,417 - File[‘/etc/spark/conf/metrics.properties’] {‘content’: InlineTemplate(…), ‘owner’: ‘spark’, ‘group’: ‘spark’, ‘mode’: 0644}
2023-09-14 15:31:43,419 - XmlConfig[‘hive-site.xml’] {‘owner’: ‘spark’, ‘group’: ‘spark’, ‘mode’: 0644, ‘conf_dir’: ‘/etc/spark/conf’, ‘configurations’: …}
2023-09-14 15:31:43,445 - Generating config: /etc/spark/conf/hive-site.xml
2023-09-14 15:31:43,446 - File[‘/etc/spark/conf/hive-site.xml’] {‘owner’: ‘spark’, ‘content’: InlineTemplate(…), ‘group’: ‘spark’, ‘mode’: 0644, ‘encoding’: ‘UTF-8’}
2023-09-14 15:31:43,466 - File[‘/etc/spark/conf/spark-thrift-fairscheduler.xml’] {‘content’: InlineTemplate(…), ‘owner’: ‘spark’, ‘group’: ‘spark’, ‘mode’: 0755}
2023-09-14 15:31:43,468 - Execute[‘/usr/bigtop/current/spark-thriftserver/sbin/start-thriftserver.sh --properties-file /etc/spark/conf/spark-defaults.conf ‘] {‘environment’: {‘JAVA_HOME’: ‘/etc/alternatives/java_sdk_1.8.0’}, ‘not_if’: ‘ambari-sudo.sh -H -E test -f /var/run/spark/spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1.pid && ambari-sudo.sh -H -E pgrep -F /var/run/spark/spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1.pid’, ‘user’: ‘spark’}
2023-09-14 15:32:16,788 - Check connection to STS is created.
2023-09-14 15:32:16,790 - Execute['! /usr/bigtop/current/spark-thriftserver/bin/beeline -u 'jdbc:hive2://bigtop01:10016/default;transportMode=binary' -e '' 2>&1| awk '{print}'|grep -i -e 'Connection refused' -e 'Invalid URL' -e 'Error: Could not open''] {'path': ['/usr/bigtop/current/spark-thriftserver/bin/beeline'], 'user': 'spark', 'timeout': 60.0}
2023-09-14 15:32:19,270 - Connection to STS still is not created.
2023-09-14 15:32:19,270 - Check STS process status.
(the same check/retry cycle repeats roughly every 30 seconds through 15:39:54)
2023-09-14 15:39:52,107 - Check connection to STS is created.
2023-09-14 15:39:54,626 - Connection to STS still is not created.
2023-09-14 15:39:54,627 - Check STS process status.

Command failed after 1 tries

The second failure (the Spark Thrift Server on bigtop06) hit the stale-pid path instead:

stderr:
Traceback (most recent call last):
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 355, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/BIGTOP/3.2.0/services/SPARK/package/scripts/spark_thrift_server.py", line 53, in start
spark_service('sparkthriftserver', upgrade_type=upgrade_type, action='start')
File "/var/lib/ambari-agent/cache/stacks/BIGTOP/3.2.0/services/SPARK/package/scripts/spark_service.py", line 152, in spark_service
check_process_status(status_params.spark_thrift_server_pid_file)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/check_process_status.py", line 61, in check_process_status
raise ComponentIsNotRunning()
ComponentIsNotRunning
stdout:
2023-09-14 15:30:46,152 - Stack Feature Version Info: Cluster Stack=3.2.0, Command Stack=None, Command Version=3.2.0 -> 3.2.0
2023-09-14 15:30:46,163 - Using hadoop conf dir: /etc/hadoop/conf
2023-09-14 15:30:46,520 - Stack Feature Version Info: Cluster Stack=3.2.0, Command Stack=None, Command Version=3.2.0 -> 3.2.0
2023-09-14 15:30:46,521 - Using hadoop conf dir: /etc/hadoop/conf
2023-09-14 15:30:46,526 - Group[‘flink’] {}
2023-09-14 15:30:46,529 - Group[‘spark’] {}
2023-09-14 15:30:46,529 - Group[‘hdfs’] {}
2023-09-14 15:30:46,530 - Group[‘zeppelin’] {}
2023-09-14 15:30:46,530 - Group[‘hadoop’] {}
2023-09-14 15:30:46,532 - User[‘hive’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,535 - User[‘zookeeper’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,537 - User[‘ams’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,540 - User[‘tez’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,542 - User[‘zeppelin’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘zeppelin’, ‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,545 - User[‘flink’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘flink’, ‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,548 - User[‘spark’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘spark’, ‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,550 - User[‘ambari-qa’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,553 - User[‘solr’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,555 - User[‘kafka’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,558 - User[‘hdfs’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hdfs’, ‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,561 - User[‘yarn’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,563 - User[‘mapred’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,565 - User[‘hbase’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,568 - User[‘hcat’] {‘gid’: ‘hadoop’, ‘fetch_nonlocal_groups’: True, ‘groups’: [‘hadoop’], ‘uid’: None}
2023-09-14 15:30:46,570 - File[‘/var/lib/ambari-agent/tmp/changeUid.sh’] {‘content’: StaticFile(‘changeToSecureUid.sh’), ‘mode’: 0555}
2023-09-14 15:30:46,574 - Execute[‘/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0’] {‘not_if’: ‘(test $(id -u ambari-qa) -gt 1000) || (false)’}
2023-09-14 15:30:46,588 - Skipping Execute[‘/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0’] due to not_if
2023-09-14 15:30:46,589 - Directory[‘/tmp/hbase-hbase’] {‘owner’: ‘hbase’, ‘create_parents’: True, ‘mode’: 0775, ‘cd_access’: ‘a’}
2023-09-14 15:30:46,591 - File[‘/var/lib/ambari-agent/tmp/changeUid.sh’] {‘content’: StaticFile(‘changeToSecureUid.sh’), ‘mode’: 0555}
2023-09-14 15:30:46,594 - File[‘/var/lib/ambari-agent/tmp/changeUid.sh’] {‘content’: StaticFile(‘changeToSecureUid.sh’), ‘mode’: 0555}
2023-09-14 15:30:46,596 - call[‘/var/lib/ambari-agent/tmp/changeUid.sh hbase’] {}
2023-09-14 15:30:46,617 - call returned (0, ‘1013’)
2023-09-14 15:30:46,619 - Execute[‘/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1013’] {‘not_if’: ‘(test $(id -u hbase) -gt 1000) || (false)’}
2023-09-14 15:30:46,631 - Skipping Execute[‘/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1013’] due to not_if
2023-09-14 15:30:46,632 - Group[‘hdfs’] {}
2023-09-14 15:30:46,633 - User[‘hdfs’] {‘fetch_nonlocal_groups’: True, ‘groups’: [‘hdfs’, ‘hadoop’, ‘hdfs’]}
2023-09-14 15:30:46,635 - FS Type: HDFS
2023-09-14 15:30:46,635 - Directory[‘/etc/hadoop’] {‘mode’: 0755}
2023-09-14 15:30:46,682 - File[‘/etc/hadoop/conf/hadoop-env.sh’] {‘content’: InlineTemplate(…), ‘owner’: ‘hdfs’, ‘group’: ‘hadoop’}
2023-09-14 15:30:46,684 - Directory[‘/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir’] {‘owner’: ‘hdfs’, ‘group’: ‘hadoop’, ‘mode’: 01777}
2023-09-14 15:30:46,717 - Execute[(‘setenforce’, ‘0’)] {‘not_if’: ‘(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)’, ‘sudo’: True, ‘only_if’: ‘test -f /selinux/enforce’}
2023-09-14 15:30:46,735 - Skipping Execute[(‘setenforce’, ‘0’)] due to not_if
2023-09-14 15:30:46,736 - Directory[‘/var/log/hadoop’] {‘owner’: ‘root’, ‘create_parents’: True, ‘group’: ‘hadoop’, ‘mode’: 0775, ‘cd_access’: ‘a’}
2023-09-14 15:30:46,742 - Directory[‘/var/run/hadoop’] {‘owner’: ‘root’, ‘create_parents’: True, ‘group’: ‘root’, ‘cd_access’: ‘a’}
2023-09-14 15:30:46,743 - Directory[‘/var/run/hadoop/hdfs’] {‘owner’: ‘hdfs’, ‘cd_access’: ‘a’}
2023-09-14 15:30:46,744 - Directory[‘/tmp/hadoop-hdfs’] {‘owner’: ‘hdfs’, ‘create_parents’: True, ‘cd_access’: ‘a’}
2023-09-14 15:30:46,755 - File[‘/etc/hadoop/conf/commons-logging.properties’] {‘content’: Template(‘commons-logging.properties.j2’), ‘owner’: ‘hdfs’}
2023-09-14 15:30:46,760 - File[‘/etc/hadoop/conf/health_check’] {‘content’: Template(‘health_check.j2’), ‘owner’: ‘hdfs’}
2023-09-14 15:30:46,776 - File[‘/etc/hadoop/conf/log4j.properties’] {‘content’: InlineTemplate(…), ‘owner’: ‘hdfs’, ‘group’: ‘hadoop’, ‘mode’: 0644}
2023-09-14 15:30:46,778 - File[‘/var/lib/ambari-agent/lib/fast-hdfs-resource.jar’] {‘content’: StaticFile(‘fast-hdfs-resource.jar’), ‘mode’: 0644}
2023-09-14 15:30:46,853 - File[‘/etc/hadoop/conf/hadoop-metrics2.properties’] {‘content’: Template(‘hadoop-metrics2.properties.j2’), ‘owner’: ‘hdfs’, ‘group’: ‘hadoop’}
2023-09-14 15:30:46,855 - File[‘/etc/hadoop/conf/task-log4j.properties’] {‘content’: StaticFile(‘task-log4j.properties’), ‘mode’: 0755}
2023-09-14 15:30:46,857 - File[‘/etc/hadoop/conf/configuration.xsl’] {‘owner’: ‘hdfs’, ‘group’: ‘hadoop’}
2023-09-14 15:30:46,869 - File[‘/etc/hadoop/conf/topology_mappings.data’] {‘owner’: ‘hdfs’, ‘content’: Template(‘topology_mappings.data.j2’), ‘only_if’: ‘test -d /etc/hadoop/conf’, ‘group’: ‘hadoop’, ‘mode’: 0644}
2023-09-14 15:30:46,880 - File[‘/etc/hadoop/conf/topology_script.py’] {‘content’: StaticFile(‘topology_script.py’), ‘only_if’: ‘test -d /etc/hadoop/conf’, ‘mode’: 0755}
2023-09-14 15:30:46,890 - Skipping unlimited key JCE policy check and setup since the Java VM is not managed by Ambari
2023-09-14 15:30:47,427 - Using hadoop conf dir: /etc/hadoop/conf
2023-09-14 15:30:47,432 - Directory[‘/var/run/spark’] {‘owner’: ‘spark’, ‘create_parents’: True, ‘group’: ‘hadoop’, ‘mode’: 0775}
2023-09-14 15:30:47,433 - Directory[‘/var/log/spark’] {‘owner’: ‘spark’, ‘group’: ‘hadoop’, ‘create_parents’: True, ‘mode’: 0775}
2023-09-14 15:30:47,434 - Directory[‘/var/lib/spark’] {‘owner’: ‘spark’, ‘group’: ‘hadoop’, ‘create_parents’: True, ‘mode’: 0775}
2023-09-14 15:30:47,434 - Creating directory Directory[‘/var/lib/spark’] since it doesn’t exist.
2023-09-14 15:30:47,434 - Changing owner for /var/lib/spark from 0 to spark
2023-09-14 15:30:47,434 - Changing group for /var/lib/spark from 0 to hadoop
2023-09-14 15:30:47,434 - Changing permission for /var/lib/spark from 755 to 775
2023-09-14 15:30:47,435 - Directory[‘/var/lib/spark/shs_db’] {‘owner’: ‘spark’, ‘group’: ‘hadoop’, ‘create_parents’: True, ‘mode’: 0775}
2023-09-14 15:30:47,435 - Creating directory Directory[‘/var/lib/spark/shs_db’] since it doesn’t exist.
2023-09-14 15:30:47,435 - Changing owner for /var/lib/spark/shs_db from 0 to spark
2023-09-14 15:30:47,435 - Changing group for /var/lib/spark/shs_db from 0 to hadoop
2023-09-14 15:30:47,436 - Changing permission for /var/lib/spark/shs_db from 755 to 775
2023-09-14 15:30:47,436 - HdfsResource[‘/user/spark’] {‘security_enabled’: False, ‘hadoop_bin_dir’: ‘/usr/bigtop/3.2.0/usr/bin’, ‘keytab’: [EMPTY], ‘dfs_type’: ‘HDFS’, ‘default_fs’: ‘hdfs://bigtop01:8020’, ‘hdfs_resource_ignore_file’: ‘/var/lib/ambari-agent/data/.hdfs_resource_ignore’, ‘hdfs_site’: …, ‘kinit_path_local’: ‘kinit’, ‘principal_name’: [EMPTY], ‘user’: ‘hdfs’, ‘owner’: ‘spark’, ‘hadoop_conf_dir’: ‘/etc/hadoop/conf’, ‘type’: ‘directory’, ‘action’: [‘create_on_execute’], ‘immutable_paths’: [u’/mr-history/done’, u’/warehouse/tablespace/managed/hive’, u’/warehouse/tablespace/external/hive’, u’/app-logs’, u’/tmp’], ‘mode’: 0775}
2023-09-14 15:30:47,439 - call[‘ambari-sudo.sh su hdfs -l -s /bin/bash -c ‘curl -sS -L -w ‘"’"’%{http_code}’“'”’ -X GET -d ‘"’“‘’”‘"’ -H ‘"’“‘Content-Length: 0’”‘"’ ‘"’“‘http://bigtop01:50070/webhdfs/v1/user/spark?op=GETFILESTATUS&user.name=hdfs’”‘"’ 1>/tmp/tmpNHcTgE 2>/tmp/tmpc0N5f2’‘] {‘logoutput’: None, ‘quiet’: False}
2023-09-14 15:30:47,586 - call returned (0, ‘’)
2023-09-14 15:30:47,587 - get_user_call_output returned (0, u’{“FileStatus”:{“accessTime”:0,“blockSize”:0,“childrenNum”:0,“fileId”:17147,“group”:“hdfs”,“length”:0,“modificationTime”:1694676647530,“owner”:“hdfs”,“pathSuffix”:“”,“permission”:“755”,“replication”:0,“storagePolicy”:0,“type”:“DIRECTORY”}}200’, u’‘)
2023-09-14 15:30:47,590 - call[‘ambari-sudo.sh su hdfs -l -s /bin/bash -c ‘curl -sS -L -w ‘"’"’%{http_code}’"’“’ -X GET -d '”‘"’‘"’“’ -H '”‘“‘Content-Length: 0’”’“’ '”‘“‘http://bigtop01:50070/webhdfs/v1/user/spark?op=GETFILESTATUS&user.name=hdfs’”’“’ 1>/tmp/tmpErRm1R 2>/tmp/tmpwOXyaP’‘] {‘logoutput’: None, ‘quiet’: False}
2023-09-14 15:30:47,758 - call returned (0, ‘’)
2023-09-14 15:30:47,759 - get_user_call_output returned (0, u’{“FileStatus”:{“accessTime”:0,“blockSize”:0,“childrenNum”:0,“fileId”:17147,“group”:“hdfs”,“length”:0,“modificationTime”:1694676647530,“owner”:“hdfs”,“pathSuffix”:”“,“permission”:“755”,“replication”:0,“storagePolicy”:0,“type”:“DIRECTORY”}}200’, u’')
2023-09-14 15:30:47,762 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '”‘"’%{http_code}‘"’“’ -X PUT -d '”‘"’‘"’“’ -H '”‘“‘Content-Length: 0’”’“’ '”‘“‘http://bigtop01:50070/webhdfs/v1/user/spark?op=SETPERMISSION&user.name=hdfs&permission=775’”’“’ 1>/tmp/tmpnZq0sw 2>/tmp/tmplQsFHN’‘] {‘logoutput’: None, ‘quiet’: False}
2023-09-14 15:30:47,937 - call returned (0, ‘’)
2023-09-14 15:30:47,938 - get_user_call_output returned (0, u’200’, u’')
2023-09-14 15:30:47,941 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '”‘"’%{http_code}‘"’“’ -X PUT -d '”‘"’‘"’“’ -H '”‘“‘Content-Length: 0’”’“’ '”‘“‘http://bigtop01:50070/webhdfs/v1/user/spark?op=SETOWNER&owner=spark&group=&user.name=hdfs’”’“’ 1>/tmp/tmpSGMaKA 2>/tmp/tmpSVUrHS’‘] {‘logoutput’: None, ‘quiet’: False}
2023-09-14 15:30:48,119 - call returned (0, ‘’)
2023-09-14 15:30:48,119 - get_user_call_output returned (0, u’200’, u’‘)
2023-09-14 15:30:48,122 - HdfsResource[’/warehouse/tablespace/managed/hive’] {‘security_enabled’: False, ‘hadoop_bin_dir’: ‘/usr/bigtop/3.2.0/usr/bin’, ‘keytab’: [EMPTY], ‘dfs_type’: ‘HDFS’, ‘default_fs’: ‘hdfs://bigtop01:8020’, ‘hdfs_resource_ignore_file’: ‘/var/lib/ambari-agent/data/.hdfs_resource_ignore’, ‘hdfs_site’: …, ‘kinit_path_local’: ‘kinit’, ‘principal_name’: [EMPTY], ‘user’: ‘hdfs’, ‘owner’: ‘spark’, ‘hadoop_conf_dir’: ‘/etc/hadoop/conf’, ‘type’: ‘directory’, ‘action’: [‘create_on_execute’], ‘immutable_paths’: [u’/mr-history/done’, u’/warehouse/tablespace/managed/hive’, u’/warehouse/tablespace/external/hive’, u’/app-logs’, u’/tmp’], ‘mode’: 0777}
2023-09-14 15:30:48,124 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '”‘"’%{http_code}‘"’“’ -X GET -d '”‘"’‘"’“’ -H '”‘“‘Content-Length: 0’”’“’ '”‘“‘http://bigtop01:50070/webhdfs/v1/warehouse/tablespace/managed/hive?op=GETFILESTATUS&user.name=hdfs’”’“’ 1>/tmp/tmpbFjoMw 2>/tmp/tmpUwA9s3’‘] {‘logoutput’: None, ‘quiet’: False}
2023-09-14 15:30:48,302 - call returned (0, ‘’)
2023-09-14 15:30:48,302 - get_user_call_output returned (0, u’{“FileStatus”:{“accessTime”:0,“aclBit”:true,“blockSize”:0,“childrenNum”:2,“fileId”:17017,“group”:“hadoop”,“length”:0,“modificationTime”:1694676587583,“owner”:“hive”,“pathSuffix”:”",“permission”:“770”,“replication”:0,“storagePolicy”:0,“type”:“DIRECTORY”}}200’, u’‘)
2023-09-14 15:30:48,304 - Skipping the operation for not managed DFS directory /warehouse/tablespace/managed/hive since immutable_paths contains it.
2023-09-14 15:30:48,305 - HdfsResource[None] {‘security_enabled’: False, ‘hadoop_bin_dir’: ‘/usr/bigtop/3.2.0/usr/bin’, ‘keytab’: [EMPTY], ‘dfs_type’: ‘HDFS’, ‘default_fs’: ‘hdfs://bigtop01:8020’, ‘hdfs_resource_ignore_file’: ‘/var/lib/ambari-agent/data/.hdfs_resource_ignore’, ‘hdfs_site’: …, ‘kinit_path_local’: ‘kinit’, ‘principal_name’: [EMPTY], ‘user’: ‘hdfs’, ‘action’: [‘execute’], ‘hadoop_conf_dir’: ‘/etc/hadoop/conf’, ‘immutable_paths’: [u’/mr-history/done’, u’/warehouse/tablespace/managed/hive’, u’/warehouse/tablespace/external/hive’, u’/app-logs’, u’/tmp’]}
2023-09-14 15:30:48,321 - Directory[‘/usr/lib/ambari-logsearch-logfeeder/conf’] {‘create_parents’: True, ‘mode’: 0755, ‘cd_access’: ‘a’}
2023-09-14 15:30:48,322 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-.json
2023-09-14 15:30:48,323 - File[‘/usr/lib/ambari-logsearch-logfeeder/conf/input.config-.json’] {‘content’: Template(‘input.config-spark.json.j2’), ‘mode’: 0644}
2023-09-14 15:30:48,325 - Writing File[‘/usr/lib/ambari-logsearch-logfeeder/conf/input.config-.json’] because it doesn’t exist
2023-09-14 15:30:48,326 - Moving /tmp/tmp1694676648.33_363 to /usr/lib/ambari-logsearch-logfeeder/conf/input.config-.json
2023-09-14 15:30:48,337 - PropertiesFile[‘/etc/spark/conf/spark-defaults.conf’] {‘owner’: ‘spark’, ‘key_value_delimiter’: ’ ‘, ‘group’: ‘spark’, ‘mode’: 0644, ‘properties’: …}
2023-09-14 15:30:48,354 - Generating properties file: /etc/spark/conf/spark-defaults.conf
2023-09-14 15:30:48,354 - File[’/etc/spark/conf/spark-defaults.conf’] {‘owner’: ‘spark’, ‘content’: InlineTemplate(…), ‘group’: ‘spark’, ‘mode’: 0644, ‘encoding’: ‘UTF-8’}
2023-09-14 15:30:48,408 - Writing File[‘/etc/spark/conf/spark-defaults.conf’] because contents don’t match
2023-09-14 15:30:48,409 - Changing owner for /tmp/tmp1694676648.41_646 from 0 to spark
2023-09-14 15:30:48,409 - Changing group for /tmp/tmp1694676648.41_646 from 0 to spark
2023-09-14 15:30:48,409 - Moving /tmp/tmp1694676648.41_646 to /etc/spark/conf/spark-defaults.conf
2023-09-14 15:30:48,432 - File[‘/etc/spark/conf/spark-env.sh’] {‘content’: InlineTemplate(…), ‘owner’: ‘spark’, ‘group’: ‘spark’, ‘mode’: 0644}
2023-09-14 15:30:48,433 - Writing File[‘/etc/spark/conf/spark-env.sh’] because contents don’t match
2023-09-14 15:30:48,434 - Changing owner for /tmp/tmp1694676648.43_992 from 0 to spark
2023-09-14 15:30:48,434 - Changing group for /tmp/tmp1694676648.43_992 from 0 to spark
2023-09-14 15:30:48,434 - Moving /tmp/tmp1694676648.43_992 to /etc/spark/conf/spark-env.sh
2023-09-14 15:30:48,444 - File[‘/etc/spark/conf/log4j.properties’] {‘content’: …, ‘owner’: ‘spark’, ‘group’: ‘spark’, ‘mode’: 0644}
2023-09-14 15:30:48,452 - File[‘/etc/spark/conf/metrics.properties’] {‘content’: InlineTemplate(…), ‘owner’: ‘spark’, ‘group’: ‘spark’, ‘mode’: 0644}
2023-09-14 15:30:48,454 - XmlConfig[‘hive-site.xml’] {‘owner’: ‘spark’, ‘group’: ‘spark’, ‘mode’: 0644, ‘conf_dir’: ‘/etc/spark/conf’, ‘configurations’: …}
2023-09-14 15:30:48,481 - Generating config: /etc/spark/conf/hive-site.xml
2023-09-14 15:30:48,482 - File[‘/etc/spark/conf/hive-site.xml’] {‘owner’: ‘spark’, ‘content’: InlineTemplate(…), ‘group’: ‘spark’, ‘mode’: 0644, ‘encoding’: ‘UTF-8’}
2023-09-14 15:30:48,501 - File[‘/etc/spark/conf/spark-thrift-fairscheduler.xml’] {‘content’: InlineTemplate(…), ‘owner’: ‘spark’, ‘group’: ‘spark’, ‘mode’: 0755}
2023-09-14 15:30:48,504 - Execute[‘/usr/bigtop/current/spark-thriftserver/sbin/start-thriftserver.sh --properties-file /etc/spark/conf/spark-defaults.conf ‘] {‘environment’: {‘JAVA_HOME’: ‘/etc/alternatives/java_sdk_1.8.0’}, ‘not_if’: ‘ambari-sudo.sh -H -E test -f /var/run/spark/spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1.pid && ambari-sudo.sh -H -E pgrep -F /var/run/spark/spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1.pid’, ‘user’: ‘spark’}
2023-09-14 15:31:21,840 - Check connection to STS is created.
2023-09-14 15:31:21,842 - Execute['! /usr/bigtop/current/spark-thriftserver/bin/beeline -u 'jdbc:hive2://bigtop06:10016/default;transportMode=binary' -e '' 2>&1| awk '{print}'|grep -i -e 'Connection refused' -e 'Invalid URL' -e 'Error: Could not open''] {'path': ['/usr/bigtop/current/spark-thriftserver/bin/beeline'], 'user': 'spark', 'timeout': 60.0}
2023-09-14 15:31:24,510 - Connection to STS still is not created.
2023-09-14 15:31:24,510 - Check STS process status.
2023-09-14 15:31:24,511 - Process with pid 31026 is not running. Stale pid file at /var/run/spark/spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1.pid

Command failed after 1 tries
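In both failures the Spark Thrift Server process exited before beeline could connect on port 10016, leaving a stale pid file behind. A reasonable triage sequence (a hedged sketch; the exact STS log file name depends on the user and hostname):

# 1. Read the STS log for the root cause (commonly Hive Metastore or YARN not ready, or too slow)
ls -lt /var/log/spark/ | head
tail -n 200 /var/log/spark/spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1-*.out

# 2. Remove the stale pid file Ambari reported
rm -f /var/run/spark/spark-spark-org.apache.spark.sql.hive.thriftserver.HiveThriftServer2-1.pid

# 3. Verify the dependencies, then retry the start from the Ambari UI
yarn node -list
beeline -u 'jdbc:hive2://bigtop03:10000/default' -e 'show databases;'   # HiveServer2 host per the layout above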
