Errors in various services

nginx won't start

# /apps/nginx/sbin/nginx  -s stop
nginx: [error] invalid PID number "" in "/apps/nginx/logs/nginx.pid"
Start nginx again with the configuration file (this regenerates the PID file)
# /apps/nginx/sbin/nginx -c /apps/nginx/conf/nginx.conf
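A minimal verification sketch, using the paths from the commands above: validate the config, confirm the PID file now has content, then stop.
# /apps/nginx/sbin/nginx -t                        # validate the configuration
# cat /apps/nginx/logs/nginx.pid                   # should now contain the master process PID
# /apps/nginx/sbin/nginx -s stop                   # stop works once the PID file is populated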




172.31.7.101-centos7.8-server
-------------------------------
# ./configure  --prefix=/apps/tengine --user=nginx --group=nginx --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_stub_status_module --with-http_gzip_static_module --with-pcre --with-file-aio   --with-openssl=/usr/local/src/openssl-1.1.1d 

Problem:
./configure: error: the HTTP rewrite module requires the PCRE library.
You can either disable the module by using --without-http_rewrite_module
option, or install the PCRE library into the system, or build the PCRE library
statically from the source with nginx by using --with-pcre=<path> option.

Solution:
# yum -y install pcre-devel
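As the configure error itself suggests, an alternative is to build PCRE statically from source via --with-pcre=<path>; a sketch, where the PCRE source directory is a hypothetical path:
# ./configure --prefix=/apps/tengine --user=nginx --group=nginx \
    --with-pcre=/usr/local/src/pcre-8.44        # hypothetical local PCRE source tree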



Check what the problem is
---------------------
pull failed: the image could not be downloaded
[root@k8s-master01 pod]# kubectl describe pod nginx   # show the pod's details
      default-scheduler (the scheduler did not carry out scheduling)
      kubelet k8s-node02
pull failed  for registry.access.redhat.com/rhel7/pod-infrastructure:latest

 Find out which node the nginx pod was scheduled to
 [root@k8s-master01 pod]# kubectl get pod nginx -o wide   # show which node the pod was scheduled to
nginx  0/1  ContainerCreating   0  none  k8s-node02
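Since the pod is stuck on k8s-node02, one hedged way to confirm whether that node can actually reach the registry is to pull the infra image manually on the node (image name taken from the error above) and then re-check the pod events:
[root@k8s-node02 ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
[root@k8s-master01 pod]# kubectl describe pod nginx | tail -n 20      # re-check the Events section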

MySQL won't start

Fix for mysql reporting "error while loading shared libraries: libtinfo.so.5"
I installed MySQL from the Linux Generic package (details omitted). After some work the mysql service finally started, but connecting to it threw the error below.

mysql: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory

Solution:
sudo ln -s /usr/lib64/libtinfo.so.6.1 /usr/lib64/libtinfo.so.5
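To confirm the symlink resolves, a quick check (assuming mysql is on PATH):
# ldd $(which mysql) | grep libtinfo        # should resolve to the symlink instead of reporting "not found"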

MySQL master-slave replication won't start


keepalived VIP cannot be pinged

The keepalived VIP cannot be pinged
Problem 1:
After the VIP is configured in keepalived.conf, ip addr shows that the VIP is bound to the interface, but it cannot be pinged, even though the firewall is already off.
Solution:
vrrp_strict is enabled by default in keepalived.conf; comment it out and restart keepalived, and the VIP becomes pingable.
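A minimal sketch of the relevant part of keepalived.conf with vrrp_strict commented out as described above (the router_id value is hypothetical):
# vim /etc/keepalived/keepalived.conf
global_defs {
   router_id lb01               # hypothetical router_id
   #vrrp_strict                 # commented out so the VIP answers ICMP
}
# systemctl restart keepalived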

Original article: https://blog.csdn.net/weixin_43279032/article/details/82987689

The keepalived VIP cannot be pinged
Problem 2:
# systemctl status keepalived
VRRP_Instance(vi_1) ignoring received advertisment
Solution:
# vim /etc/keepalived/keepalived.conf
 virtual_router_id 51
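The "ignoring received advertisment" message generally means two VRRP instances disagree; the point of the fix above is that virtual_router_id must be identical on the MASTER and BACKUP nodes. A partial sketch (interface name and priority are assumptions):
vrrp_instance vi_1 {
    state MASTER                 # BACKUP on the peer node
    interface eth0               # hypothetical interface name
    virtual_router_id 51         # must be the same on every node of this instance
    priority 100                 # lower value on the BACKUP node
}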

HAProxy won't start

USE_OPENSSL=1 enables HTTPS support
USE_SYSTEMD=1 builds in systemd support; without it HAProxy cannot be started through systemd and logs the error below:

# tail -f  /var/log/messages 
master-worker mode with systemd support (-Ws) requested, but not compiled. Use master-worker mode (-W) if you are not using Type=notify in your unit file or
recompile with USE_SYSTEMD=1.
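A hedged rebuild sketch with both options above enabled; the source directory, TARGET value (version dependent) and install PREFIX are assumptions:
# cd /usr/local/src/haproxy-2.x                 # hypothetical source directory
# make TARGET=linux-glibc USE_OPENSSL=1 USE_SYSTEMD=1
# make install PREFIX=/usr/local/haproxy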


HAProxy startup failure: Starting proxy: cannot bind socket
Starting proxy websrv: cannot bind socket [0.0.0.0:80]

Check with netstat -ntpl
If something is already listening on port 80, the port is taken; find that process and stop it. It is usually an Apache process.

Solution:
# systemctl stop httpd
# tail -f /var/log/messages
Apr 12 00:31:51 server2 haproxy: [ALERT] 101/003151 (7074) : Starting proxy web_port: cannot bind socket


172.31.7.204
-----------------
Startup error
# systemctl restart haproxy
Job for haproxy.service failed because the control process exited with error code.
See "systemctl status haproxy.service" and "journalctl -xe" for details.
# systemctl status haproxy.service 
 haproxy.service: Main process exited, code=exited, status=1/FAILURE
Apr 25 16:50:06 k8s-ha1 systemd[1]: haproxy.service: Failed with result 'exit-code'.
Apr 25 16:50:06 k8s-ha1 systemd[1]: Failed to start HAProxy Load Balancer.

Solution:
There was a problem in the configuration file; fix the format as follows.
# vim /etc/haproxy/haproxy.cfg 
listen  k8s-apiserver-6443
 bind 172.31.7.188:6443
 mode tcp
 balance source
 server 172.31.7.201  172.31.7.201:6443 check inter 3s  fall 3 rise 5    
# server 172.31.7.202  172.31.7.202:6443 check inter 3s  fall 3 rise 5
# server 172.31.7.203  172.31.7.203:6443 check inter 3s  fall 3 rise 5
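After editing, the configuration can be validated before restarting (haproxy -c is the standard syntax check):
# haproxy -c -f /etc/haproxy/haproxy.cfg
# systemctl restart haproxy && systemctl status haproxy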


Docker won't start

172.31.7.15-ubuntu1804
-----------------------
# tail -f  /var/log/syslog

Apr 13 11:34:43 ubuntu1804 systemd[1]: Starting Docker Socket for the API.
Apr 13 11:34:43 ubuntu1804 systemd[2029]: docker.socket: Failed to resolve group docker: No such process
Apr 13 11:34:43 ubuntu1804 systemd[1]: docker.socket: Control process exited, code=exited status=216
Apr 13 11:34:43 ubuntu1804 systemd[1]: docker.socket: Failed with result 'exit-code'.
Apr 13 11:34:43 ubuntu1804 systemd[1]: Failed to listen on Docker Socket for the API.
Apr 13 11:34:43 ubuntu1804 systemd[1]: Dependency failed for Docker Application Container Engine.
Apr 13 11:34:43 ubuntu1804 systemd[1]: docker.service: Job docker.service/start failed with result 'dependency'

Solution:
# groupadd docker
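After creating the group, the socket and the service still have to be started again; a minimal sketch:
# systemctl restart docker.socket docker.service
# systemctl status docker --no-pager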


Delete the container first, then the image
---------------------
# dr rmi   49f356fa4513 
Error response from daemon: conflict: unable to delete 49f356fa4513 (must be forced) - image is being used by stopped container a282fbb52b97
Check the container ID
# dr rmi alpine
Force-remove the container
# dr rm -fv 09cc22a4d01e
Force-remove the image
# dr rmi -f   alpine:latest
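(dr above appears to be a shell alias for docker.) A hedged sketch for finding every stopped container that still references the image before force-removing anything:
# docker ps -a --filter ancestor=alpine           # list containers created from the image
# docker rm -v a282fbb52b97                       # remove the blocking container from the error message
# docker rmi alpine:latest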

The error says the directive is duplicated; edit the nginx configuration file and comment out daemon on
--------------------------------------------
# dr run -it --rm -p 8080:80  harbor.jackie.com/m43/nginx-web:1.16.2  
nginx: [emerg] "daemon" directive is duplicate in /apps/nginx/conf/nginx.conf:11
Solution:
# vim  /opt/dockerfile/web/nginx/all-in-one/nginx.conf 
# daemon on          # comment out this line
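The duplicate often comes from the same directive also being passed by the image's CMD, since nginx must stay in the foreground inside a container; a hedged Dockerfile sketch of that pattern, so the directive lives in only one place:
# Dockerfile (sketch): foreground mode passed on the command line,
# so nginx.conf itself should not contain a daemon directive
CMD ["nginx", "-g", "daemon off;"]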


The service inside the container won't start: no such file or directory
------------------------------
# dr run -it --rm  harbor.jackie.com/tomcat-app1:app1
standard_init_linux.go:211: exec user process caused "no such file or directory"

Solution:
Do not write the script on Windows and then copy it to Linux; the encoding (line endings) is different.
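If the script has already been written on Windows, a hedged sketch for stripping the carriage returns before rebuilding the image (the script name is hypothetical):
# sed -i 's/\r$//' run_tomcat.sh         # or: dos2unix run_tomcat.sh
# file run_tomcat.sh                     # should no longer report "with CRLF line terminators"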


Starting the service as a non-root user fails
------------------
# su - www -c  "/apps/tomcat/bin/catlina.sh  start"
-bash: /apps/tomcat/bin/catlina.sh: No such file or directory

Solution: 
Check the environment variables in the container's /etc/profile; they were set incorrectly.



docker run takes dozens of parameters, which is easy to get lost in. Someone also asked why a run command they copied from somebody else would not execute. This can be platform-related, and of course the encoding of the copied text may also differ.
docker run --name jms_all \
-v /opt/jumpserver:/opt/jumpserver/data/media  \
-p 80:80 \
-p 2222:2222 \
-e SECRET_KEY=AYCm2PaZT6zTUK4Di2ZjrC0eccT8B1ZaY85WkRJqUHWx8p86Bm \
-e BOOTSTRAP_TOKEN=ua6pKscJWiGsihLn \
-e DB_HOST=172.31.7.16 \
-e DB_PORT=3306 \
-e DB_USER='jumpserver' \
-e DB_PASSWORD="1"  \                   # this line had an encoding error; retype it
-e DB_NAME=jumpserver  \
-e REDIS_HOST=172.31.7.16 \
-e REDIS_PORT=6379  \
-e REDIS_PASSWORD= \
jumpserver/jms_all:1.5.9

Solution:
-e DB_PASSWORD="1"  \   # type this line in again by hand
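One way to spot an invisible or mis-encoded character in a copied command is to dump it with cat -A; a minimal sketch (the file name is hypothetical):
# cat -A jms_run.sh | grep DB_PASSWORD      # non-ASCII bytes and stray CR characters become visible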

mysql-client cannot be installed

# dpkg -i mysql-client_5.7.33-1ubuntu18.04_amd64.deb
Selecting previously unselected package mysql-client.
(Reading database ... 81276 files and directories currently installed.)
Preparing to unpack mysql-client_5.7.33-1ubuntu18.04_amd64.deb ...
Unpacking mysql-client (5.7.33-1ubuntu18.04) ...
dpkg: dependency problems prevent configuration of mysql-client:
 mysql-client depends on mysql-community-client (= 5.7.33-1ubuntu18.04); however:
  Package mysql-community-client is not installed.
dpkg: error processing package mysql-client (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 mysql-client


Solution:
# apt-get -f install 
# apt-get install mysql-client
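An alternative, assuming the MySQL bundle tarball also contains the dependency package, is to install mysql-community-client first and then the metapackage (file name is an assumption based on the version above):
# dpkg -i mysql-community-client_5.7.33-1ubuntu18.04_amd64.deb
# dpkg -i mysql-client_5.7.33-1ubuntu18.04_amd64.deb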

The dashboard shows no content

replicasets.apps is forbidden: User "system:anonymous" cannot list resource "replicasets" in API group "apps" in the namespace "system"

vim admin-user.yaml   
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: default            # set to the default namespace here

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user 
  namespace: default               # set to the default namespace here
  
  
 
1. Add a ServiceAccount, configure it, and make it able to log in

apiVersion: v1
kind: ServiceAccount
metadata:
  name: aks-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aks-dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: aks-dashboard-admin
  namespace: kube-system
  
  
2. Create a binding with full admin permissions
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-head
  labels:
    k8s-app: kubernetes-dashboard-head
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-head
  namespace: kube-system
Access:
https://172.31.7.201:32002
https://172.31.7.207:32002
https://172.31.7.208:32002
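To actually log in to the dashboard with the ServiceAccount above, its token still has to be read from the generated secret; a hedged sketch (applies to clusters where ServiceAccount token secrets are auto-created):
# kubectl -n kube-system get secret | grep aks-dashboard-admin
# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep aks-dashboard-admin | awk '{print $1}')
# copy the token: field into the dashboard login page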

Redis cannot write data

How to troubleshoot the Redis cluster error "CLUSTERDOWN Hash slot not served"
Check the logs
# tail -f  /var/log/syslog
Note: this setup runs Redis 3.2, so redis-trib.rb check is used for checking.

      For newer Redis versions use the redis-cli --cluster check command instead.

Solution:
Reference: https://www.fxkjnj.com/?p=2375
Check the cluster:
# redis-cli --cluster check 172.31.7.16
# redis-trib.rb check 192.168.207.251:7001

Fix Redis:
# redis-cli --cluster fix 172.31.7.16:6379
# redis-trib.rb fix 192.168.207.251:7001

Log in to Redis:
# redis-cli
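After the fix, the slot coverage can be confirmed from any node; a minimal sketch:
# redis-cli -h 172.31.7.16 cluster info      # cluster_state should be ok and cluster_slots_assigned should be 16384
# redis-cli -h 172.31.7.16 cluster slots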



SELECT cannot switch databases
-----------------------
127.0.0.1:6379> select 200
(error) ERR SELECT is not allowed in cluster mode

Solution:
# vi /apps/redis/etc/redis.conf 
Comment out the cluster settings
#cluster-enabled yes                   # whether to enable cluster mode; the default is standalone mode
#cluster-config-file nodes-6379.conf   # cluster configuration file generated automatically by the node
#cluster-node-timeout 15000            # node timeout, 15 seconds
#cluster-replica-validity-factor 30    # during a failover, nodes disconnected from the master for longer than this hold data that is too old and are not eligible to be elected master
#cluster-migration-barrier 1           # keep at least one slave node per master in the cluster
#cluster-require-full-coverage no      # set to no so the cluster keeps answering requests even if some slots/data are lost
# systemctl restart redis




Sentinel startup problem
172.31.7.30    slave
---------------------
# redis-server /apps/redis-sentinel/sentinel.conf --sentinel
2999:X 10 May 2021 00:36:03.923 # +monitor master mymaster 172.31.7.17 6379 quorum 2
2999:X 10 May 2021 00:36:33.931 # +sdown master mymaster 172.31.7.17 6379

Solution:
Add the master password to the sentinel configuration file:
sentinel auth-pass mymaster 1
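For context, the auth-pass line sits next to the monitor line for the same master in sentinel.conf; a sketch using the values from the log above (the systemd unit name is an assumption):
# vim /apps/redis-sentinel/sentinel.conf
sentinel monitor mymaster 172.31.7.17 6379 2
sentinel auth-pass mymaster 1
# systemctl restart redis-sentinel       # hypothetical unit name; or re-run the redis-server --sentinel command above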




Redis cluster error
-----------------
#  /apps/redis/bin/redis-cli   --cluster create 172.31.7.17:6379  172.31.7.15:6379  172.31.7.14:6379  172.31.7.30:6379   --cluster-replicas 1
Could not connect to Redis at 172.31.7.17:6379: Connection refused

Solution:
 vim /apps/redis/etc/redis.conf 
daemonize yes                               # run in the background
masterauth "123456"                         # change: wrap the password in double quotes
cluster-enabled yes
cluster-config-file nodes-6379.conf
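After editing redis.conf, restart Redis on every node and confirm the port is listening before re-running the cluster create command; a minimal sketch:
# systemctl restart redis
# ss -ntlp | grep 6379        # both 6379 and the cluster bus port 16379 should be listening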


Error when removing a Redis node
-------------------
# redis-cli -a 123456 --cluster del-node 172.31.7.17:6379  6715ae0e1eb5ba728bfc2578b733e0e81dfe09f4
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Removing node 6715ae0e1eb5ba728bfc2578b733e0e81dfe09f4 from cluster 172.31.7.17:6379
[ERR] Node 172.31.7.15:6379 is not empty! Reshard data away and try again.

Solution:
# cd /apps/redis/data
# rm nodes-6379.conf
# rm dump_6379.rdb 
# cd /apps/redis/logs
# rm redis_6379.log 
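Deleting the data files is the brute-force route; a gentler alternative hinted at by the error is to reshard the node's slots away first and then remove it (a hedged sketch):
# redis-cli -a 123456 --cluster reshard 172.31.7.17:6379      # interactively move all slots off 172.31.7.15:6379
# redis-cli -a 123456 --cluster del-node 172.31.7.17:6379  6715ae0e1eb5ba728bfc2578b733e0e81dfe09f4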

Zabbix active checks won't start

The error here means the connection to 172.31.7.36-web2 gets no response. Set Server in /etc/zabbix/zabbix_agentd.conf on 172.31.7.36-web2 to the zabbix-server address, i.e. Server=172.31.7.31.

# vim /etc/zabbix/zabbix_agentd.conf 
Server=172.31.7.31           
ServerActive=172.31.7.31                           # the zabbix-agent initiates connections to this address
Hostname=172.31.7.36
Timeout=30                  # connection timeout

Restart the service
# systemctl restart zabbix-agent.service 
Check again: the problem is resolved.
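One quick way to confirm the agent is reachable again from the zabbix-server host is zabbix_get (assuming the zabbix-get package is installed there):
# zabbix_get -s 172.31.7.36 -k agent.ping       # should return 1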

Git push to the GitLab repository fails

Fix for the "git remote: HTTP Basic: Access denied" error
Problem description:
git push reports "HTTP Basic: Access denied"

Cause: the username/password configured locally in git do not match the username/password registered on GitLab.

Solution:
1. If the account or password has changed, run git config --system --unset credential.helper and re-enter the username and password; this should solve it.
2. If the first command does not solve the problem, run this command:
git config --global http.emptyAuth true
3. If neither of the above works, use the following:
Run these commands and then push
git config --global user.email "985848343@qq.com"
git config --global user.name "zhgedu"
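If the credential prompt should only appear once, a hedged extra step is to enable a credential helper and push again:
git config --global credential.helper store     # or "cache" to keep it in memory only
git push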

SaltStack fails to start

# systemctl status  salt-master
● salt-master.service - The Salt Master Server
   Loaded: loaded (/usr/lib/systemd/system/salt-master.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since 日 2021-05-30 22:52:29 CST; 9s ago
     Docs: man:salt-master(1)
           file:///usr/share/doc/salt/html/contents.html
           https://docs.saltstack.com/en/latest/contents.html
  Process: 17031 ExecStart=/usr/bin/salt-master (code=exited, status=1/FAILURE)
 Main PID: 17031 (code=exited, status=1/FAILURE)

5月 30 22:52:28 master systemd[1]: Starting The Salt Master Server...
5月 30 22:52:29 master salt-master[17031]: /usr/lib/python2.7/site-packages/salt/scripts.py:109: DeprecationWarni...ater.
5月 30 22:52:29 master salt-master[17031]: [ERROR   ] Error parsing configuration file: /etc/salt/master - did no...tart>       # a setting already in effect in the config file conflicts; comment it out before starting again
5月 30 22:52:29 master salt-master[17031]: in "/etc/salt/master", line 837, column 1
5月 30 22:52:29 master salt-master[17031]: [ERROR   ] Error parsing configuration file: /etc/salt/master - did no...tart>
5月 30 22:52:29 master salt-master[17031]: in "/etc/salt/master", line 837, column 1
5月 30 22:52:29 master systemd[1]: salt-master.service: main process exited, code=exited, status=1/FAILURE
5月 30 22:52:29 master systemd[1]: Failed to start The Salt Master Server.
5月 30 22:52:29 master systemd[1]: Unit salt-master.service entered failed state.
5月 30 22:52:29 master systemd[1]: salt-master.service failed.
Hint: Some lines were ellipsized, use -l to show in full.


Solution:
172.31.7.101  master
--------------------
Comment out these lines
vim /etc/salt/master                # comment out the section that was enabled earlier
#  file_roots:                                          
#    base:
#      - /srv/salt/

Restart the service
# !s
# systemctl restart  salt-master.service

Then enable the feature you actually want
172.31.7.101  master
--------------------
Uncomment the pillar settings
# vim /etc/salt/master
pillar_roots:                                           
  base:
    - /srv/pillar

Create the directory
# mkdir /srv/pillar

Restart the service
# !s
#  systemctl restart  salt-master.service
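To confirm the master picked up the new pillar_roots after the restart, a hedged check (assumes minions are already accepted):
# salt '*' saltutil.refresh_pillar
# salt '*' pillar.items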

Tomcat won't start

When restarting Tomcat remotely over SSH, JAVA_HOME cannot be found and the SAP connection also fails
# ssh  zhgedu@172.31.5.105  /usr/local/tomcat/bin/shutdown.sh
# ssh  zhgedu@172.31.5.105  /usr/local/tomcat/bin/startup.sh

# ssh  zhgedu@172.31.5.105  /usr/local/tomcat/bin/shutdown.sh 
Neither the JAVA_HOME nor the JRE_HOME environment variable is defined
At least one of these environment variable is needed to run this program


Solution:
vim  /usr/local/tomcat/bin/catalina.sh 
#!/bin/sh                              # do not add the lines below at the very end of catalina.sh; add them near the top
JAVA_HOME=/usr/local/jdk
JRE_HOME=/usr/local/jdk/jre
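The root cause is that a command run via ssh does not read /etc/profile, so a JAVA_HOME exported there stays invisible; besides editing catalina.sh, a hedged workaround is to source the profile explicitly in the remote command:
# ssh  zhgedu@172.31.5.105  "source /etc/profile && /usr/local/tomcat/bin/startup.sh"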

Kafka startup error

OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x0000004, 0) failed; error='Cannot allocate memory' (errno=12)     # out of memory

Solution: increase the virtual machine's memory
Raise it to 2 GB
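If more memory cannot be added, a hedged alternative is to shrink the JVM heap Kafka asks for via KAFKA_HEAP_OPTS before starting it:
# export KAFKA_HEAP_OPTS="-Xms512M -Xmx512M"
# /apps/kafka/bin/kafka-server-start.sh /apps/kafka/config/server.properties &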



Reference: https://www.pianshen.com/article/51991982232/
Fixing the Kafka cluster error: ERROR [KafkaServer id=0] Fatal error during KafkaServer startup.
Starting the Kafka cluster reports the following error:

ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.multi(Ljava/lang/Iterable;Lorg/apache/zookeeper/AsyncCallback$MultiCallback;Ljava/lang/Object;)V
    at kafka.zookeeper.ZooKeeperClient.send(ZooKeeperClient.scala:213)
    at kafka.zookeeper.ZooKeeperClient$$anonfun$handleRequests$1$$anonfun$apply$1.apply$mcV$sp(ZooKeeperClient.scala:144)
    at kafka.zookeeper.ZooKeeperClient$$anonfun$handleRequests$1$$anonfun$apply$1.apply(ZooKeeperClient.scala:144)
    at kafka.zookeeper.ZooKeeperClient$$anonfun$handleRequests$1$$anonfun$apply$1.apply(ZooKeeperClient.scala:144)
    at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
    at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:257)
    at kafka.zookeeper.ZooKeeperClient$$anonfun$handleRequests$1.apply(ZooKeeperClient.scala:143)
    at kafka.zookeeper.ZooKeeperClient$$anonfun$handleRequests$1.apply(ZooKeeperClient.scala:140)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at kafka.zookeeper.ZooKeeperClient.handleRequests(ZooKeeperClient.scala:140)
    at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1660)
    at kafka.zk.KafkaZkClient.retryRequestsUntilConnected(KafkaZkClient.scala:1647)
    at kafka.zk.KafkaZkClient.retryRequestUntilConnected(KafkaZkClient.scala:1642)
    at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:1712)
    at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1689)
    at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:97)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:260)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
    at kafka.Kafka$.main(Kafka.scala:75)
    at kafka.Kafka.main(Kafka.scala)
[2020-09-29 09:02:36,476] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
[2020-09-29 09:02:36,479] INFO [SocketServer brokerId=0] Stopping socket server request processors (kafka.network.SocketServer)
[2020-09-29 09:02:36,486] INFO [SocketServer brokerId=0] Stopped socket server request processors (kafka.network.SocketServer)
[2020-09-29 09:02:36,499] INFO [ReplicaManager broker=0] Shutting down (kafka.server.ReplicaManager)
[2020-09-29 09:02:36,501] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-09-29 09:02:36,501] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-09-29 09:02:36,502] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-09-29 09:02:36,503] INFO [ReplicaFetcherManager on broker 0] shutting down (kafka.server.ReplicaFetcherManager)
[2020-09-29 09:02:36,505] INFO [ReplicaFetcherManager on broker 0] shutdown completed (kafka.server.ReplicaFetcherManager)
[2020-09-29 09:02:36,506] INFO [ReplicaAlterLogDirsManager on broker 0] shutting down (kafka.server.ReplicaAlterLogDirsManager)
[2020-09-29 09:02:36,507] INFO [ReplicaAlterLogDirsManager on broker 0] shutdown completed (kafka.server.ReplicaAlterLogDirsManager)
[2020-09-29 09:02:36,507] INFO [ExpirationReaper-0-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,641] INFO [ExpirationReaper-0-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,641] INFO [ExpirationReaper-0-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,642] INFO [ExpirationReaper-0-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,841] INFO [ExpirationReaper-0-Produce]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,841] INFO [ExpirationReaper-0-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,841] INFO [ExpirationReaper-0-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,843] INFO [ExpirationReaper-0-DeleteRecords]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,843] INFO [ExpirationReaper-0-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,843] INFO [ExpirationReaper-0-ElectPreferredLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,845] INFO [ExpirationReaper-0-ElectPreferredLeader]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,845] INFO [ExpirationReaper-0-ElectPreferredLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-09-29 09:02:36,857] INFO [ReplicaManager broker=0] Shut down completely (kafka.server.ReplicaManager)
[2020-09-29 09:02:36,861] INFO Shutting down. (kafka.log.LogManager)
[2020-09-29 09:02:36,934] INFO Shutdown complete. (kafka.log.LogManager)
[2020-09-29 09:02:36,941] INFO [ZooKeeperClient] Closing. (kafka.zookeeper.ZooKeeperClient)
[2020-09-29 09:02:36,946] INFO Session: 0x1000163dff20000 closed (org.apache.zookeeper.ZooKeeper)
[2020-09-29 09:02:36,948] INFO EventThread shut down (org.apache.zookeeper.ClientCnxn)
[2020-09-29 09:02:36,953] INFO [ZooKeeperClient] Closed. (kafka.zookeeper.ZooKeeperClient)
[2020-09-29 09:02:36,955] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-09-29 09:02:37,095] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-09-29 09:02:37,095] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-09-29 09:02:37,095] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-09-29 09:02:37,097] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-09-29 09:02:37,098] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-09-29 09:02:37,098] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-09-29 09:02:37,101] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-09-29 09:02:37,101] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-09-29 09:02:37,104] INFO [SocketServer brokerId=0] Shutting down socket server (kafka.network.SocketServer)
[2020-09-29 09:02:37,194] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer)
[2020-09-29 09:02:37,205] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
[2020-09-29 09:02:37,205] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2020-09-29 09:02:37,214] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
 

The cause is that the Kafka cluster was started by different users; after being started several times, ZooKeeper can no longer manage Kafka.


Solution:
Delete the logs directory on every node of the Kafka cluster
# cd /apps/kafka  && rm -rf logs

Then, on every node of the Kafka cluster, stop Kafka:
/apps/kafka/bin/kafka-server-stop.sh  /apps/kafka/config/server.properties &

Then stop all ZooKeeper processes on every node:
/apps/apache-zookeeper-3.6.3-bin/bin/zkServer.sh stop

Then start the ZooKeeper cluster and the Kafka cluster again:
/apps/apache-zookeeper-3.6.3-bin/bin/zkServer.sh start 
/apps/kafka/bin/kafka-server-start.sh  /apps/kafka/config/server.properties &

That resolved the problem.

ELK: Logstash startup errors

[troll@standalone logstash-6.4.3]$ bin/logstash -f data-conf/zipkin-server.conf 
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-11-07T22:06:31,926][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-11-07T22:06:34,956][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.3"}
[2018-11-07T22:06:35,299][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/home/troll/softs/logstash-6.4.3/data-conf/zipkin-server.conf"}
[2018-11-07T22:06:35,348][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2018-11-07T22:06:36,660][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Fixing the Logstash error: [ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
This problem is usually caused by one of the following:

The configuration file is not readable
The configuration file does not exist
The file name is wrong
Run the Logstash service:
# cd /etc/logstash/conf.d                  # change into the configuration directory before running
# /usr/share/logstash/bin/logstash -f test.conf
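Logstash can also be pointed at the config by absolute path and asked to only validate it, which quickly distinguishes "file not found" from "file not readable":
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf --config.test_and_exit
# ls -l /etc/logstash/conf.d/test.conf          # confirm the file exists and is readable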





Logstash error: Logstash could not be started because there is already another instance using the configured data directory
Error 1:
1. Error message:

Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2019-12-26T07:31:29,884][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-12-26T07:31:30,007][FATAL][logstash.runner ] Logstash could not be started because there is already another instance using the configured data directory. If you wish to run multiple instances, you must change the "path.data" setting.
[2019-12-26T07:31:30,026][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit

Solution:
2. Cause: a previously running instance left a .lock file behind in path.data; deleting it is enough.
# cd  /usr/share/logstash/data  &&  rm -rf .lock
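If the other instance is actually wanted (for example a second pipeline), a hedged alternative to deleting .lock is to give the new instance its own data directory:
# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf --path.data /tmp/logstash-2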

yum makecache error

The path in the repo file was wrong; fix it.
Switch to another Ceph mirror:
echo "

[Ceph] 
name=Ceph packages for  
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/\$basearch      # ----> the backslash keeps $basearch from being expanded by the shell
enabled=1
gpgcheck=1 
type=rpm-md 
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc 

[Ceph-noarch] 
name=Ceph noarch packages 
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/noarch 
enabled=1 
gpgcheck=1 
type=rpm-md 
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc 

[ceph-source] 
name=Ceph source packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-mimic/el7/SRPMS 
enabled=1 
gpgcheck=1 
type=rpm-md 
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc"  >  /etc/yum.repos.d/ceph.repo
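After writing the repo file, rebuild the yum cache so the new baseurl takes effect:
# yum clean all
# yum makecache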