Warning: failed to get default registry endpoint from daemon

OS: CentOS 7
Commands run: docker info, docker search, docker pull
User: non-root, with sudo privileges

1. Error and cause

Error: Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon. Is the docker daemon running on this host?).
Cause: Docker has just been installed and the daemon has not been started yet. Start it with:

$ sudo systemctl start docker.service
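Once started, a quick way to confirm the daemon is actually up before retrying docker info / docker search / docker pull (a minimal check, using the same systemd unit name as above):

$ systemctl is-active docker.service
$ sudo docker info | head -n 5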
2. Other errors

Even after the daemon is started, running commands such as docker ps without sudo still fails with the following error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/info: dial unix /var/run/docker.sock: connect: permission denied
This happens because /var/run/docker.sock is owned by root with group docker, so only root and members of the docker group can talk to the daemon. Add the current user to the docker group:

$ sudo usermod -aG docker $USER && newgrp docker
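Note that newgrp docker only applies the new group to the current shell; other sessions pick it up after logging out and back in. A quick sanity check that the change took effect (just one way to verify, not a required step):

$ id -nG | grep -w docker
$ docker ps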
3. Enable Docker to start on boot

$ sudo systemctl enable docker.service
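To verify the boot-time configuration afterwards (read-only queries, so sudo is not needed):

$ systemctl is-enabled docker.service
$ systemctl is-active docker.service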
[root@node1 ~]# docker logs 85e71835d204 Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24. I0730 09:55:21.321773 1 server.go:632] external host was not specified, using 192.168.229.145 I0730 09:55:21.322237 1 server.go:182] Version: v1.20.9 I0730 09:55:21.547240 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0730 09:55:21.547259 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I0730 09:55:21.548307 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0730 09:55:21.548318 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I0730 09:55:21.550423 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:21.550477 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:21.550717 1 shared_informer.go:240] Waiting for caches to sync for node_authorizer I0730 09:55:22.138571 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.138611 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.144633 1 client.go:360] parsed scheme: "passthrough" I0730 09:55:22.144800 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 09:55:22.144828 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 09:55:22.146169 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.146192 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.262294 1 instance.go:289] Using reconciler: lease I0730 09:55:22.262855 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.262883 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.275807 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.275869 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.283173 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.283213 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.290321 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.290362 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.296709 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.296788 1 endpoint.go:68] ccResolverWrapper: sending new 
addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.301871 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.301902 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.307806 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.307838 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.313365 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.313392 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.319529 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.319557 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.324517 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.324536 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.331886 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.331925 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.339523 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.339587 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.346920 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.347079 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.352814 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.352836 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.358384 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.358443 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.364859 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.364886 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.370633 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.370680 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.376711 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.376734 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.382656 1 rest.go:131] the default service ipfamily for this cluster is: IPv4 I0730 09:55:22.504261 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.504291 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.528497 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.528568 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.538368 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.538453 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.547338 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.547389 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.556834 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.556861 1 endpoint.go:68] 
ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.563972 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.564021 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.565476 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.565530 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.575236 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.575291 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.602757 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.602821 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.611140 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.611192 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.619221 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.619285 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.626667 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.626695 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.634819 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.634874 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.642543 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.642572 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.652458 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.652487 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.659228 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.659267 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.665840 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.665891 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.673386 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.673458 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.680779 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.680809 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.688424 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.688479 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.697923 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.697969 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.705098 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.705130 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.716766 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.716819 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: 
[{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.724379 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.724441 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.731383 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.731449 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.737740 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.737796 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.744905 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.744961 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.756486 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.756510 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.763029 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.763074 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.774316 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.774343 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.781013 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.781063 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.787849 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.787898 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.799124 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.799153 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.808795 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.808849 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.837930 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.837964 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.851372 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.851441 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.860141 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.860194 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.880323 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.880353 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.896262 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.896292 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.903896 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.903923 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.912570 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.912598 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.945966 
1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.946025 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.957156 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.957337 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.966709 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.966752 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.975373 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.975469 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.985516 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.985566 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:22.994780 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:22.994842 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:23.002474 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:23.002502 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:23.011694 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:23.011729 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:23.019030 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:23.019082 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:23.026717 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:23.026769 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:23.034206 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:23.034233 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] W0730 09:55:23.272674 1 genericapiserver.go:425] Skipping API batch/v2alpha1 because it has no resources. W0730 09:55:23.278301 1 genericapiserver.go:425] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. W0730 09:55:23.286644 1 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources. W0730 09:55:23.298757 1 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0730 09:55:23.301603 1 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W0730 09:55:23.305468 1 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0730 09:55:23.307443 1 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources. W0730 09:55:23.311186 1 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources. W0730 09:55:23.311215 1 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources. I0730 09:55:23.324600 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. 
I0730 09:55:23.324655 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I0730 09:55:23.326518 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:23.326547 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:23.333081 1 client.go:360] parsed scheme: "endpoint" I0730 09:55:23.333098 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}] I0730 09:55:25.088013 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt I0730 09:55:25.088058 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt I0730 09:55:25.088324 1 dynamic_serving_content.go:130] Starting serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key I0730 09:55:25.088418 1 secure_serving.go:197] Serving securely on [::]:6443 I0730 09:55:25.088495 1 autoregister_controller.go:141] Starting autoregister controller I0730 09:55:25.088526 1 cache.go:32] Waiting for caches to sync for autoregister controller I0730 09:55:25.088557 1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/etc/kubernetes/pki/front-proxy-client.crt::/etc/kubernetes/pki/front-proxy-client.key I0730 09:55:25.088560 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0730 09:55:25.088615 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I0730 09:55:25.088619 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0730 09:55:25.088714 1 available_controller.go:475] Starting AvailableConditionController I0730 09:55:25.088718 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0730 09:55:25.088746 1 crdregistration_controller.go:111] Starting crd-autoregister controller I0730 09:55:25.088752 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister I0730 09:55:25.089052 1 customresource_discovery_controller.go:209] Starting DiscoveryController I0730 09:55:25.090085 1 controller.go:83] Starting OpenAPI AggregationController I0730 09:55:25.090661 1 apf_controller.go:261] Starting API Priority and Fairness config controller I0730 09:55:25.095434 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I0730 09:55:25.095446 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller I0730 09:55:25.095515 1 controller.go:86] Starting OpenAPI controller I0730 09:55:25.095531 1 naming_controller.go:291] Starting NamingConditionController I0730 09:55:25.095565 1 establishing_controller.go:76] Starting EstablishingController I0730 09:55:25.095579 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I0730 09:55:25.095593 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I0730 09:55:25.095604 1 crd_finalizer.go:266] Starting CRDFinalizer I0730 09:55:25.097471 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt I0730 09:55:25.097547 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt E0730 09:55:25.100371 1 controller.go:152] Unable to remove old 
endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.229.145, ResourceVersion: 0, AdditionalErrorMsg: I0730 09:55:25.150947 1 shared_informer.go:247] Caches are synced for node_authorizer I0730 09:55:25.198738 1 apf_controller.go:266] Running API Priority and Fairness config worker I0730 09:55:25.198911 1 cache.go:39] Caches are synced for autoregister controller I0730 09:55:25.199324 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0730 09:55:25.199340 1 cache.go:39] Caches are synced for AvailableConditionController controller I0730 09:55:25.199353 1 shared_informer.go:247] Caches are synced for crd-autoregister I0730 09:55:25.199442 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I0730 09:55:25.226411 1 controller.go:609] quota admission added evaluator for: namespaces I0730 09:55:26.087754 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I0730 09:55:26.087813 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I0730 09:55:26.099842 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000 I0730 09:55:26.103129 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000 I0730 09:55:26.103147 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist. I0730 09:55:26.490805 1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0730 09:55:26.524236 1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io W0730 09:55:26.624643 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.229.145] I0730 09:55:26.625375 1 controller.go:609] quota admission added evaluator for: endpoints I0730 09:55:26.628224 1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io I0730 09:55:28.611886 1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io I0730 09:55:29.087009 1 controller.go:609] quota admission added evaluator for: serviceaccounts I0730 09:55:56.812958 1 client.go:360] parsed scheme: "passthrough" I0730 09:55:56.813001 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 09:55:56.813008 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 09:56:27.580990 1 client.go:360] parsed scheme: "passthrough" I0730 09:56:27.581052 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 09:56:27.581064 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 09:56:58.136069 1 client.go:360] parsed scheme: "passthrough" I0730 09:56:58.136122 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 09:56:58.136134 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 09:57:33.643030 1 client.go:360] parsed scheme: "passthrough" I0730 09:57:33.643108 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 09:57:33.643119 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 09:58:15.787556 1 client.go:360] parsed scheme: 
"passthrough" I0730 09:58:15.787626 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 09:58:15.787637 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 09:58:46.620841 1 client.go:360] parsed scheme: "passthrough" I0730 09:58:46.620879 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 09:58:46.620884 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 09:59:28.044387 1 client.go:360] parsed scheme: "passthrough" I0730 09:59:28.044446 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 09:59:28.044453 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 10:00:12.134065 1 client.go:360] parsed scheme: "passthrough" I0730 10:00:12.134096 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 10:00:12.134102 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 10:00:52.210433 1 client.go:360] parsed scheme: "passthrough" I0730 10:00:52.210482 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 10:00:52.210491 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 10:01:33.985297 1 client.go:360] parsed scheme: "passthrough" I0730 10:01:33.985351 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 10:01:33.985369 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 10:02:17.490682 1 client.go:360] parsed scheme: "passthrough" I0730 10:02:17.490719 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 10:02:17.490725 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 10:02:53.017984 1 client.go:360] parsed scheme: "passthrough" I0730 10:02:53.018108 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 10:02:53.018138 1 clientconn.go:948] ClientConn switching balancer to "pick_first" I0730 10:03:31.267486 1 client.go:360] parsed scheme: "passthrough" I0730 10:03:31.267537 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>} I0730 10:03:31.267543 1 clientconn.go:948] ClientConn switching balancer to "pick_first" [root@node1 ~]# [root@node1 ~]# docker logs 5fb4dcb49e9a [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2025-07-30 09:55:21.067915 I | etcdmain: etcd Version: 3.4.13 2025-07-30 09:55:21.067944 I | etcdmain: Git SHA: ae9734ed2 2025-07-30 09:55:21.067948 I | etcdmain: Go Version: go1.12.17 2025-07-30 09:55:21.067950 I | etcdmain: Go OS/Arch: linux/amd64 2025-07-30 09:55:21.067953 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4 [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2025-07-30 09:55:21.068013 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = 2025-07-30 09:55:21.069505 I | embed: name = node1 2025-07-30 09:55:21.069518 I | embed: data dir = /var/lib/etcd 2025-07-30 
09:55:21.069522 I | embed: member dir = /var/lib/etcd/member 2025-07-30 09:55:21.069525 I | embed: heartbeat = 100ms 2025-07-30 09:55:21.069528 I | embed: election = 1000ms 2025-07-30 09:55:21.069530 I | embed: snapshot count = 10000 2025-07-30 09:55:21.069555 I | embed: advertise client URLs = https://192.168.229.145:2379 2025-07-30 09:55:21.086105 I | etcdserver: starting member 25a5889883ba023f in cluster dbb04bfad86388d6 raft2025/07/30 09:55:21 INFO: 25a5889883ba023f switched to configuration voters=() raft2025/07/30 09:55:21 INFO: 25a5889883ba023f became follower at term 0 raft2025/07/30 09:55:21 INFO: newRaft 25a5889883ba023f [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] raft2025/07/30 09:55:21 INFO: 25a5889883ba023f became follower at term 1 raft2025/07/30 09:55:21 INFO: 25a5889883ba023f switched to configuration voters=(2712724539187003967) 2025-07-30 09:55:21.112113 W | auth: simple token is not cryptographically signed 2025-07-30 09:55:21.115844 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided] 2025-07-30 09:55:21.123343 I | etcdserver: 25a5889883ba023f as single-node; fast-forwarding 9 ticks (election ticks 10) 2025-07-30 09:55:21.124008 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = 2025-07-30 09:55:21.124333 I | embed: listening for metrics on http://127.0.0.1:2381 2025-07-30 09:55:21.124737 I | embed: listening for peers on 192.168.229.145:2380 raft2025/07/30 09:55:21 INFO: 25a5889883ba023f switched to configuration voters=(2712724539187003967) 2025-07-30 09:55:21.125960 I | etcdserver/membership: added member 25a5889883ba023f [https://192.168.229.145:2380] to cluster dbb04bfad86388d6 raft2025/07/30 09:55:22 INFO: 25a5889883ba023f is starting a new election at term 1 raft2025/07/30 09:55:22 INFO: 25a5889883ba023f became candidate at term 2 raft2025/07/30 09:55:22 INFO: 25a5889883ba023f received MsgVoteResp from 25a5889883ba023f at term 2 raft2025/07/30 09:55:22 INFO: 25a5889883ba023f became leader at term 2 raft2025/07/30 09:55:22 INFO: raft.node: 25a5889883ba023f elected leader 25a5889883ba023f at term 2 2025-07-30 09:55:22.110626 I | etcdserver: setting up the initial cluster version to 3.4 2025-07-30 09:55:22.111512 N | etcdserver/membership: set the initial cluster version to 3.4 2025-07-30 09:55:22.111566 I | etcdserver/api: enabled capabilities for version 3.4 2025-07-30 09:55:22.111617 I | etcdserver: published {Name:node1 ClientURLs:[https://192.168.229.145:2379]} to cluster dbb04bfad86388d6 2025-07-30 09:55:22.111706 I | embed: ready to serve client requests 2025-07-30 09:55:22.111722 I | embed: ready to serve client requests 2025-07-30 09:55:22.113722 I | embed: serving client requests on 192.168.229.145:2379 2025-07-30 09:55:22.113948 I | embed: serving client requests on 127.0.0.1:2379 2025-07-30 09:55:37.917153 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:55:41.816410 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:55:51.816930 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:56:01.816886 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:56:11.817562 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:56:21.817717 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:56:31.816595 I | etcdserver/api/etcdhttp: /health OK (status code 
200) 2025-07-30 09:56:41.816625 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:56:51.817154 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:01.817686 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:11.816756 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:21.817343 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:31.817003 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:41.816873 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:57:51.817207 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:01.816804 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:11.817017 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:21.816642 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:31.816838 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:41.816347 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:58:51.816459 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:01.817140 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:11.816554 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:21.816869 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:31.818716 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:41.816705 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 09:59:51.817851 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:01.816829 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:11.816569 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:21.816748 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:31.817200 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:41.816544 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:00:51.816793 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:01.817000 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:11.816522 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:21.816775 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:31.816684 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:41.816727 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:01:51.817073 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:01.817473 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:11.816961 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:21.817283 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:31.816549 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:41.817031 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:02:51.816960 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:03:01.816855 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:03:11.816837 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:03:21.816528 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:03:31.816693 I | etcdserver/api/etcdhttp: /health 
OK (status code 200) 2025-07-30 10:03:41.816522 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:03:51.817027 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:04:01.816739 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2025-07-30 10:04:11.816916 I | etcdserver/api/etcdhttp: /health OK (status code 200) [root@node1 ~]# docker logs ff9b8d7587aa Flag --port has been deprecated, see --secure-port instead. I0730 09:55:21.608626 1 serving.go:331] Generated self-signed cert in-memory I0730 09:55:21.958575 1 controllermanager.go:176] Version: v1.20.9 I0730 09:55:21.960121 1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt I0730 09:55:21.960150 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt I0730 09:55:21.961003 1 secure_serving.go:197] Serving securely on 127.0.0.1:10257 I0730 09:55:21.961148 1 tlsconfig.go:240] Starting DynamicServingCertificateController I0730 09:55:21.961191 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager... E0730 09:55:25.271255 1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system" I0730 09:55:28.613626 1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager I0730 09:55:28.613892 1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="node1_503ebc03-4f42-47e4-9c9d-e9064b26d78b became leader" I0730 09:55:29.083585 1 shared_informer.go:240] Waiting for caches to sync for tokens I0730 09:55:29.183810 1 shared_informer.go:247] Caches are synced for tokens I0730 09:55:29.540299 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for limitranges I0730 09:55:29.540448 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for events.events.k8s.io I0730 09:55:29.540486 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for cronjobs.batch I0730 09:55:29.540502 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for leases.coordination.k8s.io I0730 09:55:29.540561 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy I0730 09:55:29.540575 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpoints I0730 09:55:29.540626 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for daemonsets.apps I0730 09:55:29.540672 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for controllerrevisions.apps I0730 09:55:29.540690 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling I0730 09:55:29.540706 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io W0730 09:55:29.540717 1 shared_informer.go:494] resyncPeriod 20h21m59.062252565s is smaller than resyncCheckPeriod 22h46m52.416059627s and the informer has already started. 
Changing it to 22h46m52.416059627s I0730 09:55:29.540798 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io I0730 09:55:29.540832 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for deployments.apps I0730 09:55:29.540845 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for jobs.batch W0730 09:55:29.540850 1 shared_informer.go:494] resyncPeriod 13h23m0.496914786s is smaller than resyncCheckPeriod 22h46m52.416059627s and the informer has already started. Changing it to 22h46m52.416059627s I0730 09:55:29.540864 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for serviceaccounts I0730 09:55:29.540876 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for statefulsets.apps I0730 09:55:29.540886 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for replicasets.apps I0730 09:55:29.540898 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io I0730 09:55:29.540909 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io I0730 09:55:29.540920 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for podtemplates I0730 09:55:29.540980 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io I0730 09:55:29.541016 1 resource_quota_monitor.go:229] QuotaMonitor created object count evaluator for ingresses.extensions I0730 09:55:29.541024 1 controllermanager.go:554] Started "resourcequota" I0730 09:55:29.541243 1 resource_quota_controller.go:273] Starting resource quota controller I0730 09:55:29.541254 1 shared_informer.go:240] Waiting for caches to sync for resource quota I0730 09:55:29.541267 1 resource_quota_monitor.go:304] QuotaMonitor running I0730 09:55:29.546774 1 node_lifecycle_controller.go:77] Sending events to api server E0730 09:55:29.546816 1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided W0730 09:55:29.546824 1 controllermanager.go:546] Skipping "cloud-node-lifecycle" W0730 09:55:29.546837 1 controllermanager.go:546] Skipping "ttl-after-finished" W0730 09:55:29.546843 1 controllermanager.go:546] Skipping "ephemeral-volume" I0730 09:55:29.553569 1 controllermanager.go:554] Started "garbagecollector" I0730 09:55:29.553994 1 garbagecollector.go:142] Starting garbage collector controller I0730 09:55:29.554019 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0730 09:55:29.554078 1 graph_builder.go:289] GraphBuilder running I0730 09:55:29.559727 1 controllermanager.go:554] Started "statefulset" I0730 09:55:29.559830 1 stateful_set.go:146] Starting stateful set controller I0730 09:55:29.559837 1 shared_informer.go:240] Waiting for caches to sync for stateful set I0730 09:55:29.570038 1 controllermanager.go:554] Started "bootstrapsigner" I0730 09:55:29.570166 1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer E0730 09:55:29.577020 1 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail W0730 09:55:29.577049 1 controllermanager.go:546] Skipping "service" I0730 09:55:29.583964 1 controllermanager.go:554] Started "clusterrole-aggregation" I0730 09:55:29.584128 1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator I0730 
09:55:29.584145 1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator I0730 09:55:29.590189 1 controllermanager.go:554] Started "podgc" I0730 09:55:29.590277 1 gc_controller.go:89] Starting GC controller I0730 09:55:29.590285 1 shared_informer.go:240] Waiting for caches to sync for GC I0730 09:55:29.611665 1 controllermanager.go:554] Started "namespace" I0730 09:55:29.611840 1 namespace_controller.go:200] Starting namespace controller I0730 09:55:29.611851 1 shared_informer.go:240] Waiting for caches to sync for namespace I0730 09:55:29.644607 1 controllermanager.go:554] Started "replicaset" I0730 09:55:29.644643 1 replica_set.go:182] Starting replicaset controller I0730 09:55:29.644648 1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet I0730 09:55:29.794545 1 controllermanager.go:554] Started "pvc-protection" I0730 09:55:29.794636 1 pvc_protection_controller.go:110] Starting PVC protection controller I0730 09:55:29.794643 1 shared_informer.go:240] Waiting for caches to sync for PVC protection I0730 09:55:29.947340 1 controllermanager.go:554] Started "endpointslicemirroring" I0730 09:55:29.947470 1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller I0730 09:55:29.947481 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring I0730 09:55:30.096177 1 controllermanager.go:554] Started "deployment" I0730 09:55:30.096245 1 deployment_controller.go:153] Starting deployment controller I0730 09:55:30.096252 1 shared_informer.go:240] Waiting for caches to sync for deployment I0730 09:55:30.143147 1 node_ipam_controller.go:91] Sending events to api server. I0730 09:55:30.689188 1 request.go:655] Throttling request took 1.048578443s, request: GET:https://192.168.229.145:6443/apis/storage.k8s.io/v1?timeout=32s I0730 09:55:40.164520 1 range_allocator.go:82] Sending events to api server. I0730 09:55:40.164715 1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses. I0730 09:55:40.164875 1 controllermanager.go:554] Started "nodeipam" I0730 09:55:40.164991 1 node_ipam_controller.go:159] Starting ipam controller I0730 09:55:40.164996 1 shared_informer.go:240] Waiting for caches to sync for node I0730 09:55:40.170277 1 controllermanager.go:554] Started "serviceaccount" I0730 09:55:40.170396 1 serviceaccounts_controller.go:117] Starting service account controller I0730 09:55:40.170413 1 shared_informer.go:240] Waiting for caches to sync for service account I0730 09:55:40.187598 1 controllermanager.go:554] Started "horizontalpodautoscaling" I0730 09:55:40.187715 1 horizontal.go:169] Starting HPA controller I0730 09:55:40.187729 1 shared_informer.go:240] Waiting for caches to sync for HPA I0730 09:55:40.193364 1 controllermanager.go:554] Started "ttl" I0730 09:55:40.193485 1 ttl_controller.go:121] Starting TTL controller I0730 09:55:40.193491 1 shared_informer.go:240] Waiting for caches to sync for TTL I0730 09:55:40.194930 1 node_lifecycle_controller.go:380] Sending events to api server. I0730 09:55:40.195077 1 taint_manager.go:163] Sending events to api server. I0730 09:55:40.195125 1 node_lifecycle_controller.go:508] Controller will reconcile labels. I0730 09:55:40.195140 1 controllermanager.go:554] Started "nodelifecycle" W0730 09:55:40.195149 1 core.go:246] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes. 
W0730 09:55:40.195152 1 controllermanager.go:546] Skipping "route" I0730 09:55:40.195358 1 node_lifecycle_controller.go:542] Starting node controller I0730 09:55:40.195365 1 shared_informer.go:240] Waiting for caches to sync for taint I0730 09:55:40.204190 1 controllermanager.go:554] Started "root-ca-cert-publisher" I0730 09:55:40.204295 1 publisher.go:98] Starting root CA certificate configmap publisher I0730 09:55:40.204302 1 shared_informer.go:240] Waiting for caches to sync for crt configmap I0730 09:55:40.213208 1 controllermanager.go:554] Started "endpointslice" I0730 09:55:40.213325 1 endpointslice_controller.go:237] Starting endpoint slice controller I0730 09:55:40.213331 1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice I0730 09:55:40.221793 1 controllermanager.go:554] Started "replicationcontroller" I0730 09:55:40.221900 1 replica_set.go:182] Starting replicationcontroller controller I0730 09:55:40.221906 1 shared_informer.go:240] Waiting for caches to sync for ReplicationController I0730 09:55:40.233083 1 controllermanager.go:554] Started "disruption" I0730 09:55:40.233218 1 disruption.go:331] Starting disruption controller I0730 09:55:40.233225 1 shared_informer.go:240] Waiting for caches to sync for disruption I0730 09:55:40.243452 1 controllermanager.go:554] Started "cronjob" I0730 09:55:40.243529 1 cronjob_controller.go:96] Starting CronJob Manager I0730 09:55:40.454779 1 controllermanager.go:554] Started "csrcleaner" I0730 09:55:40.454818 1 cleaner.go:82] Starting CSR cleaner controller I0730 09:55:40.605277 1 controllermanager.go:554] Started "persistentvolume-binder" I0730 09:55:40.605328 1 pv_controller_base.go:307] Starting persistent volume controller I0730 09:55:40.605334 1 shared_informer.go:240] Waiting for caches to sync for persistent volume I0730 09:55:40.754979 1 controllermanager.go:554] Started "pv-protection" I0730 09:55:40.755043 1 pv_protection_controller.go:83] Starting PV protection controller I0730 09:55:40.755055 1 shared_informer.go:240] Waiting for caches to sync for PV protection I0730 09:55:40.904412 1 controllermanager.go:554] Started "endpoint" I0730 09:55:40.904457 1 endpoints_controller.go:184] Starting endpoint controller I0730 09:55:40.904491 1 shared_informer.go:240] Waiting for caches to sync for endpoint I0730 09:55:41.056168 1 controllermanager.go:554] Started "daemonset" I0730 09:55:41.056220 1 daemon_controller.go:285] Starting daemon sets controller I0730 09:55:41.056226 1 shared_informer.go:240] Waiting for caches to sync for daemon sets I0730 09:55:41.103979 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-serving" I0730 09:55:41.103997 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving I0730 09:55:41.104013 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.104264 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kubelet-client" I0730 09:55:41.104271 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client I0730 09:55:41.104299 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.104766 1 certificate_controller.go:118] Starting certificate controller "csrsigning-kube-apiserver-client" I0730 09:55:41.104772 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client I0730 
09:55:41.104780 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.104874 1 controllermanager.go:554] Started "csrsigning" I0730 09:55:41.104912 1 certificate_controller.go:118] Starting certificate controller "csrsigning-legacy-unknown" I0730 09:55:41.104917 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown I0730 09:55:41.104937 1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key I0730 09:55:41.255923 1 controllermanager.go:554] Started "tokencleaner" I0730 09:55:41.255988 1 tokencleaner.go:118] Starting token cleaner controller I0730 09:55:41.255993 1 shared_informer.go:240] Waiting for caches to sync for token_cleaner I0730 09:55:41.255997 1 shared_informer.go:247] Caches are synced for token_cleaner I0730 09:55:41.405554 1 controllermanager.go:554] Started "attachdetach" I0730 09:55:41.405622 1 attach_detach_controller.go:329] Starting attach detach controller I0730 09:55:41.405628 1 shared_informer.go:240] Waiting for caches to sync for attach detach I0730 09:55:41.554527 1 controllermanager.go:554] Started "persistentvolume-expander" I0730 09:55:41.554606 1 expand_controller.go:310] Starting expand controller I0730 09:55:41.554612 1 shared_informer.go:240] Waiting for caches to sync for expand I0730 09:55:41.705883 1 controllermanager.go:554] Started "job" I0730 09:55:41.705932 1 job_controller.go:148] Starting job controller I0730 09:55:41.705938 1 shared_informer.go:240] Waiting for caches to sync for job I0730 09:55:41.754446 1 controllermanager.go:554] Started "csrapproving" I0730 09:55:41.754722 1 shared_informer.go:240] Waiting for caches to sync for resource quota I0730 09:55:41.755111 1 certificate_controller.go:118] Starting certificate controller "csrapproving" I0730 09:55:41.755119 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving I0730 09:55:41.774691 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0730 09:55:41.784409 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I0730 09:55:41.793636 1 shared_informer.go:247] Caches are synced for TTL I0730 09:55:41.804153 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I0730 09:55:41.804368 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I0730 09:55:41.804368 1 shared_informer.go:247] Caches are synced for crt configmap I0730 09:55:41.804860 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I0730 09:55:41.805048 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I0730 09:55:41.811971 1 shared_informer.go:247] Caches are synced for namespace I0730 09:55:41.847535 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I0730 09:55:41.855186 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I0730 09:55:41.865229 1 shared_informer.go:247] Caches are synced for node I0730 09:55:41.865293 1 range_allocator.go:172] Starting range CIDR allocator I0730 09:55:41.865298 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0730 09:55:41.865301 1 shared_informer.go:247] Caches are synced for cidrallocator I0730 09:55:41.870566 1 shared_informer.go:247] Caches are synced for service account E0730 09:55:41.877240 1 clusterroleaggregation_controller.go:181] admin failed with : Operation 
cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again I0730 09:55:41.954718 1 shared_informer.go:247] Caches are synced for expand I0730 09:55:41.955116 1 shared_informer.go:247] Caches are synced for PV protection I0730 09:55:42.013590 1 shared_informer.go:247] Caches are synced for endpoint_slice I0730 09:55:42.022213 1 shared_informer.go:247] Caches are synced for ReplicationController I0730 09:55:42.033328 1 shared_informer.go:247] Caches are synced for disruption I0730 09:55:42.033350 1 disruption.go:339] Sending events to api server. I0730 09:55:42.041423 1 shared_informer.go:247] Caches are synced for resource quota I0730 09:55:42.045150 1 shared_informer.go:247] Caches are synced for ReplicaSet I0730 09:55:42.054792 1 shared_informer.go:247] Caches are synced for resource quota I0730 09:55:42.056647 1 shared_informer.go:247] Caches are synced for daemon sets I0730 09:55:42.059966 1 shared_informer.go:247] Caches are synced for stateful set I0730 09:55:42.088174 1 shared_informer.go:247] Caches are synced for HPA I0730 09:55:42.090312 1 shared_informer.go:247] Caches are synced for GC I0730 09:55:42.094795 1 shared_informer.go:247] Caches are synced for PVC protection I0730 09:55:42.095608 1 shared_informer.go:247] Caches are synced for taint I0730 09:55:42.095970 1 taint_manager.go:187] Starting NoExecuteTaintManager I0730 09:55:42.096344 1 shared_informer.go:247] Caches are synced for deployment I0730 09:55:42.104584 1 shared_informer.go:247] Caches are synced for endpoint I0730 09:55:42.105378 1 shared_informer.go:247] Caches are synced for persistent volume I0730 09:55:42.105717 1 shared_informer.go:247] Caches are synced for attach detach I0730 09:55:42.105975 1 shared_informer.go:247] Caches are synced for job I0730 09:55:42.215619 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0730 09:55:42.516093 1 shared_informer.go:247] Caches are synced for garbage collector I0730 09:55:42.554168 1 shared_informer.go:247] Caches are synced for garbage collector I0730 09:55:42.554210 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage [root@node1 ~]# docker logs 3112cf504b56 I0730 09:55:21.400259 1 serving.go:331] Generated self-signed cert in-memory W0730 09:55:25.261574 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W0730 09:55:25.261664 1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W0730 09:55:25.261682 1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous. 
W0730 09:55:25.261687 1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I0730 09:55:25.292394 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I0730 09:55:25.292498 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0730 09:55:25.292528 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0730 09:55:25.293029 1 tlsconfig.go:240] Starting DynamicServingCertificateController E0730 09:55:25.295710 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E0730 09:55:25.297091 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E0730 09:55:25.297190 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E0730 09:55:25.297221 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E0730 09:55:25.297272 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E0730 09:55:25.297376 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E0730 09:55:25.297420 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E0730 09:55:25.297718 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E0730 09:55:25.297832 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0730 09:55:25.297872 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group 
"storage.k8s.io" at the cluster scope E0730 09:55:25.297950 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E0730 09:55:25.299623 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E0730 09:55:26.134757 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E0730 09:55:26.376729 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E0730 09:55:26.380160 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope I0730 09:55:26.892586 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0730 09:55:28.692876 1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-scheduler... I0730 09:55:28.699723 1 leaderelection.go:253] successfully acquired lease kube-system/kube-scheduler 这是我的日志帮我看看有什么问题
Xshell 8 (Build 0082) Copyright (c) 2024 NetSarang Computer, Inc. All rights reserved. Type `help' to learn how to use Xshell prompt. [C:\~]$ Connecting to 192.168.200.131:22... Connection established. To escape to local shell, press 'Ctrl+Alt+]'. Last login: Wed Aug 27 04:37:33 2025 from 192.168.200.1 [yywz@localhost ~]$ su root 密码: [root@localhost yywz]# docker-compose up -d Creating network "yywz_default" with the default driver Pulling nginx (nginx:alpine)... ERROR: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) [root@localhost yywz]# sudo systemctl daemon-reload [root@localhost yywz]# sudo systemctl restart docker [root@localhost yywz]# docker-compose up -d Pulling nginx (nginx:alpine)... ERROR: Get "https://registry-1.docker.io/v2/": dial tcp 128.242.240.20:443: i/o timeout [root@localhost yywz]# sudo mkdir -p /etc/docker [root@localhost yywz]# sudo tee /etc/docker/daemon.json <<-'EOF' > { > "registry-mirrors": [ > "https://hub-mirror.c.163.com", > "https://mirror.baidubce.com", > "https://docker.mirrors.ustc.edu.cn", > "https://registry.docker-cn.com" > ], > "insecure-registries": [], > "debug": false > } > EOF { "registry-mirrors": [ "https://hub-mirror.c.163.com", "https://mirror.baidubce.com", "https://docker.mirrors.ustc.edu.cn", "https://registry.docker-cn.com" ], "insecure-registries": [], "debug": false } [root@localhost yywz]# docker-compose up -d Pulling nginx (nginx:alpine)... ERROR: Get "https://registry-1.docker.io/v2/": dial tcp 108.160.170.39:443: i/o timeout [root@localhost yywz]# cat /etc/docker/daemon.json { "registry-mirrors": [ "https://hub-mirror.c.163.com", "https://mirror.baidubce.com", "https://docker.mirrors.ustc.edu.cn", "https://registry.docker-cn.com" ], "insecure-registries": [], "debug": false } [root@localhost yywz]# systemctl restart docker.service [root@localhost yywz]# docker-compose up -d Pulling nginx (nginx:alpine)... ERROR: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) [root@localhost yywz]# sudo tee /etc/docker/daemon.json <<-'EOF' > { > "registry-mirrors": [ > "https://docker.211678.top", > "https://docker.1panel.live", > "https://hub.rat.dev", > "https://docker.m.daocloud.io", > "https://do.nark.eu.org", > "https://dockerpull.com", > "https://dockerproxy.cn", > "https://docker.awsl9527.cn" > ] > } > EOF { "registry-mirrors": [ "https://docker.211678.top", "https://docker.1panel.live", "https://hub.rat.dev", "https://docker.m.daocloud.io", "https://do.nark.eu.org", "https://dockerpull.com", "https://dockerproxy.cn", "https://docker.awsl9527.cn" ] } [root@localhost yywz]# sudo systemctl daemon-reload [root@localhost yywz]# systemctl restart docker.service [root@localhost yywz]# docker-compose up -d Pulling nginx (nginx:alpine)... alpine: Pulling from library/nginx 9824c27679d3: Pull complete 6bc572a340ec: Pull complete 403e3f251637: Pull complete 9adfbae99cb7: Pull complete 7a8a46741e18: Pull complete c9ebe2ff2d2c: Pull complete a992fbc61ecc: Pull complete cb1ff4086f82: Pull complete Digest: sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8 Status: Downloaded newer image for nginx:alpine Pulling redis (redis:alpine)... 
alpine: Pulling from library/redis 9824c27679d3: Already exists 9880d81ff87a: Pull complete 168694ef5d62: Pull complete f8eab6d4856e: Pull complete 1f79dac8d2d4: Pull complete 4f4fb700ef54: Pull complete 61cfb50eeff3: Pull complete Digest: sha256:987c376c727652f99625c7d205a1cba3cb2c53b92b0b62aade2bd48ee1593232 Status: Downloaded newer image for redis:alpine Creating yywz_nginx_1 ... done Creating yywz_redis_1 ... done [root@localhost yywz]# docker -v Docker version 20.10.24, build 297e128 [root@localhost yywz]# systemctl status dockerdocker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) Drop-In: /etc/systemd/system/docker.service.d └─timeout.conf Active: active (running) since 三 2025-08-27 15:14:00 CST; 4min 56s ago Docs: https://docs.docker.com Main PID: 9737 (dockerd) Tasks: 46 Memory: 138.6M CGroup: /system.slice/docker.service ├─ 9737 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock ├─10144 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.19.0.2 -... ├─10151 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 80 -container-ip 172.19.0.2 -conta... ├─10170 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 6379 -container-ip 172.19.0.3... └─10178 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 6379 -container-ip 172.19.0.3 -con... 8月 27 15:14:00 localhost.localdomain dockerd[9737]: time="2025-08-27T15:14:00.853107542+08:00" level=inf....24 8月 27 15:14:00 localhost.localdomain dockerd[9737]: time="2025-08-27T15:14:00.853240645+08:00" level=inf...on" 8月 27 15:14:00 localhost.localdomain systemd[1]: Started Docker Application Container Engine. 8月 27 15:14:00 localhost.localdomain dockerd[9737]: time="2025-08-27T15:14:00.891163395+08:00" level=inf...ck" 8月 27 15:14:20 localhost.localdomain dockerd[9737]: time="2025-08-27T15:14:20.265981649+08:00" level=war...ut" 8月 27 15:14:20 localhost.localdomain dockerd[9737]: time="2025-08-27T15:14:20.266117106+08:00" level=inf...ut" 8月 27 15:14:49 localhost.localdomain dockerd[9737]: time="2025-08-27T15:14:49.034510681+08:00" level=war...ut" 8月 27 15:14:49 localhost.localdomain dockerd[9737]: time="2025-08-27T15:14:49.034659945+08:00" level=inf...ut" 8月 27 15:15:01 localhost.localdomain dockerd[9737]: time="2025-08-27T15:15:01+08:00" level=info msg="Fir...ng" 8月 27 15:15:01 localhost.localdomain dockerd[9737]: time="2025-08-27T15:15:01+08:00" level=info msg="Fir...ng" Hint: Some lines were ellipsized, use -l to show in full. [root@localhost yywz]# /opt bash: /opt: 是一个目录 [root@localhost yywz]# cd /opt [root@localhost opt]# ll 总用量 178812 -rw-r--r--. 1 root root 129115976 3月 13 15:23 boot.bak0.bz2 drwxr-xr-x. 2 root root 4096 3月 17 17:08 bt drwx--x--x 4 root root 4096 8月 19 19:23 containerd drwxr-xr-x. 2 root root 4096 2月 23 2025 mysql drwxr-xr-x 6 prometheus prometheus 4096 8月 17 21:55 prometheus drwxr-xr-x. 2 root root 4096 10月 31 2018 rh -rw-------. 1 root root 53949998 7月 14 2023 VMwareTools-10.3.26-22085142.tar.gz drwxr-xr-x. 8 root root 4096 7月 14 2023 vmware-tools-distrib drwxr-xr-x. 2 root root 4096 3月 17 14:11 webmin [root@localhost opt]# mkdir /data mkdir: 无法创建目录"/data": 文件已存在 [root@localhost opt]# ls -l 总用量 178812 -rw-r--r--. 1 root root 129115976 3月 13 15:23 boot.bak0.bz2 drwxr-xr-x. 2 root root 4096 3月 17 17:08 bt drwx--x--x 4 root root 4096 8月 19 19:23 containerd drwxr-xr-x. 
2 root root 4096 2月 23 2025 mysql drwxr-xr-x 6 prometheus prometheus 4096 8月 17 21:55 prometheus drwxr-xr-x. 2 root root 4096 10月 31 2018 rh -rw-------. 1 root root 53949998 7月 14 2023 VMwareTools-10.3.26-22085142.tar.gz drwxr-xr-x. 8 root root 4096 7月 14 2023 vmware-tools-distrib drwxr-xr-x. 2 root root 4096 3月 17 14:11 webmin [root@localhost opt]# cd /data [root@localhost data]# git clone https://gitee.com/inge365/docker-prometheus.git fatal: 目标路径 'docker-prometheus' 已经存在,并且不是一个空目录。 [root@localhost data]# cd /docker-prometheus bash: cd: /docker-prometheus: 没有那个文件或目录 [root@localhost data]# cd docker-prometheus/ [root@localhost docker-prometheus]# docker-compose up -d Pulling alertmanager (prom/alertmanager:v0.25.0)... v0.25.0: Pulling from prom/alertmanager b08a0a826235: Pull complete d71d159599c3: Pull complete 05d21abf0535: Pull complete c4dc43cc8685: Pull complete aff850a11e31: Pull complete 6c477a8cc220: Pull complete Digest: sha256:fd4d9a3dd1fd0125108417be21be917f19cc76262347086509a0d43f29b80e98 Status: Downloaded newer image for prom/alertmanager:v0.25.0 Pulling cadvisor (google/cadvisor:latest)... latest: Pulling from google/cadvisor ff3a5c916c92: Pull complete 44a45bb65cdf: Pull complete 0bbe1a2fe2a6: Pull complete Digest: sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 Status: Downloaded newer image for google/cadvisor:latest Pulling node_exporter (prom/node-exporter:v1.5.0)... v1.5.0: Pulling from prom/node-exporter 22b70bddd3ac: Pull complete 5c12815fee55: Pull complete c0e87333d380: Pull complete Digest: sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Status: Downloaded newer image for prom/node-exporter:v1.5.0 Pulling prometheus (prom/prometheus:v2.37.6)... v2.37.6: Pulling from prom/prometheus 4399114b4c59: Pull complete 225de5a6f1e7: Pull complete d4fec713b49e: Pull complete 7ae184732db2: Pull complete fee9b37b7eaa: Pull complete 7bc64fbe5ac4: Pull complete c5808d9b102a: Pull complete 25611bd629bf: Pull complete e30138ae4e40: Pull complete f68b4ae50d77: Pull complete a8143b4a94e9: Pull complete 72c09123b9ad: Pull complete Digest: sha256:92ceb93400dd4c887c76685d258bd75b9dcfe3419b71932821e9dcc70288d851 Status: Downloaded newer image for prom/prometheus:v2.37.6 Pulling grafana (grafana/grafana:9.4.3)... 9.4.3: Pulling from grafana/grafana 895e193edb51: Pull complete a3e3778621b5: Pull complete e7cf2c69b927: Pull complete df40c119df08: Pull complete 3b29ea6a27af: Pull complete 3997cd619520: Pull complete 7e759f975aac: Pull complete ff133072f235: Pull complete f9a56094a361: Pull complete Digest: sha256:76dcf36e7d2a4110c2387c1ad6e4641068dc78d7780da516d5d666d1e4623ac5 Status: Downloaded newer image for grafana/grafana:9.4.3 Creating node-exporter ... Creating node-exporter ... error Creating alertmanager ... WARNING: Host is already in use by another container ERROR: for node-exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (ff6e14ace4a34f23421c16fb36497c92f47b4f6e68d3828dcb78f425f136bcec): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use Creating alertmanager ... error ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on e Creating cadvisor ... 
done proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: for node_exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (ff6e14ace4a34f23421c16fb36497c92f47b4f6e68d3828dcb78f425f136bcec): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (4c3949c57c7cd56926e99a458f84213040813ef5e50b04931f8e017814b69e6e): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: Encountered errors while bringing up the project. [root@localhost docker-prometheus]# systemctl daemon-reload [root@localhost docker-prometheus]# [root@localhost docker-prometheus]# systemctl restart docker [root@localhost docker-prometheus]# systemctl stop firewalld [root@localhost docker-prometheus]# docker-compose up -d Starting node-exporter ... cadvisor is up-to-date Starting alertmanager ... Starting alertmanager ... error ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (655a075b63ca30d8feb55e2af3a0d90588987435d9c9e32c6d9ee74cd6da8bd2): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 9093 -j DNAT --to-destination 172.18.0.3:9093 ! -i br-a6445a378290: iptables: No chain/target/match by that name. Starting node-exporter ... error WARNING: Host is already in use by another container ERROR: for node-exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (98bbc9308f4f561c30990777d9c07d253d0b3637ab40c49b9d3e5dd65e7ff2b3): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 9100 -j DNAT --to-destination 172.18.0.4:9100 ! -i br-a6445a378290: iptables: No chain/target/match by that name. (exit status 1)) ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (655a075b63ca30d8feb55e2af3a0d90588987435d9c9e32c6d9ee74cd6da8bd2): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 9093 -j DNAT --to-destination 172.18.0.3:9093 ! -i br-a6445a378290: iptables: No chain/target/match by that name. (exit status 1)) ERROR: for node_exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (98bbc9308f4f561c30990777d9c07d253d0b3637ab40c49b9d3e5dd65e7ff2b3): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 9100 -j DNAT --to-destination 172.18.0.4:9100 ! -i br-a6445a378290: iptables: No chain/target/match by that name. (exit status 1)) ERROR: Encountered errors while bringing up the project. [root@localhost docker-prometheus]# sudo systemctl restart docker [root@localhost docker-prometheus]# sudo systemctl status dockerdocker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) Drop-In: /etc/systemd/system/docker.service.d └─timeout.conf Active: active (running) since 三 2025-08-27 15:33:16 CST; 9s ago Docs: https://docs.docker.com Main PID: 12886 (dockerd) Tasks: 50 Memory: 33.9M CGroup: /system.slice/docker.service ├─12886 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock ├─13070 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 6379 -container-ip 172.19.0.2... 
├─13078 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 6379 -container-ip 172.19.0.2 -con... ├─13119 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.19.0.3 -... └─13127 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 80 -container-ip 172.19.0.3 -conta... 8月 27 15:33:14 localhost.localdomain dockerd[12886]: time="2025-08-27T15:33:14.743661358+08:00" level=in...rpc 8月 27 15:33:14 localhost.localdomain dockerd[12886]: time="2025-08-27T15:33:14.743680704+08:00" level=in...rpc 8月 27 15:33:14 localhost.localdomain dockerd[12886]: time="2025-08-27T15:33:14.763313066+08:00" level=in...y2" 8月 27 15:33:14 localhost.localdomain dockerd[12886]: time="2025-08-27T15:33:14.788019065+08:00" level=in...t." 8月 27 15:33:15 localhost.localdomain dockerd[12886]: time="2025-08-27T15:33:15.126541080+08:00" level=in...ss" 8月 27 15:33:16 localhost.localdomain dockerd[12886]: time="2025-08-27T15:33:16.049410152+08:00" level=in...e." 8月 27 15:33:16 localhost.localdomain dockerd[12886]: time="2025-08-27T15:33:16.087290259+08:00" level=in....24 8月 27 15:33:16 localhost.localdomain dockerd[12886]: time="2025-08-27T15:33:16.087475600+08:00" level=in...on" 8月 27 15:33:16 localhost.localdomain systemd[1]: Started Docker Application Container Engine. 8月 27 15:33:16 localhost.localdomain dockerd[12886]: time="2025-08-27T15:33:16.115125408+08:00" level=in...ck" Hint: Some lines were ellipsized, use -l to show in full. [root@localhost docker-prometheus]# sudo iptables -t nat -F [root@localhost docker-prometheus]# sudo iptables -t filter -F [root@localhost docker-prometheus]# sudo systemctl restart docker [root@localhost docker-prometheus]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3daac002ffaa google/cadvisor:latest "/usr/bin/cadvisor -…" 4 minutes ago Up 16 seconds 8080/tcp cadvisor fd2c63d29ec1 nginx:alpine "/docker-entrypoint.…" 19 minutes ago Up 16 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp yywz_nginx_1 cd603ef0e887 redis:alpine "docker-entrypoint.s…" 19 minutes ago Up 16 seconds 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp yywz_redis_1 [root@localhost docker-prometheus]# docker rm fd2c63d29ec1 Error response from daemon: You cannot remove a running container fd2c63d29ec116234c94487a17d6ea75d784d0cfd22b7e0d467cbab518258347. Stop the container before attempting removal or force remove [root@localhost docker-prometheus]# docker stop fd2c63d29ec1 fd2c63d29ec1 [root@localhost docker-prometheus]# docker rm fd2c63d29ec1 fd2c63d29ec1 [root@localhost docker-prometheus]# docker stop cd603ef0e887 cd603ef0e887 [root@localhost docker-prometheus]# docker rm cd603ef0e887 cd603ef0e887 [root@localhost docker-prometheus]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3daac002ffaa google/cadvisor:latest "/usr/bin/cadvisor -…" 6 minutes ago Up 2 minutes 8080/tcp cadvisor [root@localhost docker-prometheus]# docker stop 3daac002ffaa 3daac002ffaa [root@localhost docker-prometheus]# docker rm 3daac002ffaa 3daac002ffaa [root@localhost docker-prometheus]# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES [root@localhost docker-prometheus]# docker-compose up -d Starting node-exporter ... Starting alertmanager ... Starting alertmanager ... error WARNING: Host is already in use by another container ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on e Starting node-exporter ... 
error proxy: listen tcp4 0.0.0.0:9093: bind: address already in use WARNING: Host is already in use by another container ERROR: for node-exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (d918c84a550f7b946d78470572034217e4fb96b010e7e4af0b0999844abd017f): Error starting userl Creating cadvisor ... done ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (a307f195dd99a59726f03ce9fd5b6ae8b2ae07e0551548c3247604349183d622): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: for node_exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (d918c84a550f7b946d78470572034217e4fb96b010e7e4af0b0999844abd017f): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use ERROR: Encountered errors while bringing up the project. [root@localhost docker-prometheus]# sudo systemctl restart docker [root@localhost docker-prometheus]# docker-compose up -d cadvisor is up-to-date Starting node-exporter ... Starting node-exporter ... error WARNING: Host is already in use by another container ERROR: for node-exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (ae6db99f5f30989777bab25b89dbd620f2056a872aa85661bb8aad45032d5302): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use Starting alertmanager ... error ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (17965ccc2adb43b6158e5ddd99dd13ac17c869625c0daed3e6099ba7177f8650): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: for node_exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (ae6db99f5f30989777bab25b89dbd620f2056a872aa85661bb8aad45032d5302): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (17965ccc2adb43b6158e5ddd99dd13ac17c869625c0daed3e6099ba7177f8650): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: Encountered errors while bringing up the project. [root@localhost docker-prometheus]# docker system prune -f Deleted Containers: 7f52ab012fe533dab192615c8d057fbc3c9305774241bdf3c49d226b858d6523 8b45998c395e05165e680ee482791a34a6287da38f18afe9c748d60e0600c45a Deleted Networks: yywz_default Total reclaimed space: 0B [root@localhost docker-prometheus]# docker-compose down Stopping cadvisor ... done Removing cadvisor ... done Removing network docker-prometheus_monitoring [root@localhost docker-prometheus]# docker-compose up -d Creating network "docker-prometheus_monitoring" with driver "bridge" Creating node-exporter ... Creating node-exporter ... error Creating cadvisor ... WARNING: Host is already in use by another container ERROR: for node-exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (f0abf54c178140c02db6497198b8cdd574c77323fdce7217292efd2df1b09080): Error starting userl Creating alertmanager ... 
error WARNING: Host is already in use by another container ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (b86c6f893b30deb2e189aba9a1a9ca972401f43bba55d8e4eb8796780e658bc1): Error starting userland Creating cadvisor ... done ERROR: for node_exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (f0abf54c178140c02db6497198b8cdd574c77323fdce7217292efd2df1b09080): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (b86c6f893b30deb2e189aba9a1a9ca972401f43bba55d8e4eb8796780e658bc1): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: Encountered errors while bringing up the project. [root@localhost docker-prometheus]# docker-compose logs alertmanager Attaching to alertmanager [root@localhost docker-prometheus]# docker-compose logs node-exporter ERROR: No such service: node-exporter [root@localhost docker-prometheus]# systemctl daemon-reload [root@localhost docker-prometheus]# sudo systemctl restart docker [root@localhost docker-prometheus]# sudo systemctl status dockerdocker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) Drop-In: /etc/systemd/system/docker.service.d └─timeout.conf Active: active (running) since 三 2025-08-27 15:40:36 CST; 10s ago Docs: https://docs.docker.com Main PID: 16222 (dockerd) Tasks: 14 Memory: 27.6M CGroup: /system.slice/docker.service └─16222 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock 8月 27 15:40:35 localhost.localdomain dockerd[16222]: time="2025-08-27T15:40:35.582025207+08:00" level=in...rpc 8月 27 15:40:35 localhost.localdomain dockerd[16222]: time="2025-08-27T15:40:35.582038622+08:00" level=in...rpc 8月 27 15:40:35 localhost.localdomain dockerd[16222]: time="2025-08-27T15:40:35.599443534+08:00" level=in...y2" 8月 27 15:40:35 localhost.localdomain dockerd[16222]: time="2025-08-27T15:40:35.611122282+08:00" level=in...t." 8月 27 15:40:35 localhost.localdomain dockerd[16222]: time="2025-08-27T15:40:35.853104234+08:00" level=in...ss" 8月 27 15:40:36 localhost.localdomain dockerd[16222]: time="2025-08-27T15:40:36.461289460+08:00" level=in...e." 8月 27 15:40:36 localhost.localdomain dockerd[16222]: time="2025-08-27T15:40:36.495281733+08:00" level=in....24 8月 27 15:40:36 localhost.localdomain dockerd[16222]: time="2025-08-27T15:40:36.495461424+08:00" level=in...on" 8月 27 15:40:36 localhost.localdomain systemd[1]: Started Docker Application Container Engine. 8月 27 15:40:36 localhost.localdomain dockerd[16222]: time="2025-08-27T15:40:36.527032657+08:00" level=in...ck" Hint: Some lines were ellipsized, use -l to show in full. [root@localhost docker-prometheus]# docker-compose up -d Starting alertmanager ... Starting node-exporter ... Starting alertmanager ... error WARNING: Host is already in use by another container ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (8417fe6ebd4c207f2db52a925bdd7f5924c9d48f1f15d67907df06996b445fdf): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use Starting node-exporter ... 
error ERROR: for node-exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (8641cb448c49665ec95b7dcbdc39c78cf0f47f54ad56a6da4fac5898c1778489): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (8417fe6ebd4c207f2db52a925bdd7f5924c9d48f1f15d67907df06996b445fdf): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: for node_exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (8641cb448c49665ec95b7dcbdc39c78cf0f47f54ad56a6da4fac5898c1778489): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use ERROR: Encountered errors while bringing up the project. [root@localhost docker-prometheus]# vim docker-compose.yml [root@localhost docker-prometheus]# sudo lsof -i :9093 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME alertmana 1161 prometheus 7u IPv6 31306 0t0 TCP *:copycat (LISTEN) [root@localhost docker-prometheus]# docker ps --format "table {{.Names}}\t{{.Ports}}" NAMES PORTS cadvisor 8080/tcp [root@localhost docker-prometheus]# docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" | grep -E "(9093|9100)" [root@localhost docker-prometheus]# docker-compose up -d Starting alertmanager ... cadvisor is up-to-date Starting alertmanager ... error WARNING: Host is already in use by another container ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (c9acbc4520a2f9ab5b40b5936c3b61071c8acb445dbd858c2b126d3dcf9e101f): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use Starting node-exporter ... error ERROR: for node-exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (f46135cd7dc0aa34069a7b825c8ad09a691c6a40f429f7714a2f052f14d348d6): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (c9acbc4520a2f9ab5b40b5936c3b61071c8acb445dbd858c2b126d3dcf9e101f): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: for node_exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (f46135cd7dc0aa34069a7b825c8ad09a691c6a40f429f7714a2f052f14d348d6): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use ERROR: Encountered errors while bringing up the project. [root@localhost docker-prometheus]# sudo lsof -ti:9093 1161 [root@localhost docker-prometheus]# sudo lsof -ti:9100 714 1167 [root@localhost docker-prometheus]# udo kill -9 714 bash: udo: 未找到命令... [root@localhost docker-prometheus]# udo kill -9 <714> bash: 未预期的符号 `714' 附近有语法错误 [root@localhost docker-prometheus]# sudo kill -9 <1167> bash: 未预期的符号 `1167' 附近有语法错误 [root@localhost docker-prometheus]# sudo kill -9 1167 [root@localhost docker-prometheus]# sudo kill -9 1161 [root@localhost docker-prometheus]# sudo kill -9 714 [root@localhost docker-prometheus]# docker-compose up -d Starting node-exporter ... Starting alertmanager ... Starting node-exporter ... 
error WARNING: Host is already in use by another container ERROR: for node-exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (a1e70481732ff4211fb0830d456aba632427d7edbf1aed89182b54a2dc1ac0af): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use Starting alertmanager ... error ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (d7a7bb69496f07386701cc39d4f9da755beb85a528753132c9a70048af1c917c): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: for node_exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (a1e70481732ff4211fb0830d456aba632427d7edbf1aed89182b54a2dc1ac0af): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (d7a7bb69496f07386701cc39d4f9da755beb85a528753132c9a70048af1c917c): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: Encountered errors while bringing up the project. [root@localhost docker-prometheus]# sudo systemctl restart docker [root@localhost docker-prometheus]# docker-compose up -d Starting node-exporter ... Starting alertmanager ... Starting node-exporter ... error WARNING: Host is already in use by another container ERROR: for node-exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (d4bd8914814c09ec94f19902075b8a0b7c2f39feba0a2efc1c6018e52fb01061): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use Starting alertmanager ... error ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (a1f8e19b7f96a87e287d4d2b33ce00e81a5d5c0d1bc2192c17f9bf39ace81f16): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: for node_exporter Cannot start service node_exporter: driver failed programming external connectivity on endpoint node-exporter (d4bd8914814c09ec94f19902075b8a0b7c2f39feba0a2efc1c6018e52fb01061): Error starting userland proxy: listen tcp4 0.0.0.0:9100: bind: address already in use ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (a1f8e19b7f96a87e287d4d2b33ce00e81a5d5c0d1bc2192c17f9bf39ace81f16): Error starting userland proxy: listen tcp4 0.0.0.0:9093: bind: address already in use ERROR: Encountered errors while bringing up the project. [root@localhost docker-prometheus]# cd /opt [root@localhost opt]# cd /data [root@localhost data]# ks bash: ks: 未找到命令... [root@localhost data]# ls docker-prometheus [root@localhost data]# ls -l 总用量 4 drwxr-xr-x 6 root root 4096 8月 27 15:42 docker-prometheus [root@localhost data]# cd /opt [root@localhost opt]# docker-compose.yml bash: docker-compose.yml: 未找到命令... 
[root@localhost opt]# cd docker-compose.yml
bash: cd: docker-compose.yml: 没有那个文件或目录
[root@localhost opt]# find docker-compose.yml
find: ‘docker-compose.yml’: 没有那个文件或目录
[root@localhost opt]# cd /data/docker-prometheus/
[root@localhost docker-prometheus]# ls l
ls: 无法访问l: 没有那个文件或目录
[root@localhost docker-prometheus]# ls -l
总用量 52
drwxr-xr-x 2 root root 4096 8月 20 15:12 alertmanager
-rw-r--r-- 1 root root 2634 8月 20 14:36 docker-compose.yaml
drwxr-xr-x 2 root root 4096 8月 20 14:36 grafana
-rw-r--r-- 1 root root 35181 8月 20 14:36 LICENSE
drwxr-xr-x 2 root root 4096 8月 22 16:45 prometheus
-rw-r--r-- 1 root root 0 8月 20 14:36 README.md
[root@localhost docker-prometheus]# vim docker-compose.yaml
[root@localhost docker-prometheus]# docker-compose.yml
bash: docker-compose.yml: 未找到命令...
[root@localhost docker-prometheus]# docker-compose.yml
bash: docker-compose.yml: 未找到命令...
[root@localhost docker-prometheus]# sudo systemctl restart docker
[root@localhost docker-prometheus]# docker-compose up -d
Recreating node-exporter ...
Recreating alertmanager ...
cadvisor is up-to-date
Recreating alertmanager ... error
ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on e
Recreating node-exporter ... done
proxy: listen tcp4 0.0.0.0:9094: bind: address already in use
ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (845ba9f38bf09748f96a5e67761382648b7a08f0d1ca4aea45e9d486034ee09f): Error starting userland proxy: listen tcp4 0.0.0.0:9094: bind: address already in use
ERROR: Encountered errors while bringing up the project.
[root@localhost docker-prometheus]# docker-compose up -d
Removing alertmanager
node-exporter is up-to-date
Recreating 53b2433d3f44_alertmanager ...
cadvisor is up-to-date
Recreating 53b2433d3f44_alertmanager ... error
ERROR: for 53b2433d3f44_alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (9f74c9d6ee9219aa21f3a075ee643a8da9a3ee0a7cad5d8fc5e7497c5784c400): Error starting userland proxy: listen tcp4 0.0.0.0:9094: bind: address already in use
ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (9f74c9d6ee9219aa21f3a075ee643a8da9a3ee0a7cad5d8fc5e7497c5784c400): Error starting userland proxy: listen tcp4 0.0.0.0:9094: bind: address already in use
ERROR: Encountered errors while bringing up the project.
[root@localhost docker-prometheus]# sudo lsof -i :9093 || echo "端口 9093 已释放"
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
alertmana 17952 prometheus 7u IPv6 177546 0t0 TCP *:copycat (LISTEN)
[root@localhost docker-prometheus]# sudo lsof -i :9100 || echo "端口 9100 已释放"
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
prometheu 17933 prometheus 30u IPv6 179703 0t0 TCP localhost:54442->localhost:jetdirect (ESTABLISHED)
node_expo 17972 prometheus 3u IPv6 177682 0t0 TCP *:jetdirect (LISTEN)
node_expo 17972 prometheus 6u IPv6 177755 0t0 TCP localhost:jetdirect->localhost:54442 (ESTABLISHED)
[root@localhost docker-prometheus]# sudo iptables -t nat -L -n | grep -E "(9093|9100)"
MASQUERADE tcp -- 172.18.0.3 172.18.0.3 tcp dpt:9100
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:9101 to:172.18.0.3:9100
[root@localhost docker-prometheus]# docker-compose down
Stopping node-exporter ... done
Stopping cadvisor ... done
Removing alertmanager ... done
Removing node-exporter ... done
Removing cadvisor ... done
Removing 53b2433d3f44_alertmanager ... done
Removing network docker-prometheus_monitoring
[root@localhost docker-prometheus]# docker-compose up -d
Creating network "docker-prometheus_monitoring" with driver "bridge"
Creating node-exporter ...
Creating cadvisor ...
Creating alertmanager ...
Creating alertmanager ... error
Creating node-exporter ... done
Creating cadvisor ... done
proxy: listen tcp4 0.0.0.0:9094: bind: address already in use
ERROR: for alertmanager Cannot start service alertmanager: driver failed programming external connectivity on endpoint alertmanager (fc63ea1fc78609d94171482fa5d1a2fb2c3ba3ed83c2d8088806b8a5613cb2ac): Error starting userland proxy: listen tcp4 0.0.0.0:9094: bind: address already in use
ERROR: Encountered errors while bringing up the project.
[root@localhost docker-prometheus]# docker ps --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" | grep 9094
[root@localhost docker-prometheus]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ede1cff2fba0 google/cadvisor:latest "/usr/bin/cadvisor -…" 56 seconds ago Up 55 seconds 8080/tcp cadvisor
1a149cda0ce5 prom/node-exporter:v1.5.0 "/bin/node_exporter …" 56 seconds ago Up 55 seconds 0.0.0.0:9101->9100/tcp, :::9101->9100/tcp node-exporter
[root@localhost docker-prometheus]# vim docker-compose.yaml
[root@localhost docker-prometheus]# docker-compose up -d
Recreating alertmanager ...
node-exporter is up-to-date
Recreating alertmanager ... done
Creating prometheus ...
Creating prometheus ... error
ERROR: for prometheus Cannot start service prometheus: driver failed programming external connectivity on endpoint prometheus (ddd41fce73c70167f23ff37ff3001e101134785f9423893a5aae147768420d6a): Error starting userland proxy: listen tcp4 0.0.0.0:9090: bind: address already in use
ERROR: for prometheus Cannot start service prometheus: driver failed programming external connectivity on endpoint prometheus (ddd41fce73c70167f23ff37ff3001e101134785f9423893a5aae147768420d6a): Error starting userland proxy: listen tcp4 0.0.0.0:9090: bind: address already in use
ERROR: Encountered errors while bringing up the project.
[root@localhost docker-prometheus]# vim docker-compose.yaml

version: '3.3'
volumes:
  prometheus_data: {}
  grafana_data: {}
networks:
  monitoring:
    driver: bridge
services:
  prometheus:
    image: prom/prometheus:v2.37.6
    container_name: prometheus
    restart: always
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./prometheus/:/etc/prometheus/
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
      # hot-reload of the configuration
      - '--web.enable-lifecycle'
      # admin API
      #- '--web.enable-admin-api'
      # maximum retention of historical data, default is 15 days
      - '--storage.tsdb.retention.time=30d'
    networks:
      - monitoring
    links:
      - alertmanager

How should I change this?
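A few observations on the session above, in the order the problems appeared. The initial pull failures (the timeouts against registry-1.docker.io) went away once /etc/docker/daemon.json pointed at mirrors that are actually reachable and dockerd was restarted. If pulls start timing out again, a quick sanity check that the mirror list was really loaded is (a minimal sketch reusing the daemon.json already shown above):

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ docker info | grep -A 10 'Registry Mirrors'    # the configured mirrors should be listed here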
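The repeated "bind: address already in use" errors on 9090, 9093, 9100 and later 9094 are not a Docker or compose problem. The lsof checks on :9093 and :9100 show host processes owned by the prometheus user (alertmanager, node_exporter and a prometheus process, apparently the copies installed under /opt/prometheus) already listening on those ports, and 9094 is Alertmanager's default cluster port, so the host Alertmanager very likely holds that one as well. Killing the PIDs only helps until they are respawned, which is exactly what the later lsof output shows: new PIDs listening on the same ports. If the containerized stack is supposed to own these ports, find and stop the host copies first; the unit names below are an assumption, so adjust them to whatever the second command actually lists:

$ sudo ss -ltnp | grep -E ':(9090|9093|9094|9100)\b'                                   # who holds the ports right now
$ systemctl list-units --type=service | grep -Ei 'prometheus|alertmanager|node_exporter'
$ sudo systemctl stop prometheus alertmanager node_exporter        # assumed unit names
$ sudo systemctl disable prometheus alertmanager node_exporter     # keep them from coming back at boot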
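The iptables "No chain/target/match by that name" errors showed up right after systemctl stop firewalld, and manual flushes like iptables -t nat -F / -t filter -F have the same effect: both can wipe the DOCKER chains that dockerd programs for port publishing. Restarting Docker rebuilds them, which is why the later attempts only failed on ports that were genuinely occupied. As a sketch, the safe order when turning firewalld off is:

$ sudo systemctl stop firewalld && sudo systemctl disable firewalld
$ sudo systemctl restart docker    # recreates the DOCKER iptables chains and the published-port rules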
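If the host-level exporters have to keep running, the other option is to publish the compose services on free host ports instead of competing for 9090/9093/9100; the transcript already does this for node-exporter (9101->9100). The fragment below is only an illustrative sketch of what the ports: entries in docker-compose.yaml could look like; the left-hand host ports 9091/9095/9101 are example values, the container-side ports stay unchanged, and the service keys must match the ones in the file shown above:

  prometheus:
    ports:
      - "9091:9090"
  alertmanager:
    ports:
      - "9095:9093"
  node_exporter:
    ports:
      - "9101:9100"

After saving the file, running docker-compose up -d again recreates only the services whose configuration changed.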