[Solved] CentOS 7: cannot restart the network; systemctl status network.service reports: code=exited, status=1/FAILURE

This post walks through configuring networking for a CentOS 7.5 guest running in VMware and the problem hit along the way. When restarting the network service fails, it analyzes the specific error output and gives fixes, including checking the virtual machine's network adapter settings and correcting the ifcfg-ens-xx file.

Environment

  • Host: Windows 10 64-bit
  • Virtual machine: VMware 12
    • CentOS 7.5 64-bit

Walkthrough

[root@slave1 cyg]# service network restart
Restarting network (via systemctl):  Job for network.service failed because the control process exited with error code. See "systemctl status network.service" and "journalctl -xe" for details.
                                                           [FAILED]
[root@slave1 cyg]# systemctl status network.service
● network.service - LSB: Bring up/down networking
   Loaded: loaded (/etc/rc.d/init.d/network; bad; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2019-07-30 10:52:23 CST; 25s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 17266 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=1/FAILURE)

Jul 30 10:52:23 slave1 network[17266]: RTNETLINK answers: File exists
Jul 30 10:52:23 slave1 network[17266]: RTNETLINK answers: File exists
Jul 30 10:52:23 slave1 network[17266]: RTNETLINK answers: File exists
Jul 30 10:52:23 slave1 network[17266]: RTNETLINK answers: File exists
Jul 30 10:52:23 slave1 network[17266]: RTNETLINK answers: File exists
Jul 30 10:52:23 slave1 network[17266]: RTNETLINK answers: File exists
Jul 30 10:52:23 slave1 systemd[1]: network.service: control process exited, code=...s=1
Jul 30 10:52:23 slave1 systemd[1]: Failed to start LSB: Bring up/down networking.
Jul 30 10:52:23 slave1 systemd[1]: Unit network.service entered failed state.
Jul 30 10:52:23 slave1 systemd[1]: network.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
[root@slave1 cyg]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0c:29:dc:e2:aa brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:d7:c9:33 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:d7:c9:33 brd ff:ff:ff:ff:ff:ff
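
The repeated "RTNETLINK answers: File exists" lines usually mean something else is already managing the interface or has left addresses and routes behind; on CentOS 7 that something is typically NetworkManager. A quick diagnostic of my own (not from the original post):

systemctl is-active NetworkManager   # "active" means it competes with the network init script
ip route show                        # look for routes already installed for ens33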

Prerequisites to confirm:
(1) The virtual machine's Settings > Network Adapter is configured correctly. [Mine uses NAT mode.]

If unsure, reconfigure it: VM Settings > Network Adapter, remove the adapter, then add it back (NAT mode).
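
In NAT mode, the static IP, netmask, and gateway in the guest must sit on the subnet provided by VMware's NAT network (VMnet8, under Edit > Virtual Network Editor). As a rough check of my own, assuming the gateway is 192.168.172.2 as in the ifcfg example below, ping it once the interface comes up:

ping -c 3 192.168.172.2   # replies mean the NAT subnet and gateway match the static config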

(2) The /etc/sysconfig/network-scripts/ifcfg-ens-xx file is configured correctly.

The xx in ifcfg-ens-xx should match the interface name shown by ip addr; here it is ifcfg-ens33.

[root@slave1 zookeeper-3.4.5]# ip addr
....
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:dc:e2:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.172.11/24 brd 192.168.172.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fedc:e2aa/64 scope link 
       valid_lft forever preferred_lft forever
...
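
To see which ifcfg files actually exist before editing (a quick check of my own, not in the original post):

ls /etc/sysconfig/network-scripts/ifcfg-*   # expect ifcfg-ens33 here (plus ifcfg-lo)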

For example:

[root@slave1 zookeeper-3.4.5]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
IPADDR=192.168.172.11
NETMASK=255.255.255.0
GATEWAY=192.168.172.2
DNS1=114.114.114.114
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="d11d791b-4b2a-4f8d-b2a3-32b0a8d281d6"
DEVICE="ens33"
HWADDR="00:0c:29:dc:e2:aa"
ONBOOT="yes"
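
One extra sanity check of my own (not from the original post): HWADDR and DEVICE in this file should match the MAC address and interface name the kernel reports, or the network script will refuse to bring the interface up.

ip -o link show ens33   # the link/ether value should equal HWADDR above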

Solution:

Stop NetworkManager:

systemctl stop NetworkManager

Restart the network:

systemctl start network.service

Check the network status:

systemctl status network.service
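
If stopping NetworkManager is what resolves the conflict, it will come back on the next boot. Optionally (my own addition, not part of the original fix), make the change persistent:

systemctl disable NetworkManager   # keep NetworkManager from starting at boot
systemctl enable network           # make sure the legacy network service starts instead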

Sometimes, simply reconfiguring the adapter (VM Settings > Network Adapter: remove it, then add it back) is all it takes.

OK~
