[XHashx Ops Notes] 1+X Junior Certification, Paper B


IP address of xserver1: 192.168.100.11

IP address of xserver2: 192.168.100.12

Use the server IPs specified in your own exam.

Network Management (70 points)

In eNSP, configure an S5700 switch: create vlan 2, vlan 3, and vlan 1004 with a single command; using a port group, set ports 1-5 to access mode and add them to vlan 2; set port 10 to trunk mode and permit vlan 3; create Layer 3 interface vlan 2 with IP address 172.16.2.1/24 and Layer 3 interface vlan 1004 with IP address 192.168.4.2/30; finally, add a default route with next hop 192.168.4.1. (Use complete commands.)
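
The note does not spell out the answer to this task; the following is a sketch of a complete VRP command sequence, assuming the S5700's default GigabitEthernet0/0/x port naming (check the interface names on the actual exam topology):

<Huawei> system-view
[Huawei] vlan batch 2 3 1004
[Huawei] port-group 1
[Huawei-port-group-1] group-member GigabitEthernet 0/0/1 to GigabitEthernet 0/0/5
[Huawei-port-group-1] port link-type access
[Huawei-port-group-1] port default vlan 2
[Huawei-port-group-1] quit
[Huawei] interface GigabitEthernet 0/0/10
[Huawei-GigabitEthernet0/0/10] port link-type trunk
[Huawei-GigabitEthernet0/0/10] port trunk allow-pass vlan 3
[Huawei-GigabitEthernet0/0/10] quit
[Huawei] interface Vlanif 2
[Huawei-Vlanif2] ip address 172.16.2.1 24
[Huawei-Vlanif2] quit
[Huawei] interface Vlanif 1004
[Huawei-Vlanif1004] ip address 192.168.4.2 30
[Huawei-Vlanif1004] quit
[Huawei] ip route-static 0.0.0.0 0.0.0.0 192.168.4.1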

Local YUM Source Management (20 points)

Start the provided xserver1 VM in VMware (set its IP to 192.168.100.11 and its hostname to xserver1). In the VM's /root directory there is an image file named CentOS-7-x86_64-DVD-1511.iso; use it to configure a local yum source. The image must be mounted at /opt/centos. Write a local.repo file that makes the packages on the image installable, and submit the contents of local.repo as text in the answer box.

1. Set the IP of the xserver1 VM to 192.168.100.11

(1) In the VM settings, first check which network adapter is configured in host-only mode.

(2) Run ip addr in xserver1 to list the network adapters; from the adapter ordering, the adapter whose IP must be changed is eno16777736.

(3) Edit the adapter's configuration file with vi /etc/sysconfig/network-scripts/ifcfg-eno16777736; press Esc to return to command mode and save the file with :wq!.

(4) Restart networking with systemctl restart network so the configuration takes effect; a terminal client can then connect.

2. Set the hostname to xserver1

[root@localhost ~]# hostnamectl set-hostname xserver1
[root@localhost ~]# bash
[root@xserver1 ~]# 

3. Create the /opt/centos directory

[root@xserver1 ~]# mkdir /opt/centos

4. Mount the CentOS image located in /root

[root@xserver1 ~]# mount CentOS-7-x86_64-DVD-1511.iso /opt/centos/
mount: /dev/loop0 is write-protected, mounting read-only
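
The mount above does not survive a reboot; if the exam machine may be restarted, an /etc/fstab entry (an optional step beyond the task itself) keeps the ISO mounted:

[root@xserver1 ~]# echo "/root/CentOS-7-x86_64-DVD-1511.iso /opt/centos iso9660 defaults,loop 0 0" >> /etc/fstab
[root@xserver1 ~]# mount -a    // re-read fstab to confirm the entry mounts cleanly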

5. Configure the (local) yum source

(1) The exam environment may have no Internet access, so first move the CentOS-prefixed repo files under /etc/yum.repos.d (these are generated by the installer and only work with network access) into a backup directory; this keeps the later yum checks and package installs clean.

[root@xserver1 ~]# mkdir yum.bak
[root@xserver1 ~]# mv /etc/yum.repos.d/Cent* yum.bak/

(2) Create the file local.repo in the /etc/yum.repos.d/ directory

[root@xserver1 ~]# vim /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=file:///opt/centos
enabled=1
gpgcheck=0

(3) Verify that the yum source is usable

[root@xserver1 ~]# yum repolist all


FTP Installation and Use (20 points)

On the xserver1 VM, install an FTP service and set its shared directory to /opt. Then start the provided xserver2 VM in VMware (set its IP to 192.168.100.12 and hostname to xserver2) and create a yum source file ftp.repo on it that uses xserver1's FTP source (use the hostname, not the IP, for the FTP address in the file). When done, submit xserver2's ftp.repo file as text in the answer box.

1. Install the FTP service (vsftpd)

[root@xserver1 ~]# yum -y install vsftpd

2. Configure the FTP service on xserver1

(1) Set the FTP shared directory to /opt by editing the vsftpd.conf configuration file and adding an anon_root line

[root@xserver1 ~]# vim /etc/vsftpd/vsftpd.conf

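The screenshot of the edited file is not preserved; a sketch of the relevant lines, assuming the stock CentOS vsftpd.conf (vsftpd rejects inline comments, so the notes sit on their own lines):

# present by default in the CentOS package
anonymous_enable=YES
# the line added by hand: serve /opt as the anonymous FTP root
anon_root=/opt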

(2) Restart the vsftpd service

[root@xserver1 ~]# systemctl restart vsftpd

(3) Permanently disable SELinux (note: the sed below writes SELINUX=disabled rather than permissive, and it only takes effect after a reboot; run setenforce 0 for an immediate change), then stop firewalld and keep it from starting at boot

[root@xserver1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@xserver1 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.

(4) Verify that the FTP service is configured correctly by entering ftp://192.168.100.11 in the File Explorer address bar.

3. Set the xserver2 VM's IP to 192.168.100.12 and its hostname to xserver2 (same method as for xserver1)

4. Edit /etc/hosts on xserver2 so the server can resolve the hostnames xserver1 and xserver2

[root@xserver2 ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.11 xserver1    // add the host mapping for xserver1
192.168.100.12 xserver2    // add the host mapping for xserver2

5. Copy the hosts file from xserver2 to xserver1 with scp, overwriting xserver1's original, then verify that each host can ping the other by hostname

[root@xserver2 ~]# scp /etc/hosts root@192.168.100.11:/etc/hosts
The authenticity of host '192.168.100.11 (192.168.100.11)' can't be established.
ECDSA key fingerprint is 28:dd:fc:84:89:ec:c0:cf:fb:8b:0a:92:9e:0f:9f:73.
Are you sure you want to continue connecting (yes/no)? yes
root@192.168.100.11's password: 
hosts                                                     100%  206     0.2KB/s   00:00  


6. Configure the (FTP) yum source

(1) As before, move the CentOS-prefixed repo files under /etc/yum.repos.d into a backup directory

[root@xserver2 ~]# mkdir yum.bak
[root@xserver2 ~]# mv /etc/yum.repos.d/Cent* yum.bak/

(2) Create the file ftp.repo in the /etc/yum.repos.d/ directory

[root@xserver2 ~]# vi /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=ftp://xserver1/centos
enabled=1
gpgcheck=0

(3) Verify that the yum source is usable

Samba Management (30 points)

On the xserver1 VM, install the packages required for the Samba service and share the /opt/share directory of xserver1 via Samba (create the directory if it does not exist). When done, submit the [share] section of the Samba configuration file and the output of netstat -ntpl as text in the answer box.
1. Install the Samba service on xserver1

[root@xserver1 ~]# yum -y install samba

2. Edit the Samba configuration file /etc/samba/smb.conf

(1) Set the following parameters in the [global] section; modify them if they already exist, add them if they do not

        load printers = no
        disable spoolss = yes

(2) Add the following section

[share]
        browseable = yes
        path = /opt/share
        public = yes
        writable = yes

3. Create the /opt/share shared directory and grant all users full access

[root@xserver1 ~]# mkdir /opt/share
[root@xserver1 ~]# chmod 777 /opt/share

4. Start the smb and nmb services and enable them at boot

[root@xserver1 ~]# systemctl start smb && systemctl enable smb
[root@xserver1 ~]# systemctl start nmb && systemctl enable nmb

5. Check that the smb ports are listening

[root@xserver1 ~]# netstat -ntpl | grep smb
tcp        0      0 0.0.0.0:139             0.0.0.0:*               LISTEN      12233/smbd
tcp        0      0 0.0.0.0:445             0.0.0.0:*               LISTEN      12233/smbd
tcp6       0      0 :::139                  :::*                    LISTEN      12233/smbd
tcp6       0      0 :::445                  :::*                    LISTEN      12233/smbd

6. Create a Samba user (it must be an existing system user; root is used here), then restart the smb service

[root@xserver1 ~]# smbpasswd -a root
New SMB password:
Retype new SMB password:
Added user root.
[root@xserver1 ~]# systemctl restart smb
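
To check the share from the command line instead of Windows, smbclient (from the samba-client package) can be used; a quick sketch:

[root@xserver1 ~]# yum -y install samba-client
[root@xserver1 ~]# smbclient //192.168.100.11/share -U root    // enter the smbpasswd password; an "smb: \>" prompt confirms the share works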

Master-Slave Database Management (40 points)

Install the mariadb database on xserver1 and xserver2 and configure a master-slave pair (xserver1 as master, xserver2 as slave) so the two databases stay in sync. When done, run "show slave status \G" in the database on xserver2 to query the slave's replication status, and submit the result as text in the answer box.

1. Install mariadb on xserver1 and xserver2

[root@xserver1 ~]# yum -y install mariadb mariadb-server
[root@xserver2 ~]# yum -y install mariadb mariadb-server

2. Start the database service on both nodes and enable it at boot

[root@xserver1 ~]# systemctl start mariadb && systemctl enable mariadb
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@xserver2 ~]# systemctl start mariadb && systemctl enable mariadb
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.

3. Configure the master database on xserver1

(1) Initialize the database

[root@xserver1 ~]# mysql_secure_installation 
/usr/bin/mysql_secure_installation: line 379: find_mysql_client: command not found

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):     // no password is set yet, just press Enter
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y    // enter y (or just press Enter) to set the database root password
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y    // enter y (or just press Enter) to remove the anonymous users
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n    // enter n to keep remote root logins allowed
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y    // enter y (or just press Enter) to drop the default test database and revoke its access privileges
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y    // enter y (or just press Enter) to reload the privilege tables so the settings take effect immediately
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

(2) Edit the database configuration file /etc/my.cnf

[root@xserver1 ~]# vim /etc/my.cnf
[mysqld]
log_bin=mysql-bin    // enable binary logging
binlog_ignore_db=mysql    // do not replicate the mysql system database
server_id=11    // unique ID within the replication cluster, conventionally the last octet of the server's IP
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d


(3) Restart the database service, then log in to the database to configure it

[root@xserver1 ~]# systemctl restart mariadb
[root@xserver1 ~]# mysql -uroot -p000000    // -u is followed by the database user, -p by the password
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 10
Server version: 5.5.44-MariaDB MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> 

(4) Grant the root user all privileges when logging in from any IP

MariaDB [(none)]> grant all privileges on *.* to root@'%' identified by '000000';
Query OK, 0 rows affected (0.00 sec)

(5) Create a user named user for the slave connection (this user can only log in to the master's database from the slave node), with password 000000, and grant it the replication privilege the slave needs to sync from the master

MariaDB [(none)]> grant replication slave on *.* to 'user'@'xserver2' identified by '000000';
Query OK, 0 rows affected (0.00 sec)
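
Optionally, confirm that binary logging is active on the master; the File/Position pair reported by the statement below is where a slave starts reading when no explicit position is given:

MariaDB [(none)]> show master status;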

4. Configure the slave database on xserver2

(1) Initialize the database (same as on xserver1)

[root@xserver2 ~]# mysql_secure_installation 
/usr/bin/mysql_secure_installation: line 379: find_mysql_client: command not found

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] 
New password: 
Re-enter new password: 
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] 
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] 
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] 
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

(2) Edit the database configuration file /etc/my.cnf

[root@xserver2 ~]# vim /etc/my.cnf
[mysqld]
log_bin=mysql-bin
binlog_ignore_db=mysql
server_id=12
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd

[mysqld_safe]
log-error=/var/log/mariadb/mariadb.log
pid-file=/var/run/mariadb/mariadb.pid

#
# include all files from the config directory
#
!includedir /etc/my.cnf.d

(3) Restart the database service, then log in to the database to configure it

[root@xserver2 ~]# mysql -uroot -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 2
Server version: 5.5.44-MariaDB-log MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> 

(4) Configure the slave's connection parameters for the master: master_host is the master's hostname xserver1, master_user is the user user created on the master's database, and master_password is user's password

MariaDB [(none)]> change master to master_host='xserver1',master_user='user',master_password='000000';
Query OK, 0 rows affected (0.07 sec)

(5) Start the slave threads

MariaDB [(none)]> start slave;
Query OK, 0 rows affected (0.01 sec)

5. Verify that master-slave replication is configured correctly (both Slave_IO_Running and Slave_SQL_Running should be Yes)

MariaDB [(none)]> show slave status\G;    // the stray ";" after \G is what produces the harmless "ERROR: No query specified" at the end of the output
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: xserver1
                  Master_User: user
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 394
               Relay_Log_File: mariadb-relay-bin.000002
                Relay_Log_Pos: 678
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes    // Yes means the IO thread is replicating correctly
            Slave_SQL_Running: Yes    // Yes means the SQL thread is replicating correctly
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 0
                   Last_Error: 
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 394
              Relay_Log_Space: 974
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 0
               Last_SQL_Error: 
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 11
1 row in set (0.00 sec)

ERROR: No query specified
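
Beyond the two status flags, a quick end-to-end check (optional; the database name sync_test is arbitrary) is to create a database on the master and look for it on the slave:

MariaDB [(none)]> create database sync_test;    // run on xserver1 (master)
MariaDB [(none)]> show databases;    // run on xserver2 (slave): sync_test should appear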

LNMP Environment Deployment (40 points)

On the xserver1 node, install a single-node LNMP environment. The yum sources needed are CentOS-7-x86_64-DVD-1511.iso and the lnmp directory (both under /root). After installing and configuring the LNMP environment, query the status of the database, nginx, and php services in turn, and check the open ports with netstat -ntpl. Submit the service status results followed by the port listing as text in the answer box.

1. Configure the yum source used to install LNMP

(1) Create the file lnmp.repo in the /etc/yum.repos.d/ directory (the lnmp directory is under /root, so the path is /root/lnmp)

[root@xserver1 ~]# vim /etc/yum.repos.d/lnmp.repo
[lnmp]
name=lnmp
baseurl=file:///root/lnmp
enabled=1
gpgcheck=0

(2) Verify that the yum source is usable

[root@xserver1 ~]# yum repolist all


2. Install nginx and the PHP environment

[root@xserver1 ~]# yum -y install nginx
[root@xserver1 ~]# yum -y install php* --disablerepo=centos --enablerepo=lnmp    // install the php packages from the lnmp repo only; --disablerepo skips a repo, --enablerepo restricts the install to a repo

3. Configure nginx to support PHP by editing /etc/nginx/conf.d/default.conf

[root@xserver1 ~]# vim /etc/nginx/conf.d/default.conf

(1) Make the index directive recognize PHP pages

    location / {
        root   /usr/share/nginx/html;
        index  index.php index.html index.htm;
    }

(2) Add the following block so nginx passes .php requests to php-fpm

    location ~ \.php$ {
        root           /usr/share/nginx/html;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  /usr/share/nginx/html/$fastcgi_script_name;
    #   fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }


4. Start nginx and php-fpm and enable them at boot

[root@xserver1 ~]# systemctl start nginx && systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@xserver1 ~]# systemctl start php-fpm && systemctl enable php-fpm
Created symlink from /etc/systemd/system/multi-user.target.wants/php-fpm.service to /usr/lib/systemd/system/php-fpm.service.
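
The task also asks for the status of the database, nginx, and php services plus the open ports; both can be collected now (expect nginx on :80, php-fpm on 127.0.0.1:9000, and mysqld on :3306):

[root@xserver1 ~]# systemctl status mariadb nginx php-fpm
[root@xserver1 ~]# netstat -ntpl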

5. Verify that the LNMP environment is working

(1) Create the file index.php under /usr/share/nginx/html

[root@xserver1 ~]# vim /usr/share/nginx/html/index.php
<?php echo phpinfo(); ?>

(2) Open http://192.168.100.11 in a browser; the phpinfo page should appear.

Deploying the WordPress Application (30 points)

On the xserver1 node, deploy the WordPress application on top of the LNMP environment (the WordPress source archive is in /root). After deployment, set the WordPress site title to your own name (for example, if your name is Zhang San, set it to Zhang San's BLOG), then open the WordPress home page. Finally, submit the output of curl ip (where ip is the WordPress home page's IP) as text in the answer box.

1. Configure the WordPress database

(1) Log in to mariadb and create the wordpress database

[root@xserver1 ~]# mysql -uroot -p000000
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 5.5.44-MariaDB-log MariaDB Server

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> 
MariaDB [(none)]> create database wordpress;
Query OK, 1 row affected (0.08 sec)

MariaDB [(none)]> show databases;    // confirm the database was created
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| wordpress          |
+--------------------+
4 rows in set (0.09 sec)

(2) Grant the root user all privileges for both local and remote access (with grant option allows root to grant its own privileges to other users)

MariaDB [(none)]> grant all privileges on *.* to root@localhost identified by '000000' with grant option;
Query OK, 0 rows affected (0.13 sec)

MariaDB [(none)]> grant all privileges on *.* to root@'%' identified by '000000' with grant option;
Query OK, 0 rows affected (0.00 sec)

2. Install unzip and use it to extract the WordPress archive in /root

[root@xserver1 ~]# yum -y install unzip
[root@xserver1 ~]# unzip wordpress-4.7.3-zh_CN.zip

3. Delete everything under /usr/share/nginx/html, then copy everything from the wordpress directory into /usr/share/nginx/html

[root@xserver1 html]# rm -rf /usr/share/nginx/html/*
[root@xserver1 ~]# cp -rf wordpress/* /usr/share/nginx/html/
[root@xserver1 ~]# ls /usr/share/nginx/html/
index.php        wp-admin              wp-content         wp-load.php      wp-signup.php
license.txt      wp-blog-header.php    wp-cron.php        wp-login.php     wp-trackback.php
readme.html      wp-comments-post.php  wp-includes        wp-mail.php      xmlrpc.php
wp-activate.php  wp-config-sample.php  wp-links-opml.php  wp-settings.php

4. Grant full permissions on everything under /usr/share/nginx/html

[root@xserver1 ~]# chmod 777 /usr/share/nginx/html/*
[root@xserver1 ~]# ll /usr/share/nginx/html/
total 184
-rwxrwxrwx.  1 root root   418 Nov 29 17:46 index.php
-rwxrwxrwx.  1 root root 19935 Nov 29 17:46 license.txt
-rwxrwxrwx.  1 root root  6956 Nov 29 17:46 readme.html
-rwxrwxrwx.  1 root root  5447 Nov 29 17:46 wp-activate.php
drwxrwxrwx.  9 root root  4096 Nov 29 17:46 wp-admin
-rwxrwxrwx.  1 root root   364 Nov 29 17:46 wp-blog-header.php
-rwxrwxrwx.  1 root root  1627 Nov 29 17:46 wp-comments-post.php
-rwxrwxrwx.  1 root root  2930 Nov 29 17:46 wp-config-sample.php
drwxrwxrwx.  5 root root    65 Nov 29 17:46 wp-content
-rwxrwxrwx.  1 root root  3286 Nov 29 17:46 wp-cron.php
drwxrwxrwx. 18 root root  8192 Nov 29 17:46 wp-includes
-rwxrwxrwx.  1 root root  2422 Nov 29 17:46 wp-links-opml.php
-rwxrwxrwx.  1 root root  3301 Nov 29 17:46 wp-load.php
-rwxrwxrwx.  1 root root 33939 Nov 29 17:46 wp-login.php
-rwxrwxrwx.  1 root root  8048 Nov 29 17:46 wp-mail.php
-rwxrwxrwx.  1 root root 16250 Nov 29 17:46 wp-settings.php
-rwxrwxrwx.  1 root root 29896 Nov 29 17:46 wp-signup.php
-rwxrwxrwx.  1 root root  4513 Nov 29 17:46 wp-trackback.php
-rwxrwxrwx.  1 root root  3065 Nov 29 17:46 xmlrpc.php

5. Create the PHP database-connection file wp-config.php by copying the template wp-config-sample.php, renaming it to wp-config.php, and editing it

[root@xserver1 html]# cp wp-config-sample.php wp-config.php
[root@xserver1 html]# vim wp-config.php 
// ** MySQL settings - you can get this info from your web host ** //
/** The name of the database for WordPress */
define('DB_NAME', 'wordpress');

/** MySQL database username */
define('DB_USER', 'root');

/** MySQL database password */
define('DB_PASSWORD', '000000');


6. Open http://192.168.100.11 in a browser to reach the WordPress installer and fill in the required information.
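
After the wizard finishes, the result the task asks for can be captured from the command line; the returned HTML should contain the configured site title:

[root@xserver1 ~]# curl 192.168.100.11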

Linux Storage LVM Management (30 points)

On the xserver1 VM, add a 20 GB disk through VMware, partition it with fdisk, and create two 5 GB partitions. Using the two partitions, create a volume group named xcloudvg with a PE size of 16 MB. Submit the output of vgdisplay as text in the answer box.

1. Use VMware to add a 20 GB disk, then reboot xserver1 so it detects the new disk.

2. Partition /dev/sdb with fdisk, creating two 5 GB partitions

[root@xserver1 ~]# fdisk /dev/sdb 
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xaac739d4.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): 
Using default response p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039): +5G
Partition 1 of type Linux and of size 5 GiB is set

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): 
Using default response p
Partition number (2-4, default 2): 
First sector (10487808-41943039, default 10487808): 
Using default value 10487808
Last sector, +sectors or +size{K,M,G} (10487808-41943039, default 41943039): +5G
Partition 2 of type Linux and of size 5 GiB is set

Command (m for help): p

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xaac739d4

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    10487807     5242880   83  Linux
/dev/sdb2        10487808    20973567     5242880   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
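
Before creating physical volumes, it is worth confirming that the kernel sees both new partitions:

[root@xserver1 ~]# lsblk /dev/sdb    // sdb1 and sdb2 should both be listed at 5G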

3. Create the physical volumes

[root@xserver1 ~]# pvcreate /dev/sdb[1-2]
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdb2" successfully created

4. Create the volume group xcloudvg with a PE size of 16 MB (the -s option sets the PE size)

[root@xserver1 ~]# vgcreate xcloudvg -s 16 /dev/sdb[1-2]
  Volume group "xcloudvg" successfully created
[root@xserver1 ~]# vgdisplay 
  --- Volume group ---
  VG Name               centos
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               39.51 GiB
  PE Size               4.00 MiB
  Total PE              10114
  Alloc PE / Size       10103 / 39.46 GiB
  Free  PE / Size       11 / 44.00 MiB
  VG UUID               Izpuld-2eFu-xP0t-Z9Pv-lHAo-L357-DITdew
   
  --- Volume group ---
  VG Name               xcloudvg
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               9.97 GiB
  PE Size               16.00 MiB
  Total PE              638
  Alloc PE / Size       0 / 0   
  Free  PE / Size       638 / 9.97 GiB
  VG UUID               eS7C8C-gZRY-PCWq-m2fL-psNA-ILLe-o2ZHZ3

OpenStack Cinder Management (40 points)

Start the provided openstack all-in-one image in VMware, check the status of all OpenStack services, and troubleshoot any problems yourself. Using the Cinder service, create a volume type named "lvm", then create a 2 GB volume named BlockVloume carrying the "lvm" type and query its details. When done, submit the output of cinder show BlockVloume as text in the answer box.

1. Create a volume type named "lvm" with the cinder service

[root@controller ~]# cinder type-create lvm 
+--------------------------------------+------+-------------+-----------+
|                  ID                  | Name | Description | Is_Public |
+--------------------------------------+------+-------------+-----------+
| 35c116c3-ca5c-4350-8d4e-74dc1886535f | lvm  |      -      |    True   |
+--------------------------------------+------+-------------+-----------+            

2. Create a 2 GB volume named BlockVloume carrying the "lvm" type

[root@controller ~]# cinder create --name BlockVloume --volume-type lvm 2
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2020-12-05T10:37:12.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | 18dcd70a-951a-4814-86f4-fffbf3e39d1d |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |             BlockVloume              |
|     os-vol-host-attr:host      |                 None                 |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   f9ff39ba9daa4e5a8fee1fc50e2d2b34   |
|       replication_status       |               disabled               |
|              size              |                  2                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|           updated_at           |                 None                 |
|            user_id             |   0befa70f767848e39df8224107b71858   |
|          volume_type           |                 lvm                  |
+--------------------------------+--------------------------------------+

3. View the new volume's details with cinder show BlockVloume

[root@controller ~]# cinder show BlockVloume
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                  []                  |
|       availability_zone        |                 nova                 |
|            bootable            |                false                 |
|      consistencygroup_id       |                 None                 |
|           created_at           |      2020-12-05T10:37:12.000000      |
|          description           |                 None                 |
|           encrypted            |                False                 |
|               id               | 18dcd70a-951a-4814-86f4-fffbf3e39d1d |
|            metadata            |                  {}                  |
|        migration_status        |                 None                 |
|          multiattach           |                False                 |
|              name              |             BlockVloume              |
|     os-vol-host-attr:host      |          controller@lvm#LVM          |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   f9ff39ba9daa4e5a8fee1fc50e2d2b34   |
|       replication_status       |               disabled               |
|              size              |                  2                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |              available               |
|           updated_at           |      2020-12-05T10:37:14.000000      |
|            user_id             |   0befa70f767848e39df8224107b71858   |
|          volume_type           |                 lvm                  |
+--------------------------------+--------------------------------------+

OpenStack Glance Management (40 points)

Start the provided openstack all-in-one image in VMware, check the status of all OpenStack services, and troubleshoot any problems yourself. In /root on the xserver1 node there is a cirros-0.3.4-x86_64-disk.img image; upload it with the glance command, naming it mycirros, then submit the output of glance image-show id as text in the answer box.

1. On xserver1, use scp to copy cirros-0.3.4-x86_64-disk.img to the controller node

[root@xserver1 ~]# scp /root/cirros-0.3.4-x86_64-disk.img root@192.168.100.10:/root
The authenticity of host '192.168.100.10 (192.168.100.10)' can't be established.
ECDSA key fingerprint is 22:81:9a:b2:82:ca:1d:d4:43:de:6b:7b:fb:3c:ba:18.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.10' (ECDSA) to the list of known hosts.
root@192.168.100.10's password: 
cirros-0.3.4-x86_64-disk.img                              100%   13MB  12.7MB/s   00:00  
[root@controller ~]# ls
anaconda-ks.cfg  cirros-0.3.4-x86_64-disk.img

2. Upload the image with glance, naming it mycirros

[root@controller ~]# glance image-create --name mycirros \
> --disk-format qcow2 \
> --container-format bare \
> --progress < /root/cirros-0.3.4-x86_64-disk.img
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2020-12-05T11:12:50Z                 |
| disk_format      | qcow2                                |
| id               | d5ac0c42-d18f-40bc-a032-65da13530a39 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | mycirros                             |
| owner            | f9ff39ba9daa4e5a8fee1fc50e2d2b34     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2020-12-05T11:12:51Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+

3. View the image's details

[root@controller ~]# glance image-list
+--------------------------------------+----------+
| ID                                   | Name     |
+--------------------------------------+----------+
| d5ac0c42-d18f-40bc-a032-65da13530a39 | mycirros |
+--------------------------------------+----------+
[root@controller ~]# glance image-show d5ac0c42-d18f-40bc-a032-65da13530a39 
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6     |
| container_format | bare                                 |
| created_at       | 2020-12-05T11:12:50Z                 |
| disk_format      | qcow2                                |
| id               | d5ac0c42-d18f-40bc-a032-65da13530a39 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | mycirros                             |
| owner            | f9ff39ba9daa4e5a8fee1fc50e2d2b34     |
| protected        | False                                |
| size             | 13287936                             |
| status           | active                               |
| tags             | []                                   |
| updated_at       | 2020-12-05T11:12:51Z                 |
| virtual_size     | None                                 |
| visibility       | private                              |
+------------------+--------------------------------------+

OpenStack Neutron Management (40 points)

Start the provided openstack all-in-one image in VMware, check the status of all OpenStack services, and troubleshoot any problems yourself. In the dashboard, create the external network ext-net with subnet ext-subnet, a floating IP range of 192.168.200.100-192.168.200.200, and gateway 192.168.200.1. Create the internal network int-net1 with subnet int-subnet1, an instance IP range of 10.0.0.100-10.0.0.200, and gateway 10.0.0.1. Add a router named ext-router, set its gateway on the ext-net network, and add an internal port on the int-net1 network, connecting int-net1 to the external network. Submit the output of neutron router-show ext-router as text in the answer box.

1. Open http://192.168.100.10/dashboard/ in a browser to reach the OpenStack login page.

2. Create the external network ext-net with subnet ext-subnet, a floating IP range of 192.168.200.100-192.168.200.200, and gateway 192.168.200.1

(1) Under Project - Network - Networks, click "Create Network".

(2) Set the external network's name.

(3) Set the name, network address, and gateway IP of its subnet.

(4) Set the subnet's DHCP allocation pool and click "Create" to finish creating the external network.

3. Mark the network as external

(1) Under Admin - System - Networks, click "Edit Network" on the newly created ext-net.

(2) Check the "External Network" box and click Save to finish binding the external network.

4. Create the internal network int-net1 with subnet int-subnet1, an instance IP range of 10.0.0.100-10.0.0.200, and gateway 10.0.0.1

(1) Under Project - Network - Networks, click "Create Network".

(2) Set the internal network's name.

(3) Set the name, network address, and gateway IP of its subnet.

(4) Set the subnet's DHCP allocation pool and click "Create" to finish creating the internal network.

5. Add a router named ext-router, set its gateway on the ext-net network, and add an internal port on int-net1

(1) Under Project - Network - Routers, click "Create Router".

(2) Set the router name, select the external network, and click "Create Router" to finish.

(3) Click "ext-router" to open the router's settings.

(4) On the "Interfaces" tab, click "Add Interface".

(5) Select the subnet and click Submit.
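
The same topology can also be built from the controller's command line; a sketch using the legacy neutron client (admin credentials assumed to be sourced; provider-network options omitted):

[root@controller ~]# neutron net-create ext-net --router:external True
[root@controller ~]# neutron subnet-create --name ext-subnet --gateway 192.168.200.1 --allocation-pool start=192.168.200.100,end=192.168.200.200 ext-net 192.168.200.0/24
[root@controller ~]# neutron net-create int-net1
[root@controller ~]# neutron subnet-create --name int-subnet1 --gateway 10.0.0.1 --allocation-pool start=10.0.0.100,end=10.0.0.200 int-net1 10.0.0.0/24
[root@controller ~]# neutron router-create ext-router
[root@controller ~]# neutron router-gateway-set ext-router ext-net
[root@controller ~]# neutron router-interface-add ext-router int-subnet1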

6. View the router's details

[root@controller ~]# neutron router-list
+---------------------------+------------+---------------------------+-------------+-------+
| id                        | name       | external_gateway_info     | distributed | ha    |
+---------------------------+------------+---------------------------+-------------+-------+
| a1801d0a-21a2-470e-a3bc-  | ext-router | {"network_id":            | False       | False |
| b07e5fcb93fc              |            | "d6c1c697-9276-462f-      |             |       |
|                           |            | 92a4-d4bf8bfefe35",       |             |       |
|                           |            | "enable_snat": true,      |             |       |
|                           |            | "external_fixed_ips":     |             |       |
|                           |            | [{"subnet_id": "fd50b74e- |             |       |
|                           |            | c406-4871-9120-4ae66c9403 |             |       |
|                           |            | 18", "ip_address":        |             |       |
|                           |            | "192.168.200.101"}]}      |             |       |
+---------------------------+------------+---------------------------+-------------+-------+
[root@controller ~]# neutron router-show a1801d0a-21a2-470e-a3bc-b07e5fcb93fc
+-------------------------+----------------------------------------------------------------+
| Field                   | Value                                                          |
+-------------------------+----------------------------------------------------------------+
| admin_state_up          | True                                                           |
| availability_zone_hints |                                                                |
| availability_zones      | nova                                                           |
| description             |                                                                |
| distributed             | False                                                          |
| external_gateway_info   | {"network_id": "d6c1c697-9276-462f-92a4-d4bf8bfefe35",         |
|                         | "enable_snat": true, "external_fixed_ips": [{"subnet_id":      |
|                         | "fd50b74e-c406-4871-9120-4ae66c940318", "ip_address":          |
|                         | "192.168.200.101"}]}                                           |
| ha                      | False                                                          |
| id                      | a1801d0a-21a2-470e-a3bc-b07e5fcb93fc                           |
| name                    | ext-router                                                     |
| routes                  |                                                                |
| status                  | ACTIVE                                                         |
| tenant_id               | f9ff39ba9daa4e5a8fee1fc50e2d2b34                               |
+-------------------------+----------------------------------------------------------------+

Docker Installation (30 points)

On the xserver1 node, configure a YUM source yourself and install the docker service (the packages needed are in Docker.tar.gz under /root on xserver1). After installation, upload registry_latest.tar to xserver1 and configure it as a private registry. When starting the registry container, map the directory where it stores files internally to /opt/registry on the host, and map internal port 5000 to host port 5000. Submit the command used to start the registry container together with its output, followed by the output of docker info, as text in the answer box.

1. Configure the yum source used to install the docker service

(1) Extract the Docker archive in /root

[root@xserver1 ~]# tar -zxvf Docker.tar.gz 

(2) Create the file docker.repo in the /etc/yum.repos.d/ directory

[root@xserver1 ~]# vim /etc/yum.repos.d/docker.repo
[docker]
name=docker
baseurl=file:///root/Docker
enabled=1
gpgcheck=0

(3) Verify that the yum source is usable

[root@xserver1 ~]# yum repolist all


2. Enable IP forwarding by adding net.ipv4.ip_forward=1 to /etc/sysctl.conf (0 disables, 1 enables)

[root@xserver1 ~]# vim /etc/sysctl.conf 
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
[root@xserver1 ~]# sysctl -p    // reload the kernel parameters so the changes take effect
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

3. Install Docker

(1) Install the dependencies

[root@xserver1 ~]# yum install -y yum-utils device-mapper-persistent-data

(2) Install docker-ce

[root@xserver1 ~]# yum install -y docker-ce* containerd.io 

4. Start the docker service and enable it at boot

[root@xserver1 ~]# systemctl start docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

5. Configure the private registry

(1) Load the registry image

[root@xserver1 ~]# docker load -i images/registry_latest.tar
d9ff549177a9: Loading layer  4.671MB/4.671MB
f641ef7a37ad: Loading layer  1.587MB/1.587MB
d5974ddb5a45: Loading layer  20.08MB/20.08MB
5bbc5831d696: Loading layer  3.584kB/3.584kB
73d61bf022fd: Loading layer  2.048kB/2.048kB
Loaded image: registry:latest
[root@xserver1 ~]# docker images    // list the loaded images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
registry            latest              f32a97de94e1        21 months ago       25.8MB

(2) Start the registry container, mapping the container's internal storage directory to /opt/registry on the host and internal port 5000 to host port 5000

[root@xserver1 ~]# docker run -d -v /opt/registry:/var/lib/registry -p 5000:5000 --restart=always --name=registry registry:latest
dbd57b5b9b51df5038ee2a8a626a4bec5fd7080bf22daefee3a17205448125f4

Option notes:
-d: run the container in the background
-v: mount the host directory /opt/registry at /var/lib/registry inside the container
-p: port mapping; host port 5000 maps to container port 5000
--restart: always restart the container when it exits
--name: set the container name

[root@xserver1 ~]# docker ps    // list the running containers
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
dbd57b5b9b51        registry:latest     "/entrypoint.sh /etc…"   14 minutes ago      Up 13 minutes       0.0.0.0:5000->5000/tcp   registry

(3) Enter http://192.168.100.11:5000/v2/ in a browser's address bar; a pair of braces on the page means the registry is running normally

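The same check works from the terminal; a freshly started registry answers /v2/ with an empty JSON object:

[root@xserver1 ~]# curl http://192.168.100.11:5000/v2/
{}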

(4) View Docker's system information

[root@xserver1 ~]# docker info
Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 18.09.6
Storage Driver: devicemapper
 Pool Name: docker-253:0-67256878-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop1
 Metadata file: /dev/loop2
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 59.77MB
 Data Space Total: 107.4GB
 Data Space Available: 27.35GB
 Metadata Space Used: 688.1kB
 Metadata Space Total: 2.147GB
 Metadata Space Available: 2.147GB
 Thin Pool Minimum Free Space: 10.74GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2015-10-14)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-327.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.688GiB
Name: xserver1
ID: KL43:G572:KLEM:DY4I:3VMY:NG3I:PGHP:OC34:TQPR:KRLZ:ZVFY:FHHZ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
         Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.

Docker Operations (30 points)

On the xserver1 node, upload nginx_latest.tar, tag the image, and push it to the private registry. On the xserver2 node, install the docker service yourself and configure the node to use xserver1's private registry; once configured, pull the nginx image on xserver2. Finally, submit the output of docker images on xserver2 as text in the answer box.

1. Load the nginx image

[root@xserver1 ~]# docker load -i images/nginx_latest.tar

2. Configure the private registry

(1) Edit the /etc/docker/daemon.json configuration file

[root@xserver1 ~]# vim /etc/docker/daemon.json
{
 "insecure-registries":["192.168.100.11:5000"]
}

(2) Restart the docker service

[root@xserver1 ~]# systemctl restart docker

3. Push the image to the private registry

(1) Tag the local nginx:latest image with docker tag so it points at the private registry

[root@xserver1 ~]# docker tag nginx:latest 192.168.100.11:5000/nginx:latest

(2) Push the tagged image with docker push

[root@xserver1 ~]# docker push 192.168.100.11:5000/nginx
The push refers to repository [192.168.100.11:5000/nginx]
a89b8f05da3a: Pushed 
6eaad811af02: Pushed 
b67d19e65ef6: Pushed 
latest: digest: sha256:f56b43e9913cef097f246d65119df4eda1d61670f7f2ab720831a01f66f6ff9c size: 948

(3) List the uploaded images

[root@xserver1 ~]# curl -L http://192.168.100.11:5000/v2/_catalog
{"repositories":["nginx"]}

4. Install and configure the docker service on the xserver2 node

(1) On xserver1, copy the packages in /root/Docker into the FTP shared directory /opt

[root@xserver1 ~]# cp -rf /root/Docker /opt/

(2) Configure the (FTP) yum source on xserver2 for installing docker

[root@xserver2 ~]# vim /etc/yum.repos.d/ftp.repo 
[centos]
name=centos
baseurl=ftp://xserver1/centos
enabled=1
gpgcheck=0

[docker]
name=centos
baseurl=ftp://xserver1/Docker
enabled=1
gpgcheck=0

(3) Verify that the yum source is usable

[root@xserver2 ~]# yum repolist all


(4) Install the dependencies, then install docker-ce

[root@xserver2 ~]# yum install -y yum-utils device-mapper-persistent-data
[root@xserver2 ~]# yum install -y docker-ce* containerd.io

(5) Start the docker service and enable it at boot

[root@xserver2 ~]# systemctl start docker && systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

5. Configure the xserver2 node to use xserver1's private registry

(1) Edit the /etc/docker/daemon.json configuration file

[root@xserver2 ~]# vim /etc/docker/daemon.json
{
 "insecure-registries":["192.168.100.11:5000"]
}

(2) Restart the docker service

[root@xserver2 ~]# systemctl restart docker

6. Pull the nginx:latest image on the xserver2 node

[root@xserver2 ~]# docker pull 192.168.100.11:5000/nginx:latest
latest: Pulling from nginx
8d691f585fa8: Pull complete 
5b07f4e08ad0: Pull complete 
abc291867bca: Pull complete 
Digest: sha256:f56b43e9913cef097f246d65119df4eda1d61670f7f2ab720831a01f66f6ff9c
Status: Downloaded newer image for 192.168.100.11:5000/nginx:latest
[root@xserver2 ~]# docker images
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
192.168.100.11:5000/nginx   latest              540a289bab6c        13 months ago       126MB

Deploying a Swarm Cluster (60 points)

Using the xserver1 and xserver2 nodes, configure the network and install docker-ce yourself. Deploy a Swarm cluster and install the Portainer graphical management tool; when deployment is complete, open ip:9000 in a browser and log in to the Swarm console. Submit the output of curl swarm ip:9000 as text in the answer box.
1. Configure time synchronization on the internal network

(1) Install the chrony service on both xserver1 and xserver2

[root@xserver1 ~]# yum -y install chrony
[root@xserver2 ~]# yum -y install chrony

(2) On xserver1, edit the time-sync configuration file /etc/chrony.conf: comment out the default time sources and make this host the source

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# comment out the following four lines
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# add the following three lines
server xserver1 iburst    # use this host itself as the time source
local stratum 10    # serve the local clock at stratum 10 so clients can sync even without upstream sources
allow all    # allow all clients to connect

(3) Restart the chronyd service, enable it at boot, and turn on time synchronization

[root@xserver1 ~]# systemctl restart chronyd && systemctl enable chronyd
[root@xserver1 ~]# timedatectl set-ntp true

(4) On xserver2, edit /etc/chrony.conf: comment out the default time sources with sed and set xserver1 as the time source, then restart chronyd and enable it at boot

[root@xserver2 ~]# sed -i 's/^server/#&/' /etc/chrony.conf
[root@xserver2 ~]# echo 'server xserver1 iburst' >> /etc/chrony.conf
[root@xserver2 ~]# systemctl restart chronyd && systemctl enable chronyd

(5) Run chronyc sources on both xserver1 and xserver2 and check for a line beginning with ^*; if one exists, time synchronization succeeded

[root@xserver1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* xserver1                     10   8   377   32m   +181ns[  -19us] +/-   31us
[root@xserver2 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* xserver1                     11   6   377    13   -201us[ -288us] +/-  457ms

2. Deploy the swarm cluster

(1) Enable the Docker API on both xserver1 and xserver2 (only the xserver1 steps are shown; xserver2 is identical)

[root@xserver1 ~]# vim /lib/systemd/system/docker.service 
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker

#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock    # comment out this line
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock    # add this line to expose the management port over TCP

ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always


[root@xserver1 ~]# systemctl daemon-reload && systemctl restart docker    // reload the docker unit file and restart the docker service

(2) Create the swarm on xserver1 by running docker swarm init --advertise-addr <manager-ip>

[root@xserver1 ~]# docker swarm init --advertise-addr 192.168.100.11
Swarm initialized: current node (4ju3c4a8awbm1sir9qrtkzx0t) is now a manager.    // the swarm was created successfully

To add a worker to this swarm, run the following command:    // the command a worker node must run to join

    docker swarm join --token SWMTKN-1-3gem33w1q1n3jm1kicibz6oi5m803ne0a53i2p0jdlljvgh9el-3c2emtce8luqzfbzt4fqeldl0 192.168.100.11:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.    // the command for adding a manager node

(3) Join xserver2 to the cluster: copy the worker-join command printed in the previous step and run it on xserver2 (if it was not recorded, run docker swarm join-token worker on xserver1 to print it again)

[root@xserver2 ~]# docker swarm join --token SWMTKN-1-3gem33w1q1n3jm1kicibz6oi5m803ne0a53i2p0jdlljvgh9el-3c2emtce8luqzfbzt4fqeldl0 192.168.100.11:2377
This node joined a swarm as a worker.

(4) Check the cluster state on the manager node (xserver1)

[root@xserver1 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
4ju3c4a8awbm1sir9qrtkzx0t *   xserver1            Ready               Active              Leader              18.09.6
pt1hgz50086hvqo5plj58zyx5     xserver2            Ready               Active                                  18.09.6

3. Install Portainer on xserver1

(1) Create a volume to store Portainer's data

[root@xserver1 ~]# docker volume create portainer_data
portainer_data

(2) Create the portainer service (option notes follow the command)

docker service create \
--name portainer \
--publish 9000:9000 \
--replicas=1 \
--constraint 'node.role == manager' \
--mount type=bind,src=//var/run/docker.sock,dst=/var/run/docker.sock \
--mount type=volume,src=portainer_data,dst=/data \
portainer/portainer \
-H unix:///var/run/docker.sock
Option notes:
--publish 9000:9000: map host port 9000 to container port 9000
--mount type=bind,src=//var/run/docker.sock,dst=/var/run/docker.sock: mount the Unix socket that the Docker daemon listens on into the container
--mount type=volume,src=portainer_data,dst=/data: mount the portainer_data volume at /data inside the container
--constraint 'node.role == manager': pin the service's container to a manager node
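
Once the service reports 1/1 replicas, the result the task asks for can be captured with curl; Portainer serves its login-page HTML on port 9000:

[root@xserver1 ~]# docker service ls    // wait until the portainer service shows REPLICAS 1/1
[root@xserver1 ~]# curl 192.168.100.11:9000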