Docker

Using Docker

The hello-world workflow

  • How it works (the original diagram was not recovered)


Under the hood

How does Docker work?

Docker is a client-server system. The Docker daemon runs on the host, and clients talk to it over a socket.

The Docker server receives commands from the Docker client and executes them.


Why is Docker faster than a virtual machine?

  1. Docker has fewer abstraction layers than a virtual machine.
  2. Docker uses the host's kernel, while a VM needs a full guest OS.

So when a new container is created, Docker does not need to load an operating system kernel the way a virtual machine does, and it skips the boot process entirely. A VM loads a guest OS, which takes minutes; Docker reuses the host's kernel and starts in seconds.


Common Docker commands

Help commands

docker version 		# Docker version information
docker info			# System-wide information, including the number of images and containers
docker COMMAND --help # help for any command

Reference documentation: https://docs.docker.com/reference/

Image commands

  • docker images: list all local images
[root@Tancy ~]# docker images
REPOSITORY    TAG       IMAGE ID       CREATED        SIZE
hello-world   latest    feb5d9fea6a5   2 months ago   13.3kB

#REPOSITORY		the image's repository
#TAG			the image's tag
#IMAGE ID		the image's ID
#CREATED 		when the image was created
#SIZE			the image's size
  • docker search: search for an image
[root@Tancy ~]# docker search mysql
NAME                              DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
mysql                             MySQL is a widely used, open-source relation…   11728     [OK]
mariadb                           MariaDB Server is a high performing open sou…   4472      [OK]
mysql/mysql-server                Optimized MySQL Server Docker images. Create…   872                  [OK]
# Options
  --filter=STARS=5000 # only list images with at least 5000 stars
  [root@Tancy ~]# docker search mysql --filter=STARS=5000
NAME      DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
mysql     MySQL is a widely used, open-source relation…   11728     [OK]

  
  • docker pull: download an image
[root@Tancy ~]# docker pull mysql
Using default tag: latest # without a tag, the latest version is pulled by default
latest: Pulling from library/mysql
a10c77af2613: Pull complete	# layered download; the union file system is the core of Docker images
b76a7eb51ffd: Pull complete
258223f927e4: Pull complete
2d2c75386df9: Pull complete
63e92e4046c9: Pull complete
f5845c731544: Pull complete
bd0401123a9b: Pull complete
3ef07ec35f1a: Pull complete
c93a31315089: Pull complete
3349ed800d44: Pull complete
6d01857ca4c1: Pull complete
4cc13890eda8: Pull complete
Digest: sha256:aeecae58035f3868bf4f00e5fc623630d8b438db9d05f4d8c6538deb14d4c31b
Status: Downloaded newer image for mysql:latest
docker.io/library/mysql:latest

# These two commands are equivalent
docker pull mysql
docker pull docker.io/library/mysql:latest

# Pull a specific version
[root@Tancy ~]# docker pull mysql:5.7
5.7: Pulling from library/mysql
a10c77af2613: Already exists
b76a7eb51ffd: Already exists
258223f927e4: Already exists
2d2c75386df9: Already exists
63e92e4046c9: Already exists
f5845c731544: Already exists
bd0401123a9b: Already exists
2724b2da64fd: Pull complete
d10a7e9e325c: Pull complete
1c5fd9c3683d: Pull complete
2e35f83a12e9: Pull complete
Digest: sha256:7a3a7b7a29e6fbff433c339fc52245435fa2c308586481f2f92ab1df239d6a29
Status: Downloaded newer image for mysql:5.7
docker.io/library/mysql:5.7


  • docker rmi: remove images
[root@Tancy ~]# docker rmi -f IMAGE_ID	# remove an image
[root@Tancy ~]# docker rmi -f IMAGE_ID IMAGE_ID IMAGE_ID	# remove multiple images
[root@Tancy ~]# docker rmi -f $(docker images -aq)   # remove all images


Container commands

Containers can only be created from images. Pull a CentOS image to use as a Linux test environment:

docker pull centos

  • Create and start a container
docker run [options] IMAGE

# Options
 --name="Name"	 # container name, used to tell containers apart
 -d 			 # run in the background
 -it			 # run interactively and enter the container
 -p 			 # publish a container port
 	-p ip:hostPort:containerPort
 	-p hostPort:containerPort
 	-p containerPort
 -P				 # publish all exposed ports to random host ports
 
 
 # Test: start a container and enter it
[root@Tancy ~]# docker run -it centos /bin/bash
[root@793586e382ab /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
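Port publishing from the option list above can be sketched as follows. This is an illustrative sample, not from the original notes: the nginx image, container name, and host port are assumptions, and a running Docker daemon is required.

```shell
# Publish container port 80 on host port 3344.
docker run -d --name nginx01 -p 3344:80 nginx

# The service inside the container is now reachable through the host port.
curl localhost:3344
```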

  • Exit a container
exit 	# stop the container and exit
Ctrl + P, Q 	# exit without stopping the container

  • docker ps: list running containers
docker ps	 # list running containers
	-a 		 # list all containers, including ones that have exited
	-n=? 	 # show the ? most recently created containers
	-q 		 # show container IDs only
  • Remove containers
docker rm CONTAINER_ID 			# remove a container; a running container cannot be removed unless forced with rm -f
docker rm $(docker ps -aq) #remove all containers
  • Start and stop containers
docker start CONTAINER_ID		# start a container
docker restart CONTAINER_ID		# restart a container
docker stop CONTAINER_ID		# stop a running container
docker kill CONTAINER_ID 		# force-stop a container

Other common commands

  • Run in the background
# Command
docker run -d xxx

# Problem: docker ps shows the container is not running
# A container running in the background must have a foreground process; if Docker finds no running application, it stops the container automatically
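A minimal sketch of the behavior described above. The container names and the loop are illustrative, and a running Docker daemon is required:

```shell
# A detached bash with no TTY exits immediately, so the container stops.
docker run -d --name idle centos
docker ps --filter name=idle        # nothing listed: the container already exited

# A long-running foreground process keeps the container up.
docker run -d --name keepalive centos \
  /bin/sh -c "while true; do echo heartbeat; sleep 2; done"
docker ps --filter name=keepalive   # listed as Up
```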
  • View logs
docker logs -f -t --tail NUMBER CONTAINER_ID

# Show logs
 -tf 			# follow the log with timestamps
 --tail number 	# number of log lines to show
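A concrete session, assuming a container that keeps writing to stdout (the container name and the loop are illustrative):

```shell
# Start a container that logs a line every 2 seconds.
docker run -d --name logger centos \
  /bin/sh -c "while true; do echo hello; sleep 2; done"

# Follow its log with timestamps, starting from the last 10 lines.
docker logs -tf --tail 10 logger
```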

  • View process information
# Command: docker top CONTAINER_ID

# Test
[root@Tancy ~]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS         PORTS     NAMES
f3a9b9dd52ff   centos    "/bin/bash -c 'while…"   5 minutes ago   Up 3 seconds             naughty_dirac
[root@Tancy ~]# docker top f3a9b9dd52ff
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                31602               31564               0                   14:14               ?                   00:00:00            /bin/bash -c while true;do echo 111111;sleep 2;done
root                31672               31602               0                   14:14               ?                   00:00:00            /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 2

  • Inspect a container's metadata
# Command: docker inspect CONTAINER_ID

# Test
[root@Tancy ~]# docker inspect f3a9b9dd52ff
[
    {
        "Id": "f3a9b9dd52ff356a8059cb1a1f27108d945bb8dc44354ff98c283a860ffc9f72",
        "Created": "2021-11-26T06:09:04.857689738Z",
        "Path": "/bin/bash",
        "Args": [
            "-c",
            "while true;do echo 111111;sleep 2;done"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 31602,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2021-11-26T06:14:08.170864645Z",
            "FinishedAt": "2021-11-26T06:13:12.060065581Z"
        },
        "Image": "sha256:5d0da3dc976460b72c77d94c8a1ad043720b0416bfc16c52c45d4847e53fadb6",
        "ResolvConfPath": "/var/lib/docker/containers/f3a9b9dd52ff356a8059cb1a1f27108d945bb8dc44354ff98c283a860ffc9f72/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/f3a9b9dd52ff356a8059cb1a1f27108d945bb8dc44354ff98c283a860ffc9f72/hostname",
        "HostsPath": "/var/lib/docker/containers/f3a9b9dd52ff356a8059cb1a1f27108d945bb8dc44354ff98c283a860ffc9f72/hosts",
        "LogPath": "/var/lib/docker/containers/f3a9b9dd52ff356a8059cb1a1f27108d945bb8dc44354ff98c283a860ffc9f72/f3a9b9dd52ff356a8059cb1a1f27108d945bb8dc44354ff98c283a860ffc9f72-json.log",
        "Name": "/naughty_dirac",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {},
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "host",
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/01df0ae1a3f72388585c206735d246e2461859da62ea85646488ed229509d29a-init/diff:/var/lib/docker/overlay2/eae954e1b871eb0e8a70a2603aeaebc0a801100345027100b884c533a3e2b07a/diff",
                "MergedDir": "/var/lib/docker/overlay2/01df0ae1a3f72388585c206735d246e2461859da62ea85646488ed229509d29a/merged",
                "UpperDir": "/var/lib/docker/overlay2/01df0ae1a3f72388585c206735d246e2461859da62ea85646488ed229509d29a/diff",
                "WorkDir": "/var/lib/docker/overlay2/01df0ae1a3f72388585c206735d246e2461859da62ea85646488ed229509d29a/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [],
        "Config": {
            "Hostname": "f3a9b9dd52ff",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/bash",
                "-c",
                "while true;do echo 111111;sleep 2;done"
            ],
            "Image": "centos",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {
                "org.label-schema.build-date": "20210915",
                "org.label-schema.license": "GPLv2",
                "org.label-schema.name": "CentOS Base Image",
                "org.label-schema.schema-version": "1.0",
                "org.label-schema.vendor": "CentOS"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "4a54f269b8890c2b6c58983813795bf44a8dffee2943da73fa317011fcc408ba",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/4a54f269b889",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "2ae77c4f3aa34ea906833266d37f1cd6cbfbe99b2def986ad7ec8a5346f9fbb5",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "8039e9e6540d3a2edc4fa69dab84f6481b66f889856092dd75e2510644209d62",
                    "EndpointID": "2ae77c4f3aa34ea906833266d37f1cd6cbfbe99b2def986ad7ec8a5346f9fbb5",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]

  • Enter a running container
# Containers are usually started in the background; to change their configuration you need a way to get inside them

# Command
docker exec -it CONTAINER_ID SHELL_PATH   # e.g. /bin/bash
# Test
[root@Tancy ~]# docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS          PORTS     NAMES
f3a9b9dd52ff   centos    "/bin/bash -c 'while…"   24 minutes ago   Up 19 minutes             naughty_dirac
[root@Tancy ~]# docker exec -it f3a9b9dd52ff /bin/bash
[root@f3a9b9dd52ff /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@f3a9b9dd52ff /]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 06:14 ?        00:00:00 /bin/bash -c while true;do echo 111111;sleep 2;done
root       608     0  0 06:34 pts/0    00:00:00 /bin/bash
root       639     1  0 06:34 ?        00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 2
root       640   608  0 06:34 pts/0    00:00:00 ps -ef

# Method 2
docker attach CONTAINER_ID


# docker exec   # opens a new terminal inside the container where you can run commands
# docker attach # attaches to the container's running terminal; no new process is started
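A quick side-by-side of the two commands (the container name is illustrative):

```shell
# exec starts a NEW bash process inside the container;
# typing exit ends only that bash, and the container keeps running.
docker exec -it mycontainer /bin/bash

# attach connects to the container's EXISTING main process;
# typing exit here stops the container (detach with Ctrl+P,Q instead).
docker attach mycontainer
```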
  • Copy files from a container to the host
# Command: docker cp CONTAINER_ID:container_path host_path

[root@2a73ef914de2 /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@2a73ef914de2 /]# cd /home
[root@2a73ef914de2 home]# touch test.java
[root@2a73ef914de2 home]# ls
test.java
[root@2a73ef914de2 home]# exit
exit
[root@Tancy ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@Tancy ~]# docker ps -a
CONTAINER ID   IMAGE    COMMAND     CREATED         STATUS                          PORTS     NAMES
2a73ef914de2   centos    "/bin/bash"   45 seconds ago   Exited (0) 10 seconds ago         
[root@Tancy ~]# docker cp 2a73ef914de2:/home/test.java /home
[root@Tancy ~]# cd /home
[root@Tancy home]# ls
ch  chongyang  f2  Tancy  test.java  www


Container data volumes

What is a data volume?

Docker's philosophy

Package the application and its environment into one image.

What about data? If the data lives only inside the container, it is lost when the container is deleted.

Requirement: a way for containers to share data and persist it on the host.

That is volume technology: directory mounting, which maps a directory inside the container onto the Linux host.

Using data volumes

-v HOST_DIR:CONTAINER_PATH
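A minimal sketch of the -v option above; the host path is illustrative:

```shell
# Mount the host directory /home/ceshi onto /home inside the container.
docker run -it -v /home/ceshi:/home centos /bin/bash

# The mount is synchronized both ways: a file created under /home in the
# container appears in /home/ceshi on the host, and it survives even
# after the container is deleted.
```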

Dockerfile

Dockerfile instructions

FROM			# base image; everything is built starting from here
MAINTAINER		# who wrote the image: name + email
RUN				# commands to run while building the image
ADD				# add content, e.g. a tomcat tarball; archives are extracted automatically
WORKDIR			# the image's working directory
VOLUME			# directories to mount
EXPOSE			# ports to expose
CMD				# command run when the container starts; only the last CMD takes effect, and it can be replaced at run time
ENTRYPOINT		# command run when the container starts; run-time arguments are appended to it
ONBUILD 		# trigger instruction, executed when another Dockerfile builds FROM this image
COPY 			# copy files into the image
ENV				# set environment variables at build time


Test


Build your own CentOS image

  • Write a dockerfile

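The screenshot of the dockerfile did not survive. A typical file for this exercise looks like the sketch below; its exact contents are an assumption, not the original:

```dockerfile
FROM centos
MAINTAINER tancy<1178486292@qq.com>

ENV MYPATH /usr/local
WORKDIR $MYPATH

# Tools missing from the base image
RUN yum -y install vim
RUN yum -y install net-tools

EXPOSE 80

CMD echo $MYPATH
CMD echo "----end----"
CMD /bin/bash
```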

  • Build the image

docker build -f testdockerfile -t mycentos:0.1 .

  • Test


The difference between CMD and ENTRYPOINT

  • CMD test
FROM centos 

CMD ["ls","-a"]


Appending the -l argument causes an error: the argument replaces CMD entirely, and -l by itself is not a command.


Running with the full command ls -al works, because it replaces CMD with a complete command.


  • ENTRYPOINT test
FROM centos 

ENTRYPOINT ["ls","-a"]


  • Appending the -l argument works: it is appended to ENTRYPOINT, so the container runs ls -a -l


Summary: with CMD, you cannot append arguments after docker run; you can only supply a complete command that replaces the CMD content. With ENTRYPOINT, arguments passed to docker run are appended to the configured command.

Hands-on: a tomcat image

  • Prepare the tomcat and JDK tarballs


  • Write the Dockerfile
FROM centos

MAINTAINER tancy<1178486292@qq.com>

COPY readme.text /usr/local/readme.text

ADD apache-tomcat-10.0.13.tar.gz /usr/local

ADD OpenJDK8U-jdk_x64_linux_hotspot_8u312b07.tar.gz /usr/local

RUN yum -y install vim

ENV MYPATH /usr/local

WORKDIR $MYPATH

ENV JAVA_HOME /usr/local/jdk8u312-b07
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

ENV CATALINA_HOME /usr/local/apache-tomcat-10.0.13
ENV CATALINA_BASE /usr/local/apache-tomcat-10.0.13
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin

EXPOSE 8080

CMD /usr/local/apache-tomcat-10.0.13/bin/startup.sh && tail -F /usr/local/apache-tomcat-10.0.13/logs/catalina.out

  • Build the image (the file is named Dockerfile, so no filename needs to be specified; for any other name, pass it with -f)
docker build -t diytomcat .
  • 启动镜像
docker run -d --name diytomcat11 -p 30005:8080 -v /home/Tancy/test/:/usr/local/apache-tomcat-10.0.13/webapps/test -v /home/Tancy/diytomcatlogs/:/usr/local/apache-tomcat-10.0.13/logs/ diytomcat
  • Access test


  • Deploy a project

Create web.xml and index.jsp in the mounted directory.

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.4" 
    xmlns="http://java.sun.com/xml/ns/j2ee" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee 
        http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
</web-app>
 <%@ page language="java" contentType="text/html; charset=utf-8"
        pageEncoding="utf-8"%>
    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <title>my first jsp page</title>
    </head>
    <body>
        Hello World
        <% System.out.println("---welcome---");%>
    </body>
    </html>



Publish your own image

DockerHub

1. Address: https://hub.docker.com/ — use your own account

2. Log in on the server


3. Push the image; you may first need to give it a tag


Alibaba Cloud container registry

After logging in and creating a repository, a detailed walkthrough is provided.

Docker networking

Understanding docker0


Three networks

  • How Docker handles container network access
[root@Tancy home]# docker run -d -P --name tomcat1 tomcat

Running ip addr again shows one more network interface.


How it works: every time a Docker container starts, Docker assigns it an IP address. Once Docker is installed, the host has a docker0 interface operating in bridge mode, implemented with veth-pair technology.

# Each started container brings its own interfaces, and they come in pairs
# A veth-pair is a pair of virtual device interfaces; one end plugs into the protocol stack, and the two ends are connected to each other
# Thanks to this property, a veth-pair acts as a bridge connecting virtual network devices
# OpenStack, links between Docker containers, and OVS connections all use veth-pair technology


Summary

Docker uses a Linux bridge; on the host, docker0 is the bridge for Docker containers.


As soon as a container is deleted, its veth pair disappears with it.

  • --link enables connecting by container name


What --link really does is add a line such as 172.18.0.3 tomcat02 312857784cd4 to the container's hosts file. Using --link is no longer recommended.

Custom networks: don't use docker0!

The problem with docker0: it does not support access by container name.

Custom networks

  • List all networks


Network modes

bridge: bridged network (the default; networks you create also use bridge mode)

none: no network configured

host: share the host's network stack

container: join another container's network (rarely used; very limited)

  • Create a custom network
# Create a custom network
# --driver bridge
# --subnet 192.168.0.0/16
# --gateway 192.168.0.1
[root@Tancy home]# docker network create --driver bridge --subnet 192.168.0.0/16 --gateway 192.168.0.1 mynet
32a2df530ac337b26bc2245c335960f0cc98d590e798753f1bc6aba4c7dbc804

[root@Tancy home]# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
fb33420c0dda   bridge    bridge    local
921b7be12c76   host      host      local
32a2df530ac3   mynet     bridge    local
d6ed35780548   none      null      local
# Inspect the custom network's configuration
[root@Tancy ~]# docker inspect 32a2df530ac3
[
    {
        "Name": "mynet",
        "Id": "32a2df530ac337b26bc2245c335960f0cc98d590e798753f1bc6aba4c7dbc804",
        "Created": "2021-11-29T17:51:26.580291252+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
# Start containers on the custom network
[root@Tancy ~]# docker run -d -P --name tomcat1 --net mynet tomcat
84a35b9ffdd1730e0d7500fd671bdb0297a647d73d028a257f2e4db4253ca476
[root@Tancy ~]# docker run -d -P --name tomcat2 --net mynet tomcat
430bf9132db548fa5418557e8b5015b8c8041614e591c38b81245ee13267a9ee
[root@Tancy ~]# docker inspect 32a2df530ac3
[
    {
        "Name": "mynet",
        "Id": "32a2df530ac337b26bc2245c335960f0cc98d590e798753f1bc6aba4c7dbc804",
        "Created": "2021-11-29T17:51:26.580291252+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "430bf9132db548fa5418557e8b5015b8c8041614e591c38b81245ee13267a9ee": {
                "Name": "tomcat2",
                "EndpointID": "d5872fd2bf96b13297854d6a79e09991861da47e22b4c0abe3308ffcce98805b",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            },
            "84a35b9ffdd1730e0d7500fd671bdb0297a647d73d028a257f2e4db4253ca476": {
                "Name": "tomcat1",
                "EndpointID": "d4a7ff555eef4fa0706df9c3bcf5a65084cf32b3e757946f583a302725b1a627",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Connecting a docker0 container to a custom network

# docker network connect [OPTIONS] NETWORK CONTAINER


# After running the command, the container appears under the custom network's configuration
[
    {
        "Name": "mynet",
        "Id": "32a2df530ac337b26bc2245c335960f0cc98d590e798753f1bc6aba4c7dbc804",
        "Created": "2021-11-29T17:51:26.580291252+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.0.0/16",
                    "Gateway": "192.168.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "430bf9132db548fa5418557e8b5015b8c8041614e591c38b81245ee13267a9ee": {
                "Name": "tomcat2",
                "EndpointID": "d5872fd2bf96b13297854d6a79e09991861da47e22b4c0abe3308ffcce98805b",
                "MacAddress": "02:42:c0:a8:00:03",
                "IPv4Address": "192.168.0.3/16",
                "IPv6Address": ""
            },
            "5db0c25bb6b354e1d37e9c6fe3c015ffaac9d12144c228049fc9b5e082a3a2fd": {
                "Name": "detomcat1",
                "EndpointID": "7647fa053ea55cd1e55ad8c1cbe36241c19590b5a6ecc9963392971282e36667",
                "MacAddress": "02:42:c0:a8:00:04",
                "IPv4Address": "192.168.0.4/16",
                "IPv6Address": ""
            },
            "84a35b9ffdd1730e0d7500fd671bdb0297a647d73d028a257f2e4db4253ca476": {
                "Name": "tomcat1",
                "EndpointID": "d4a7ff555eef4fa0706df9c3bcf5a65084cf32b3e757946f583a302725b1a627",
                "MacAddress": "02:42:c0:a8:00:02",
                "IPv4Address": "192.168.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
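The JSON above shows detomcat1 joined mynet. The commands that produce that state would look like this sketch (assuming detomcat1 was originally started on the default bridge):

```shell
# detomcat1 was started on the default docker0 bridge:
docker run -d -P --name detomcat1 tomcat

# Attach it to mynet as well; it now holds one IP on each network.
docker network connect mynet detomcat1

# Containers on mynet can now reach it by name.
docker exec -it tomcat1 ping detomcat1
```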

Hands-on: a Redis cluster

  • Create the config files
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
done
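The docker run commands below join a network named redis on subnet 172.38.0.0/16. The notes skip its creation; presumably it was created first with something like:

```shell
# Create the bridge network that the redis-N containers join below.
docker network create redis --subnet 172.38.0.0/16
```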


  • Run each Redis node
#1
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
-v /mydata/redis/node-1/data:/data \
-v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

#2
docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
-v /mydata/redis/node-2/data:/data \
-v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

#3
docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
-v /mydata/redis/node-3/data:/data \
-v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

#4
docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
-v /mydata/redis/node-4/data:/data \
-v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

#5
docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
-v /mydata/redis/node-5/data:/data \
-v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

#6
docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
-v /mydata/redis/node-6/data:/data \
-v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf


# Create the cluster
# redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1

/data # redis-cli --cluster create 172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.38.0.15:6379 to 172.38.0.11:6379
Adding replica 172.38.0.16:6379 to 172.38.0.12:6379
Adding replica 172.38.0.14:6379 to 172.38.0.13:6379
M: d37cc128f1509ad95391c7c81e68f167acf9aa84 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
M: 840ae7a4e79faf7e8e42e0ac3e8f42094b1a5cfe 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: 5ba19d4bfc77f5c6e4b82d224a0a441f8c8148a5 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: 6a67a7b09d47229a5cdfd9776c0f9c645014da70 172.38.0.14:6379
   replicates 5ba19d4bfc77f5c6e4b82d224a0a441f8c8148a5
S: d88e91db4d26f1490613dd63b6ed7366deed630b 172.38.0.15:6379
   replicates d37cc128f1509ad95391c7c81e68f167acf9aa84
S: ea6e62073dc3bcc131eca62c59276ae0a9817e99 172.38.0.16:6379
   replicates 840ae7a4e79faf7e8e42e0ac3e8f42094b1a5cfe
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node 172.38.0.11:6379)
M: d37cc128f1509ad95391c7c81e68f167acf9aa84 172.38.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: d88e91db4d26f1490613dd63b6ed7366deed630b 172.38.0.15:6379
   slots: (0 slots) slave
   replicates d37cc128f1509ad95391c7c81e68f167acf9aa84
M: 5ba19d4bfc77f5c6e4b82d224a0a441f8c8148a5 172.38.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 840ae7a4e79faf7e8e42e0ac3e8f42094b1a5cfe 172.38.0.12:6379
   slots:[5461-10922] (5462 slots) master
S: 6a67a7b09d47229a5cdfd9776c0f9c645014da70 172.38.0.14:6379
   slots: (0 slots) slave
   replicates 5ba19d4bfc77f5c6e4b82d224a0a441f8c8148a5
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

# Connect to the cluster and check cluster info

/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:5
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_ping_sent:87
cluster_stats_messages_pong_sent:114
cluster_stats_messages_sent:201
cluster_stats_messages_ping_received:109
cluster_stats_messages_pong_received:87
cluster_stats_messages_meet_received:5

# Test
127.0.0.1:6379> cluster nodes
d88e91db4d26f1490613dd63b6ed7366deed630b 172.38.0.15:6379@16379 slave d37cc128f1509ad95391c7c81e68f167acf9aa84 0 1638242180359 5 connected
5ba19d4bfc77f5c6e4b82d224a0a441f8c8148a5 172.38.0.13:6379@16379 master - 0 1638242181000 3 connected 10923-16383
840ae7a4e79faf7e8e42e0ac3e8f42094b1a5cfe 172.38.0.12:6379@16379 master - 0 1638242181362 2 connected 5461-10922
6a67a7b09d47229a5cdfd9776c0f9c645014da70 172.38.0.14:6379@16379 slave 5ba19d4bfc77f5c6e4b82d224a0a441f8c8148a5 0 1638242180000 4 connected
d37cc128f1509ad95391c7c81e68f167acf9aa84 172.38.0.11:6379@16379 myself,master - 0 1638242180000 1 connected 0-5460
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.38.0.13:6379
OK
172.38.0.13:6379> geta	# typo; no output
^C
# After the master at 172.38.0.13 is stopped, its replica takes over:
/data # redis-cli -c
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.38.0.14:6379
"b"