Docker-DTR-UCP Setup Notes

# Posting the Docker setup notes I took a while back.
Getting started
Container technology shares the host's hardware resources and operating system, and allows resources to be allocated dynamically. A container bundles an application with all of its dependencies, but shares the kernel with other containers. Containers run as isolated processes in user space on the host operating system.
Docker is a wrapper around Linux containers that provides a simple, easy-to-use interface for working with them.
It is currently the most popular Linux container solution.
Linux containers are a different virtualization technology that grew out of Linux. Simply put, a Linux container does not emulate a complete operating system; instead, it isolates processes, effectively wrapping a protective layer around an ordinary process. To a process inside the container, every resource it touches is virtualized, which isolates it from the underlying system.
Docker packages an application together with its dependencies into a single file. Running that file spawns a virtual container, and the program runs inside it as if it were running on a real physical machine. With Docker, environment issues are no longer a worry.
Overall, Docker's interface is quite simple: users can easily create and use containers and put their own applications inside them. Containers can also be versioned, copied, shared, and modified, much like ordinary code.

Advantages of Docker (copied from somewhere I no longer remember)
Compared with traditional virtualization, Docker has several advantages:
Docker starts in seconds; a virtual machine usually takes minutes to boot.
Docker needs fewer resources. It virtualizes at the operating-system level, and the container talks to the kernel directly with almost no performance overhead, so it outperforms virtualization that reaches the kernel through a hypervisor layer.
Docker is more lightweight. Its architecture shares a single kernel and can share application libraries, so the memory footprint is tiny. On the same hardware, Docker can run far more images than a hypervisor can run VMs, giving much higher system utilization.
Compared with virtual machines, Docker's isolation is weaker: Docker isolates at the process level, while a VM provides system-level isolation.
Security: Docker's security is also weaker. A container's root is the same as the host's root, so once a user inside a container escalates from an ordinary user to root, they effectively have root on the host and can perform unrestricted operations. In a VM, the guest's root and the host's root are separate, and VMs use hardware isolation such as Intel's VT-d and VT-x ring -1 technology, which prevents VMs from breaking out and interfering with each other. Containers still have no hardware isolation of any kind, which leaves them more exposed to attack.
Manageability: Docker's centralized management tooling is not yet mature. Every major virtualization technology has mature management tools; VMware vCenter, for example, provides complete VM management capabilities.
High availability and recoverability: Docker supports high availability for workloads by redeploying them quickly. Virtualization offers mature, production-proven mechanisms such as load balancing, high availability, fault tolerance, migration, and data protection; VMware can promise 99.999% VM availability to guarantee business continuity.
Fast creation and deletion: creating a VM takes minutes, while creating a Docker container takes seconds. Docker's fast iteration saves a great deal of time in development, testing, and deployment alike.
Delivery and deployment: VM images can deliver consistent environments, but image distribution doesn't scale into a system. Docker records the container build process in a Dockerfile, enabling fast distribution and fast deployment across a cluster.

Preparation before installing
1. Verify the environment
https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh
Run the verification script.
<<EOF:
On Ubuntu 18.04, some memory-related kernel options were not enabled.
Check the kernel config under /boot/balabala.-0.15
Install the ZFS filesystem:
sudo apt install zfsutils-linux
EOF
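Fetching and running the check script from the URL above might look like this (a sketch; the config path depends on your kernel and distro):

```shell
# Download Docker's kernel-config checker and run it against the running kernel
curl -fsSL https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh -o check-config.sh
chmod +x check-config.sh
./check-config.sh /boot/config-$(uname -r)
```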
2. Install
Guide (blog post):
https://blog.csdn.net/deng624796905/article/details/86493330
<<EOF:
Fixing the "source not trusted" error:
https://blog.csdn.net/chenbetter1996/article/details/80255552
Delete or rename etc/…list.bak
Official installation guide:
https://docs.docker.com/install/linux/docker-ce/ubuntu/
EOF
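The official guide linked above boils down to roughly these steps on Ubuntu (a sketch of the docs as I recall them; check the link for the current form):

```shell
# Add Docker's official APT repository and install Docker CE
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce
# Sanity check
sudo docker run hello-world
```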

## Start
Three main parts: image, container, registry.
Containerization makes CI/CD seamless. For example:
the application has no system dependencies
updates can be pushed to any part of a distributed application
resource density can be optimized
With Docker, scaling an application means launching a new executable, not running a heavy VM host.
CI/CD
CI/CD stands for "continuous integration / continuous delivery". It streamlines software development through collaboration and automation, and is a key component of implementing DevOps.

Commands
docker info
docker image ls
docker container ls
docker build --tag friendlyhello:v0.0.1 .
sudo service docker restart
docker run -p 4000:80 friendlyhello:v0.0.1
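The friendlyhello image comes from Docker's get-started tutorial; if you don't have it, a minimal stand-in (my own sketch, not the tutorial's exact app) can be built and run with the commands above:

```shell
mkdir -p friendlyhello && cd friendlyhello
# A tiny Flask app listening on port 80
cat > app.py <<'PY'
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

app.run(host="0.0.0.0", port=80)
PY
cat > Dockerfile <<'DF'
FROM python:3.7-slim
WORKDIR /app
COPY app.py .
RUN pip install flask
EXPOSE 80
CMD ["python", "app.py"]
DF
docker build --tag friendlyhello:v0.0.1 .
docker run -d -p 4000:80 friendlyhello:v0.0.1
curl http://localhost:4000
```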

Controlling and removing
1. Stop all containers first, so that their images can be deleted:
docker stop $(docker ps -a -q)
To also delete all containers, add one more command:
docker rm $(docker ps -a -q)
2. See which images are present:
docker images
3. Delete an image, specified by its ID:
docker rmi <image-id>
To delete untagged images, i.e. those whose ID shows as <none>, use:
docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
To delete all images:
docker rmi $(docker images -q)
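Most of the cleanup above can also be done with one built-in command (not in the original notes, but standard in current Docker CLI):

```shell
# Remove stopped containers, dangling images, and unused networks (asks for confirmation)
docker system prune
# Add -a to also remove all unused images, not just dangling ones
docker system prune -a
```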

Running on other platforms
Installing Docker Trusted Registry (DTR)
DTR is a service that runs on top of a Docker Universal Control Plane (UCP) cluster.

## Installing a Docker Universal Control Plane cluster
UCP is a service that sits between the Docker Engine and the Docker registry, providing a transparent Docker API plus clustering and management.
manager nodes
worker nodes

Manager nodes run the various global services that give UCP high availability.
UCP component Description
k8s_calico-kube-controllers A cluster-scoped Kubernetes controller used to coordinate Calico networking. Runs on one manager node only.
k8s_calico-node The Calico node agent, which coordinates networking fabric according to the cluster-wide Calico configuration. Part of the calico-node daemonset. Runs on all nodes. Configure the container network interface (CNI) plugin by using the --cni-installer-url flag. If this flag isn’t set, UCP uses Calico as the default CNI plugin.
k8s_install-cni_calico-node A container that’s responsible for installing the Calico CNI plugin binaries and configuration on each host. Part of the calico-node daemonset. Runs on all nodes.
k8s_POD_calico-node Pause container for the calico-node pod.
k8s_POD_calico-kube-controllers Pause container for the calico-kube-controllers pod.
k8s_POD_compose Pause container for the compose pod.
k8s_POD_kube-dns Pause container for the kube-dns pod.
k8s_ucp-dnsmasq-nanny A dnsmasq instance used in the Kubernetes DNS Service. Part of the kube-dns deployment. Runs on one manager node only.
k8s_ucp-kube-compose A custom Kubernetes resource component that’s responsible for translating Compose files into Kubernetes constructs. Part of the compose deployment. Runs on one manager node only.
k8s_ucp-kube-dns The main Kubernetes DNS Service, used by pods to resolve service names. Part of the kube-dns deployment. Runs on one manager node only. Provides service discovery for Kubernetes services and pods. A set of three containers deployed via Kubernetes as a single pod.
k8s_ucp-kubedns-sidecar Health checking and metrics daemon of the Kubernetes DNS Service. Part of the kube-dns deployment. Runs on one manager node only.
ucp-agent Monitors the node and ensures the right UCP services are running.
ucp-auth-api The centralized service for identity and authentication used by UCP and DTR.
ucp-auth-store Stores authentication configurations and data for users, organizations, and teams.
ucp-auth-worker Performs scheduled LDAP synchronizations and cleans authentication and authorization data.
ucp-client-root-ca A certificate authority to sign client bundles.
ucp-cluster-root-ca A certificate authority used for TLS communication between UCP components.
ucp-controller The UCP web server.
ucp-dsinfo Docker system information collection script to assist with troubleshooting.
ucp-interlock Monitors swarm workloads configured to use Layer 7 routing. Only runs when you enable Layer 7 routing.
ucp-interlock-proxy A service that provides load balancing and proxying for swarm workloads. Only runs when you enable Layer 7 routing.
ucp-kube-apiserver A master component that serves the Kubernetes API. It persists its state in etcd directly, and all other components communicate with API server directly.
ucp-kube-controller-manager A master component that manages the desired state of controllers and other Kubernetes objects. It monitors the API server and performs background tasks when needed.
ucp-kubelet The Kubernetes node agent running on every node, which is responsible for running Kubernetes pods, reporting the health of the node, and monitoring resource usage.
ucp-kube-proxy The networking proxy running on every node, which enables pods to contact Kubernetes services and other pods, via cluster IP addresses.
ucp-kube-scheduler A master component that handles scheduling of pods. It communicates with the API server only to obtain workloads that need to be scheduled.
ucp-kv Used to store the UCP configurations. Don’t use it in your applications, since it’s for internal use only. Also used by Kubernetes components.
ucp-metrics Used to collect and process metrics for a node, like the disk space available.
ucp-proxy A TLS proxy. It allows secure access to the local Docker Engine to UCP components.
ucp-reconcile When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy.
ucp-swarm-manager Used to provide backwards-compatibility with Docker Swarm.
Worker node
UCP component Description
k8s_calico-node The Calico node agent, which coordinates networking fabric according to the cluster-wide Calico configuration. Part of the calico-node daemonset. Runs on all nodes.
k8s_install-cni_calico-node A container that’s responsible for installing the Calico CNI plugin binaries and configuration on each host. Part of the calico-node daemonset. Runs on all nodes.
k8s_POD_calico-node Pause container for the Calico-node pod. By default, this container is hidden, but you can see it by running docker ps -a.
ucp-agent Monitors the node and ensures the right UCP services are running
ucp-interlock-extension Helper service that reconfigures the ucp-interlock-proxy service based on the swarm workloads that are running.
ucp-interlock-proxy A service that provides load balancing and proxying for swarm workloads. Only runs when you enable Layer 7 routing.
ucp-dsinfo Docker system information collection script to assist with troubleshooting
ucp-kubelet The kubernetes node agent running on every node, which is responsible for running Kubernetes pods, reporting the health of the node, and monitoring resource usage
ucp-kube-proxy The networking proxy running on every node, which enables pods to contact Kubernetes services and other pods, via cluster IP addresses
ucp-reconcile When ucp-agent detects that the node is not running the right UCP components, it starts the ucp-reconcile container to converge the node to its desired state. It is expected for the ucp-reconcile container to remain in an exited state when the node is healthy.
ucp-proxy A TLS proxy. It allows secure access to the local Docker Engine to UCP components

# --pod-cidr 172.168.2.0/16 to prevent IP conflicts
https://192.168.1.156 is the UCP web UI
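For reference, a UCP install using that CIDR and host address might look like this (a sketch; the image tag is an assumption, and flag names are from the UCP 3.x-era CLI):

```shell
# Install UCP on the first manager node
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.5 install \
  --host-address 192.168.1.156 \
  --pod-cidr 172.168.2.0/16 \
  --interactive
```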

## Installing DTR

Note that the UCP cluster-management platform already uses port 443 (the default) on one manager node.
Installing DTR on that same node then makes the node unavailable.
<<EOF:
Workarounds
A machine can only serve as one node.
1. Use a different node.
2. Make one of DTR/UCP stop using port 443.
Here I changed the UCP web UI/API port to 8088.
EOF
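Concretely, moving UCP off 443 and then installing DTR might look like this (a sketch; flag names are from the UCP/DTR 3.x-era CLI reference, the image tags are assumptions, and <node-name> is a placeholder for the target node):

```shell
# Install UCP with its web UI/API on 8088 instead of 443
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.1.5 install \
  --host-address 192.168.1.156 \
  --controller-port 8088 \
  --interactive

# DTR can then take 443, pointing at the relocated UCP endpoint
docker run -it --rm docker/dtr install \
  --ucp-url https://192.168.1.156:8088 \
  --ucp-node <node-name> \
  --ucp-insecure-tls
```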

## After installation, the custom domain doesn't map to the IP
The hosts file needs updating.
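A hosts entry along these lines makes the DTR domain resolve locally (dtr.example.local is a made-up name; substitute your own):

```shell
# Map the DTR domain to its IP address
echo "192.168.1.199 dtr.example.local" | sudo tee -a /etc/hosts
# Verify the mapping
getent hosts dtr.example.local
```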
Then this error appeared:
Failed to establish openid authentication
https://success.docker.com/article/dtr-certificates-expired-error-failed-to-establish-openid-authentication

Done!

## Logging in, docker pull, docker push
docker login: authenticate against DTR, e.g. 192.168.1.1xx
docker pull: pull an image from DTR
docker push: push an image to DTR

How do I push to the local registry?
Local image: friendlyhello
First create an empty repository in the registry,
then run:
docker tag friendlyhello 192.168.1.199/ruiboma/yichen
docker push 192.168.1.199/ruiboma/yichen
OK!
https://docs.docker.com/ee/dtr/user/manage-images/pull-and-push-images/
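End to end, consuming that image from another machine would look roughly like this (assuming the ruiboma/yichen repository was created as above):

```shell
docker login 192.168.1.199                      # authenticate against DTR first
docker pull 192.168.1.199/ruiboma/yichen        # fetch the pushed image
docker run -d -p 4000:80 192.168.1.199/ruiboma/yichen
```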

Installing and deploying a MySQL / Redis cluster
After the machine rebooted, the DTR service seemed to be gone.
Reconfiguring the address made login work again:
https://success.docker.com/article/dtr-certificates-expired-error-failed-to-establish-openid-authentication

docker run -d -e MYSQL_ROOT_PASSWORD=root123456 mysql:5.7 --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
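The one-liner above works, but has no container name, port mapping, or data volume. A fuller sketch (the name and host path are my own examples):

```shell
docker run -d --name mysql57 \
  -p 3306:3306 \
  -v /data/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=root123456 \
  mysql:5.7 \
  --character-set-server=utf8mb4 \
  --collation-server=utf8mb4_unicode_ci
# Check that the server came up
docker logs mysql57
```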

Quickly start a MySQL 8.0 on a local IP
<<EOF:
https://www.cnblogs.com/xyabk/p/10882913.html
New features in MySQL 8.0 (worth a skim; I always use 5.7 and should keep up with the times)
EOF
Start a Redis
docker run -d --name redis -p 6379:6379 redis:latest --requirepass "123456" ### may leave the container unusable
sudo docker run -d --name redisfix -p 6379:6379 --restart=always redis ## the working invocation
To delete an unwanted container, use docker rm <container-id>
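To confirm the container is actually serving, a quick check (container name taken from the working invocation above):

```shell
docker exec -it redisfix redis-cli ping           # a healthy instance replies PONG
docker exec -it redisfix redis-cli set greeting hi
docker exec -it redisfix redis-cli get greeting
```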

Building an image without a Dockerfile
Enter a container on the local machine, configure the environment by hand, then commit and save it as an image.
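This interactive-then-commit workflow, sketched (the image and container names are my own examples):

```shell
# Start an interactive container and configure it by hand
docker run -it --name buildbox ubuntu:18.04 bash
#   (inside: apt-get update, install packages, edit configs, then exit)
# Snapshot the container's filesystem as a new image
docker commit -m "hand-configured env" buildbox myenv:v1
# Use the new image
docker run -it myenv:v1 bash
```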
That's it for now. In practice, the company's MySQL and Redis clusters won't be moving into Docker any time soon, but the time from development to release really is much shorter, especially in the face of operations' ever-changing requirements. No more weariness.
