Installing and Running Docker on CentOS in VirtualBox on a Mac: installing Gogs and Drone via Docker for CI/CD, running a Dockerfile, and running with Docker Compose


Preface

Note: this article draws on material from a number of sites; the Docker content follows 尚硅谷's Bilibili tutorial by 雷神.

This article collects Docker learning notes. The steps have basically all been run through, except for the final part, where `docker pull` could not download the images.


1. Installing CentOS in VirtualBox on a Mac

CentOS image download from the Aliyun mirror:
https://mirrors.aliyun.com/centos/7.9.2009/isos/x86_64/?spm=a2c6h.25603864.0.0.161df5adzxHXWW
Installation tutorial:
https://www.zhihu.com/tardis/bd/art/694736849?source_id=1001

2. Replacing the CentOS repo source when the installed CentOS VM reports yum errors

https://blog.csdn.net/2401_83331026/article/details/140180985

Back up the default yum repo:

sudo cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

Download Aliyun's CentOS-Base.repo file:

sudo wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Clear the cache:

yum clean all

Rebuild the cache:

yum makecache

Update the system (this step takes a while; if nothing else is wrong, it is best skipped):

sudo yum update

A bash script that performs the steps above:

#!/bin/bash
# Back up the default repo
sudo cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Download Aliyun's CentOS-Base.repo file
sudo wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# Clear the cache and rebuild it
sudo yum clean all
# Remount the install media in case the mount was lost
sudo mount /dev/sr0 /mnt/
sudo yum makecache

# Update the system (optional)
#sudo yum update

3. Installing Docker on CentOS

https://blog.csdn.net/CSDN_Admin0/article/details/135481302

4. Installing Gogs with Docker (a self-hosted Git service)

https://www.jianshu.com/p/a52ebe47b805

5. Installing Drone with Docker (CI for code)

https://www.jianshu.com/p/64e5ab9f8b92
The commands I used for my own installation:

# Install PostgreSQL
docker run -d --name postgres -e TZ="Asia/Shanghai" -e POSTGRES_PASSWORD="123456" \
    -v /var/lib/postgresql/data:/var/lib/postgresql/data:rw \
    -v /etc/localtime:/etc/localtime:ro \
    --restart=always \
    --privileged \
    postgres
# The PostgreSQL container's internal address, mentioned in the tutorial above:
#     172.17.0.3
# Run drone-server. In --env=DRONE_USER_CREATE=username:git,admin:true, git is the admin
# username set when installing Gogs; the Trusted option only appears once this is set.
# DRONE_GOGS_SERVER is the address of the Gogs web UI.
docker run \
  -v /var/lib/drone:/data \
  -v /etc/localtime:/etc/localtime \
  --env=DRONE_DEBUG=true \
  --env=DRONE_LOGS_TRACE=true \
  --env=DRONE_LOGS_DEBUG=true \
  --env=DRONE_LOGS_PRETTY=true \
  --env=DRONE_AGENTS_ENABLED=true \
  --env=DRONE_GOGS_SERVER=http://192.168.2.222:8888 \
  --env=DRONE_SERVER_HOST=192.168.2.222:10082 \
  --env=DRONE_RPC_SECRET=123456 \
  --env=DRONE_SERVER_PROTO=http \
  --env=DRONE_USER_CREATE=username:git,admin:true \
  --publish=10082:80 \
  --restart=always \
  --detach=true \
  --name=drone-server \
  drone/drone:2
  
## Run the Drone runner. DRONE_RPC_SECRET must match the server's DRONE_RPC_SECRET,
## and DRONE_RPC_HOST must match the server's DRONE_SERVER_HOST.
docker run --detach \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --env=DRONE_RPC_PROTO=http \
  --env=DRONE_RPC_HOST=192.168.2.222:10082 \
  --env=DRONE_RPC_SECRET=123456 \
  --env=DRONE_RUNNER_CAPACITY=2 \
  --env=DRONE_RUNNER_NAME=drone-runner \
  --publish=3000:3000 \
  --restart=always \
  --name=drone-runner \
  drone/drone-runner-docker:1
   
## Run MySQL, since the application container needs a database
docker run --name mysql \
  -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=root \
  -d mysql:5.7
## After it starts, remember to create the mall database

## Run the project:
docker run -p 8088:8088 \
  --name ${app_name} \
  -e 'spring.profiles.active'=${profile_active} \
  -e TZ="Asia/Shanghai" \
  -v /etc/localtime:/etc/localtime \
  -v /mydata/app/${app_name}/logs:/var/logs \
  -d ${group_name}/${app_name}:${app_version}

The Swagger UI address:
http://192.168.2.222:8088/swagger-ui/#/PmsBrandController
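The `${app_name}`-style placeholders above are filled in by the CI pipeline. As a minimal sketch, with hypothetical values that are not taken from the original pipeline, you can resolve and preview the command before actually running it:

```shell
#!/bin/sh
# Hypothetical values for the placeholders used in the run command above
app_name=mall-tiny-drone
app_version=1.0.0
group_name=mall
profile_active=prod

# Print the fully resolved command as a dry run; drop the leading `echo`
# once the output looks right.
echo docker run -p 8088:8088 \
  --name "${app_name}" \
  -e "spring.profiles.active=${profile_active}" \
  -d "${group_name}/${app_name}:${app_version}"
# prints: docker run -p 8088:8088 --name mall-tiny-drone -e spring.profiles.active=prod -d mall/mall-tiny-drone:1.0.0
```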

The runnable code:
Link: https://pan.baidu.com/s/19VrxvEFUGXHgjNX_QsijoA?pwd=g8df  extraction code: g8df

The code is based on https://mp.weixin.qq.com/s/c7fs06dMdVr1Sxj1A1FGEA, with one optimization for the slow `mvn clean package` step: the single packaging command `- mvn clean package` is replaced with the two lines `- export MAVEN_OPTS="-Dmaven.repo.local=/root/.m2/repository"` and `- mvn -s settings.xml clean package`, and a settings.xml configured with the Aliyun mirror is added.
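As a sketch of where those two lines sit in a Drone pipeline (the builder image tag and the cache volume path below are assumptions, not taken from the original project):

```yaml
kind: pipeline
type: docker
name: build

steps:
  - name: package
    image: maven:3.8-openjdk-8          # assumed builder image
    volumes:
      - name: maven-cache
        path: /root/.m2                 # keep the local repository between builds
    commands:
      - export MAVEN_OPTS="-Dmaven.repo.local=/root/.m2/repository"
      - mvn -s settings.xml clean package   # settings.xml points at the Aliyun mirror

volumes:
  - name: maven-cache
    host:
      path: /mydata/maven/.m2           # assumed host path for the cache
```

Note that host volumes only work when the repository is marked Trusted in Drone, which is why the `DRONE_USER_CREATE=...,admin:true` setting above matters.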

Some of the containers above (and the Docker service itself) are not set to start on boot; the following commands fix that:

systemctl enable docker.service
docker update --restart=always gogs 
docker update --restart=always mysql
docker update --restart=always mall-tiny-drone

6. Setting Docker to start on boot

Check whether Docker starts on boot:

systemctl list-unit-files | grep docker

Enable starting Docker on boot:

systemctl enable docker.service

Disable starting Docker on boot:

systemctl disable docker.service

Change an existing container's restart policy:

docker update --restart=always <container-name>

Set auto-restart when starting a container (docker run):

docker run --restart=always <imageName>

Set auto-restart when starting a container (docker-compose):

version: '3'
services:
  app:
    restart: always
    image: app-server:V1.0.0

7. Stopping/removing all containers

# Stop all containers:
docker stop $(docker ps -aq)
# Remove all containers
docker rm $(docker ps -aq)

Remove all images:
docker image rm $(docker image ls -q)

8. Bind mounts and named volumes

docker run -d --name nginx -v /app/nginx/html:/usr/share/nginx/html -v nginx_conf:/etc/nginx -p 81:80 --restart always 1418f4f3ff9c
In the command above:
-v /app/nginx/html:/usr/share/nginx/html is a bind mount (note the path starts with /app/): the host directory is mounted over the container path, and because it starts out empty, the files the container normally has there appear empty.
-v nginx_conf:/etc/nginx is a named volume (note nginx_conf does not start with /): when the container starts, Docker creates the volume on the host and copies the container's files into it, under /var/lib/docker/volumes/nginx_conf/_data/.
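The same two mount styles can be written in Compose form; this is a sketch of the docker run command above:

```yaml
services:
  nginx:
    image: nginx
    ports:
      - "81:80"
    restart: always
    volumes:
      - /app/nginx/html:/usr/share/nginx/html  # bind mount: host path starts with /
      - nginx_conf:/etc/nginx                  # named volume: name has no leading /
volumes:
  nginx_conf:                                  # named volumes must be declared here
```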

9. Inspecting volumes and containers

## Volume details; nginx_conf is the mounted volume, and the output shows where it lives on the host
docker volume inspect nginx_conf
## Container details (the `container` keyword is omitted here); nginx is the container name, and the output includes the container's internal IP
docker inspect nginx

10. Docker networking

## List existing networks
docker network ls
## Show a network's details, including container IPs
docker network inspect mynet
## Create a network named mynet
docker network create mynet
## Run nginx on port 81 as app1, joined to the mynet network
docker run -d --name app1 -p 81:80 --network mynet nginx
## Run nginx on port 82 as app2, joined to the mynet network
docker run -d --name app2 -p 82:80 --network mynet nginx
## Open a shell inside app1
docker exec -it app1 bash
## Access the second container; the full URL is http://app2:80, i.e. the container name acts as a hostname
curl app2

11. Redis master/replica read-write splitting with bitnami/redis

-v /app/r1/data:/bitnami/redis/data mounts the container's /bitnami/redis/data to /app/r1/data on the host; watch the file permissions, which can be opened up with chmod -R 777 /app/r1/data.
--network mynet puts both master and replica on the same network so they can reach each other by --name; for example, -e REDIS_MASTER_HOST=rd1 points the replica at the rd1 container as its master.
The environment variables passed with -e are documented on the bitnami/redis page on hub.docker.com (which may require a proxy to reach).

## Redis master
docker run -d --name rd1 \
-p 6380:6379 \
-v /app/r1/data:/bitnami/redis/data \
--network mynet \
-e REDIS_REPLICATION_MODE=master \
-e REDIS_PASSWORD=123456 \
bitnami/redis

## Redis replica
docker run -d --name rd2 \
-p 6381:6379 \
-v /app/r2/data:/bitnami/redis/data \
--network mynet \
-e REDIS_REPLICATION_MODE=slave \
-e REDIS_MASTER_HOST=rd1 \
-e REDIS_MASTER_PORT_NUMBER=6379 \
-e REDIS_MASTER_PASSWORD=123456 \
-e REDIS_PASSWORD=123456 \
bitnami/redis
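The same master/replica pair can be written as a Compose file; this is a direct translation of the two docker run commands above (Compose creates its own default network, so the explicit mynet network is no longer needed for rd2 to resolve rd1 by name):

```yaml
services:
  rd1:
    image: bitnami/redis
    ports:
      - "6380:6379"
    volumes:
      - /app/r1/data:/bitnami/redis/data
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=123456
  rd2:
    image: bitnami/redis
    ports:
      - "6381:6379"
    volumes:
      - /app/r2/data:/bitnami/redis/data
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=rd1
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=123456
      - REDIS_PASSWORD=123456
    depends_on:
      - rd1
```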

12. Deploying MySQL 8.0 with Docker

https://blog.csdn.net/m0_73450879/article/details/135715427
Below is what was run on the local VM, with the port mapped to 3307 and the root password set to root:

docker run \
--restart=always \
--name mysql8.0 \
-p 3307:3306 \
-v /app/mysql/data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-d mysql:8.0.22    

Then switch the root accounts to the mysql_native_password plugin, so that older clients can still authenticate:

ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'root';
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'root';
FLUSH PRIVILEGES;

13. Installing MySQL and WordPress with Docker

## Create a network so the mysql and wordpress containers can talk to each other
docker network create blog
## Start MySQL 8.0; this creates the wordpress database, joins the blog network, and restarts automatically
docker run \
--restart=always \
--name mysql8.0 \
-p 3307:3306 \
-v mysql_data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=root \
-e MYSQL_DATABASE=wordpress \
--network blog \
-d mysql:8.0.22

## Enter the mysql container:
docker exec -it mysql8.0 bash
# Connect to mysql and create the wordpress database (only needed if MYSQL_DATABASE was not set above)
mysql -uroot -proot
create database wordpress default character set utf8mb4 default collate utf8mb4_general_ci;

## Start WordPress on port 8081, pointing it at the mysql container by name, with the credentials, database, and the blog network
docker run -d -p 8081:80 \
-e WORDPRESS_DB_HOST=mysql8.0 \
-e WORDPRESS_DB_USER=root \
-e WORDPRESS_DB_PASSWORD=root \
-e WORDPRESS_DB_NAME=wordpress \
-v wordpress:/var/www/html/ \
--restart always --name wordpress \
--network blog \
wordpress:latest

14. Starting WordPress and MySQL with Docker Compose

Below is a working YAML file. Note the bottom of the file, where the volume names (mysql_data, wordpress) and the network name (blog) are declared before being referenced above.
The contents of compose.yaml:

name: myblog
services:
 mysql: 
  image: mysql:8.0.22
  container_name: mysql8.0
  ports:
   - "3307:3306"
  environment:
   - MYSQL_ROOT_PASSWORD=root
   - MYSQL_DATABASE=wordpress
  volumes:
   - mysql_data:/var/lib/mysql
   - /app/mysql/conf:/etc/mysql/conf.d
  restart: always
  networks:
   - blog
 wordpress:
   image: wordpress:latest
   container_name: wordpress
   ports:
    - "8081:80"
   environment:
    - WORDPRESS_DB_HOST=mysql8.0
    - WORDPRESS_DB_USER=root
    - WORDPRESS_DB_PASSWORD=root
    - WORDPRESS_DB_NAME=wordpress
   volumes:
    - wordpress:/var/www/html/
   restart: always
   networks:
    - blog
   depends_on:
    - mysql
volumes:
 mysql_data:
 wordpress: 
networks:
 blog: 

Then run docker compose -f compose.yaml up -d in the directory containing compose.yaml; -f compose.yaml can be omitted, since that is the default file name.

15. Building a Docker image with a Dockerfile and running a jar

First prepare a jar; a Spring Boot HelloWorld project is enough.
Then prepare the Dockerfile.
Here 8888 is the port inside the container, and java.jar is the jar produced by the mvn package build above.

FROM openjdk:8-jre-alpine
LABEL desc="Hello"
COPY java.jar /app/java.jar
ENTRYPOINT ["java", "-jar", "/app/java.jar"]
EXPOSE 8888

Here the Dockerfile and java.jar are in the same directory.
Build the image from the Dockerfile; -f names the Dockerfile, -t sets the image name and tag, and the trailing . makes the current directory the build context:
docker build -f Dockerfile -t app:v1.0 .
Finally, run the image:
docker run -d --name app -p 8888:8888 app:v1.0
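If you would rather build the jar inside Docker as well, a multi-stage Dockerfile can replace the manual mvn package step. This is a sketch: the Maven image tag and the standard Maven project layout are assumptions, not part of the original project.

```dockerfile
# Stage 1: build the jar in a Maven container (assumes a standard Maven layout)
FROM maven:3.8-openjdk-8 AS build
WORKDIR /build
COPY pom.xml .
RUN mvn dependency:go-offline        # cache dependencies in their own layer
COPY src ./src
RUN mvn -DskipTests package

# Stage 2: copy only the jar into the slim runtime image used above
FROM openjdk:8-jre-alpine
COPY --from=build /build/target/*.jar /app/java.jar
ENTRYPOINT ["java", "-jar", "/app/java.jar"]
EXPOSE 8888
```

It is built and run the same way: docker build -t app:v1.0 . followed by docker run -d --name app -p 8888:8888 app:v1.0.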

16. Installing middleware with compose.yaml

First adjust the kernel's memory-mapping settings (OpenSearch requires vm.max_map_count to be at least 262144):

# Disable memory paging and swapping, for performance
sudo swapoff -a

# Edit the sysctl config file
sudo vi /etc/sysctl.conf

# Add a line to define the desired value
# or change the value if the key exists,
# and then save your changes.
vm.max_map_count=262144

# Reload the kernel parameters using sysctl
sudo sysctl -p

# Verify that the change was applied by checking the value
cat /proc/sys/vm/max_map_count

Then edit compose.yaml; the IP in KAFKA_CFG_ADVERTISED_LISTENERS must be changed to your own host's IP.

name: devsoft
services:
  redis:
    image: bitnami/redis:latest
    restart: always
    container_name: redis
    environment:
      - REDIS_PASSWORD=123456
    ports:
      - '6379:6379'
    volumes:
      - redis-data:/bitnami/redis/data
      - redis-conf:/opt/bitnami/redis/mounted-etc
      - /etc/localtime:/etc/localtime:ro

  mysql:
    image: mysql:8.0.31
    restart: always
    container_name: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=123456
    ports:
      - '3306:3306'
      - '33060:33060'
    volumes:
      - mysql-conf:/etc/mysql/conf.d
      - mysql-data:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro

  rabbit:
    image: rabbitmq:3-management
    restart: always
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      - RABBITMQ_DEFAULT_USER=rabbit
      - RABBITMQ_DEFAULT_PASS=rabbit
      - RABBITMQ_DEFAULT_VHOST=dev
    volumes:
      - rabbit-data:/var/lib/rabbitmq
      - rabbit-app:/etc/rabbitmq
      - /etc/localtime:/etc/localtime:ro
  opensearch-node1:
    image: opensearchproject/opensearch:2.13.0
    container_name: opensearch-node1
    environment:
      - cluster.name=opensearch-cluster # Name the cluster
      - node.name=opensearch-node1 # Name the node that will run in this container
      - discovery.seed_hosts=opensearch-node1,opensearch-node2 # Nodes to look for when discovering the cluster
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 # Nodes eligible to serve as cluster manager
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
      - "DISABLE_INSTALL_DEMO_CONFIG=true" # Prevents execution of bundled demo script which installs demo certificates and security configurations to OpenSearch
      - "DISABLE_SECURITY_PLUGIN=true" # Disables Security plugin
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the opensearch user - set to at least 65536
        hard: 65536
    volumes:
      - opensearch-data1:/usr/share/opensearch/data # Creates volume called opensearch-data1 and mounts it to the container
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 9200:9200 # REST API
      - 9600:9600 # Performance Analyzer

  opensearch-node2:
    image: opensearchproject/opensearch:2.13.0
    container_name: opensearch-node2
    environment:
      - cluster.name=opensearch-cluster # Name the cluster
      - node.name=opensearch-node2 # Name the node that will run in this container
      - discovery.seed_hosts=opensearch-node1,opensearch-node2 # Nodes to look for when discovering the cluster
      - cluster.initial_cluster_manager_nodes=opensearch-node1,opensearch-node2 # Nodes eligible to serve as cluster manager
      - bootstrap.memory_lock=true # Disable JVM heap memory swapping
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m" # Set min and max JVM heap sizes to at least 50% of system RAM
      - "DISABLE_INSTALL_DEMO_CONFIG=true" # Prevents execution of bundled demo script which installs demo certificates and security configurations to OpenSearch
      - "DISABLE_SECURITY_PLUGIN=true" # Disables Security plugin
    ulimits:
      memlock:
        soft: -1 # Set memlock to unlimited (no soft or hard limit)
        hard: -1
      nofile:
        soft: 65536 # Maximum number of open files for the opensearch user - set to at least 65536
        hard: 65536
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - opensearch-data2:/usr/share/opensearch/data # Creates volume called opensearch-data2 and mounts it to the container

  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.13.0
    container_name: opensearch-dashboards
    ports:
      - 5601:5601 # Map host port 5601 to container port 5601
    expose:
      - "5601" # Expose port 5601 for web access to OpenSearch Dashboards
    environment:
      - 'OPENSEARCH_HOSTS=["http://opensearch-node1:9200","http://opensearch-node2:9200"]'
      - "DISABLE_SECURITY_DASHBOARDS_PLUGIN=true" # disables security dashboards plugin in OpenSearch Dashboards
    volumes:
      - /etc/localtime:/etc/localtime:ro
  zookeeper:
    image: bitnami/zookeeper:3.9
    container_name: zookeeper
    restart: always
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
      - /etc/localtime:/etc/localtime:ro
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes

  kafka:
    image: 'bitnami/kafka:3.4'
    container_name: kafka
    restart: always
    hostname: kafka
    ports:
      - '9092:9092'
      - '9094:9094'
    environment:
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://0.0.0.0:9094
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://192.168.2.223:9094
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - ALLOW_PLAINTEXT_LISTENER=yes
      - "KAFKA_HEAP_OPTS=-Xmx512m -Xms512m"
    volumes:
      - kafka-conf:/bitnami/kafka/config
      - kafka-data:/bitnami/kafka/data
      - /etc/localtime:/etc/localtime:ro
  kafka-ui:
    container_name: kafka-ui
    image: provectuslabs/kafka-ui:latest
    restart: always
    ports:
      - 8080:8080
    environment:
      DYNAMIC_CONFIG_ENABLED: true
      KAFKA_CLUSTERS_0_NAME: kafka-dev
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
    volumes:
      - kafkaui-app:/etc/kafkaui
      - /etc/localtime:/etc/localtime:ro

  nacos:
    image: nacos/nacos-server:v2.3.1
    container_name: nacos
    ports:
      - 8848:8848
      - 9848:9848
    environment:
      - PREFER_HOST_MODE=hostname
      - MODE=standalone
      - JVM_XMX=512m
      - JVM_XMS=512m
      - SPRING_DATASOURCE_PLATFORM=mysql
      - MYSQL_SERVICE_HOST=nacos-mysql
      - MYSQL_SERVICE_DB_NAME=nacos_devtest
      - MYSQL_SERVICE_PORT=3306
      - MYSQL_SERVICE_USER=nacos
      - MYSQL_SERVICE_PASSWORD=nacos
      - MYSQL_SERVICE_DB_PARAM=characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true
      - NACOS_AUTH_IDENTITY_KEY=2222
      - NACOS_AUTH_IDENTITY_VALUE=2xxx
      - NACOS_AUTH_TOKEN=SecretKey012345678901234567890123456789012345678901234567890123456789
      - NACOS_AUTH_ENABLE=true
    volumes:
      - /app/nacos/standalone-logs/:/home/nacos/logs
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      nacos-mysql:
        condition: service_healthy
  nacos-mysql:
    container_name: nacos-mysql
    build:
      context: .
      dockerfile_inline: |
        FROM mysql:8.0.31
        ADD https://raw.githubusercontent.com/alibaba/nacos/2.3.2/distribution/conf/mysql-schema.sql /docker-entrypoint-initdb.d/nacos-mysql.sql
        RUN chown -R mysql:mysql /docker-entrypoint-initdb.d/nacos-mysql.sql
        EXPOSE 3306
        CMD ["mysqld", "--character-set-server=utf8mb4", "--collation-server=utf8mb4_unicode_ci"]
    image: nacos/mysql:8.0.30
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=nacos_devtest
      - MYSQL_USER=nacos
      - MYSQL_PASSWORD=nacos
      - LANG=C.UTF-8
    volumes:
      - nacos-mysqldata:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "13306:3306"
    healthcheck:
      test: [ "CMD", "mysqladmin" ,"ping", "-h", "localhost" ]
      interval: 5s
      timeout: 10s
      retries: 10
  prometheus:
    image: prom/prometheus:v2.52.0
    container_name: prometheus
    restart: always
    ports:
      - 9090:9090
    volumes:
      - prometheus-data:/prometheus
      - prometheus-conf:/etc/prometheus
      - /etc/localtime:/etc/localtime:ro

  grafana:
    image: grafana/grafana:10.4.2
    container_name: grafana
    restart: always
    ports:
      - 3000:3000
    volumes:
      - grafana-data:/var/lib/grafana
      - /etc/localtime:/etc/localtime:ro

volumes:
  redis-data:
  redis-conf:
  mysql-conf:
  mysql-data:
  rabbit-data:
  rabbit-app:
  opensearch-data1:
  opensearch-data2:
  nacos-mysqldata:
  zookeeper_data:
  kafka-conf:
  kafka-data:
  kafkaui-app:
  prometheus-data:
  prometheus-conf:
  grafana-data:

If anything here infringes a copyright, please contact me to have it removed.
