Docker Operations

1. Docker

1.1 Accelerate Docker with a domestic registry mirror

(1) Configure the Aliyun registry mirror

sudo mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ncfqcczm.mirror.aliyuncs.com"]
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
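Before restarting Docker it is worth validating the JSON you just wrote; a minimal sketch (using a temp copy so it runs anywhere, and assuming `python3` is available for the JSON check):

```shell
# Write the same config to a temp file and check that it parses as JSON
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "registry-mirrors": ["https://ncfqcczm.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json: valid JSON"
```

A malformed daemon.json will prevent the Docker daemon from restarting, so this cheap check can save a debugging session.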

(2) Check that the registry mirror is working

docker info

2.1 (Active check): the mirror is configured successfully if the following appears in the output:

Registry Mirrors:
  https://ncfqcczm.mirror.aliyuncs.com/

2.2 (Passive observation):

- If image downloads are still noticeably slow, the mirror is likely not in effect.
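The active check can also be scripted; a sketch with the `docker info` output simulated so the snippet is self-contained (on a real host, pipe `docker info` in directly):

```shell
# Simulated excerpt of `docker info` output; in practice: docker info | grep -A1 'Registry Mirrors'
info_output='Registry Mirrors:
  https://ncfqcczm.mirror.aliyuncs.com/'
status="mirror missing"
printf '%s\n' "$info_output" | grep -q 'mirror.aliyuncs.com' && status="mirror active"
echo "$status"
```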

1.2 Search for image names and tags at

https://hub.docker.com/search?q=&type=image

2. Docker Images

2.1 RabbitMQ

  1. Start the container
docker run --hostname my-rabbit --name rabbitmq -p 15671:15672 -p 5671:5672 --restart always -d rabbitmq
  2. Enter the container
docker exec -it <container id> /bin/bash
  3. Enable the management plugin
rabbitmq-plugins enable rabbitmq_management

2.2 Redis

1. Start Redis

docker run -p 6378:6379 --name redis --restart always -d redis

Parameter explanations:

-p: port mapping. The left side is the host (externally accessed) port; the right side is the port the program actually listens on inside the container.

--name: assigns the container a name that later docker commands can use, as distinct from the image name that follows -d.

--restart: controls whether the container starts when the Docker daemon starts or restarts; always means it always starts along with Docker.

-d: run detached, i.e. in the background.
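The left/right meaning of `-p` can be illustrated by splitting the mapping string itself; a small sketch using the value from the Redis command above:

```shell
mapping="6378:6379"              # the -p value used above
host_port=${mapping%%:*}         # left of the colon: the externally accessed host port
container_port=${mapping##*:}    # right of the colon: the port the program listens on inside the container
echo "host ${host_port} -> container ${container_port}"
```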

2.2.1 Install the vim package

# run inside the container (Debian-based image)
apt-get update
apt-get install vim

2.3 MySQL

Modified start command:

docker run --name mysql -p 3305:3306 -e MYSQL_ROOT_PASSWORD=root  -v /home/config/mysql:/docker --restart always -d mysql:8 --lower_case_table_names=1

Explanation:
-v: file mapping; the directory before the colon (on the host) is synchronized with the directory after the colon (inside the container).

Fixing the only_full_group_by problem:

  1. If the container was started without the volume mapping, stop and remove it first, then recreate it with the command above.
  2. Copy the configuration file out of the container to the mapped location:
# Enter the container
docker exec -it mysql bash
# Copy the MySQL configuration file into the mapped /docker directory
cp /etc/mysql/my.cnf /docker/my.cnf
  3. Edit the copy in the mapped host directory and append this line at the end:
sql_mode="STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION"
  4. Copy the file back over the one inside the container:
cp /docker/my.cnf /etc/mysql/my.cnf
  5. Restart the container:
docker restart <container name>
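Step 3's edit can be sketched against a simulated copy of my.cnf (a temp file here, rather than the real /home/config/mysql path):

```shell
# Simulate the mapped copy of my.cnf and append the sql_mode line from step 3
cnf=$(mktemp)
printf '[mysqld]\n' > "$cnf"
cat >> "$cnf" <<'EOF'
sql_mode="STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_ENGINE_SUBSTITUTION"
EOF
grep -c '^sql_mode' "$cnf"
```

Appending with `>>` rather than editing by hand avoids accidentally clobbering the rest of the file.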

2.4 Nexus

docker run --name nexus -p 8088:8081 --restart always -d sonatype/nexus3 

2.4.1 Get the initial admin password

# Enter the container
docker exec -it <container id/name> /bin/bash
# Print the generated password
cat /nexus-data/admin.password
  • The default login username is admin
  • The default login password is the value printed by cat above

2.4.2 Configure a proxy repository

# Check that Nexus started correctly
docker logs -f <container id/name>
  1. Open http://<ip>:<mapped port> in a browser.

  2. In the web UI, create a maven2 (proxy) repository.


  3. Set the remote storage to the Aliyun mirror: https://maven.aliyun.com/nexus/content/groups/public
  4. Set the priority: open the public repository group maven-public.
  5. Move the Aliyun proxy above Maven Central so it takes priority.


2.4.3 Configure the Maven settings (settings.xml)

  • Note: http://<ip>:<port>/repository/ is the fixed URL prefix that locates a repository.
  • maven-public: the public group, which contains the repositories configured above.
  • hosted: stores locally produced jars.
  • proxy: proxies a remote repository, such as the Aliyun mirror.
  • group: a repository group produces no data of its own; it is a collection of hosted and proxy repositories that makes them easier to manage.
  • Upload actions: components are uploaded either manually or automatically. Jars managed by Maven Central arrive automatically; third-party jars and jars you build yourself are uploaded manually.
  • Configuration actions: configuring which jars to download, which jars to upload manually, and which credentials to grant.
  • Configuration scope: the local Maven configuration (settings.xml) and per-project configuration (pom.xml).
  • Relationships: every id ultimately refers to credentials, i.e. it is matched against an id in the <servers> section.
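The fixed URL prefix from the first bullet can be assembled from the values used throughout this guide (the example host from settings.xml below and the host port mapped to Nexus above):

```shell
ip="192.168.113.66"   # example host used in settings.xml below
port="8088"           # host port mapped to Nexus in the run command above
repo="maven-public"
url="http://${ip}:${port}/repository/${repo}/"
echo "$url"
```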
<!-- Local repository path -->
<localRepository>E:\repostiry\maven-ku</localRepository>

<!-- Mirror configuration: the url points at the configured repository group.
	All mirror traffic goes through that group, for third-party jars as well as Central.
	When the group is accessed, it polls all of its member repositories to resolve the artifact.
	With multiple mirrors, Maven matches them top to bottom: requests for Central go
	directly to the Aliyun mirror; everything else falls through to our repository group.
-->
<mirrors>
      
    <mirror>
        <id>nexus-aliyun</id>
        <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
        <mirrorOf>central</mirrorOf>
    </mirror>
    
    <mirror>
        <id>nexus-public</id>
        <url>http://192.168.113.66:8088/repository/xcong-group/</url>
        <mirrorOf>*</mirrorOf>
    </mirror>
    
</mirrors>

<!-- Server credentials (username/password):
	xcong-release : our own hosted repository, mainly for release jars produced by our projects
	xcong-snapshot : mainly stores the snapshot jars we produce
-->

<servers>
    
	<server>
      <id>xcong-release</id>
      <username>admin</username>
      <password>admin</password>
    </server>
    
    <server>
      <id>xcong-snapshot</id>
      <username>admin</username>
      <password>admin</password>
    </server>
    
    <server>
        <id>nexus-public</id>
        <username>admin</username>
     	<password>admin</password>
    </server>
    
</servers>

2.4.4 Configure pom.xml

pom.xml entries correspond to the Maven settings (settings.xml) above:

<!-- Distribution (deployment) configuration -->
<distributionManagement>

	<repository>
         <!-- the repository id must match a server id in <servers> -->
    	<id>xcong-release</id>
        <!-- the url of the repository we deploy releases to -->
        <url>http://192.168.113.66:8088/repository/xcong-release/</url>
    </repository>

    <!-- Maven deploys -SNAPSHOT versions to <snapshotRepository>, not a second <repository> -->
    <snapshotRepository>
         <!-- the repository id must match a server id in <servers> -->
    	<id>xcong-snapshot</id>
        <!-- the url of the repository we deploy snapshots to -->
        <url>http://192.168.113.66:8088/repository/xcong-snapshot/</url>
    </snapshotRepository>
</distributionManagement>

2.4.5 Deploying a project and manual upload

# With the configuration above in pom.xml, the project can be deployed to the configured repository
mvn deploy

Manual upload:

  1. Open the Nexus management page.
  2. Choose the target repository and upload the local jar.
  3. Fill in the relevant artifact information.


2.5 Running a jar with Docker

  1. Create a directory on the host to store the jar files:
/home/server/deploy/jar
  2. Pull a JDK image and start the container:
# Pull the image
docker pull kdvolder/jdk8
# Start the container
docker run -d --restart=always -v /home/server/deploy/jar:/jar -v /home/server/logs/demo:/mnt/logs/demo -p 11093:11093 --name demo kdvolder/jdk8 /usr/bin/java -jar -Duser.timezone=GMT+08 /jar/demo-2.0.0-SNAPSHOT.jar

Parameter explanations:
-d: start in the background
--restart: whether the container starts along with the Docker daemon
-v: path mapping
-p: port mapping
--name: the container's name
/usr/bin/java -jar: starts the jar
-Duser.timezone: sets the time zone
/jar/demo-2.0.0-SNAPSHOT.jar: the jar's location inside the container
/jar is the container directory mapped from /home/server/deploy/jar
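A quick way to confirm the jar came up is to scan its log output; a sketch with the log line simulated so it is self-contained (in practice, read `docker logs demo` or tail the mounted /home/server/logs/demo directory; the "Started" line is a hypothetical Spring Boot startup message):

```shell
log_line='Started DemoApplication in 4.2 seconds'   # simulated; read from docker logs in practice
case "$log_line" in
  *Started*) status="app up" ;;
  *)         status="app not up" ;;
esac
echo "$status"
```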

2.6 Nacos

2.6.1 Determine the version

The version used here is 2.0.3 (for versions below 2.0, refer to other guides).

2.6.2 Pull the image (the latest version by default)

docker pull nacos/nacos-server

2.6.3 Write the Nacos configuration to a local directory before starting Nacos

# Example: create a directory under /home and create application.properties in it
mkdir /home/nacos
cd /home/nacos
vim application.properties

Then write the configuration below into application.properties.
Notes on possible database connection errors:
1. The configuration here already raises the connection timeouts.
2. You can also connect with Navicat and query a table in the nacos database to wake the database up.

#
# Copyright 1999-2021 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#*************** Spring Boot Related Configurations ***************#
### Default web context path:
server.servlet.contextPath=/nacos
### Default web server port:
server.port=8848

#*************** Network Related Configurations ***************#
### If prefer hostname over ip for Nacos server addresses in cluster.conf:
# nacos.inetutils.prefer-hostname-over-ip=false

### Specify local server's IP:
# nacos.inetutils.ip-address=


#*************** Config Module Related Configurations ***************#
### If use MySQL as datasource:
spring.datasource.platform=mysql

### Count of DB:
db.num=1

### Connect URL of DB:
db.url.0=jdbc:mysql://192.168.113.66:3306/nacos?characterEncoding=utf8&connectTimeout=10000&socketTimeout=30000&autoReconnect=true&useUnicode=true&useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC
db.user.0=root
db.password.0=root

### Connection pool configuration: hikariCP
db.pool.config.connectionTimeout=30000
db.pool.config.validationTimeout=10000
db.pool.config.maximumPoolSize=20
db.pool.config.minimumIdle=2

#*************** Naming Module Related Configurations ***************#
### Data dispatch task execution period in milliseconds: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.delayMs
# nacos.naming.distro.taskDispatchPeriod=200

### Data count of batch sync task: Will removed on v2.1.X. Deprecated
# nacos.naming.distro.batchSyncKeyCount=1000

### Retry delay in milliseconds if sync task failed: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.retryDelayMs
# nacos.naming.distro.syncRetryDelay=5000

### If enable data warmup. If set to false, the server would accept request without local data preparation:
# nacos.naming.data.warmup=true

### If enable the instance auto expiration, kind like of health check of instance:
# nacos.naming.expireInstance=true

### will be removed and replaced by `nacos.naming.clean` properties
nacos.naming.empty-service.auto-clean=true
nacos.naming.empty-service.clean.initial-delay-ms=50000
nacos.naming.empty-service.clean.period-time-ms=30000

### Add in 2.0.0
### The interval to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.interval=60000

### The expired time to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.expired-time=60000

### The interval to clean expired metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.interval=5000

### The expired time to clean metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.expired-time=60000

### The delay time before push task to execute from service changed, unit: milliseconds.
# nacos.naming.push.pushTaskDelay=500

### The timeout for push task execute, unit: milliseconds.
# nacos.naming.push.pushTaskTimeout=5000

### The delay time for retrying failed push task, unit: milliseconds.
# nacos.naming.push.pushTaskRetryDelay=1000

### Since 2.0.3
### The expired time for inactive client, unit: milliseconds.
# nacos.naming.client.expired.time=180000

#*************** CMDB Module Related Configurations ***************#
### The interval to dump external CMDB in seconds:
# nacos.cmdb.dumpTaskInterval=3600

### The interval of polling data change event in seconds:
# nacos.cmdb.eventTaskInterval=10

### The interval of loading labels in seconds:
# nacos.cmdb.labelTaskInterval=300

### If turn on data loading task:
# nacos.cmdb.loadDataAtStart=false


#*************** Metrics Related Configurations ***************#
### Metrics for prometheus
#management.endpoints.web.exposure.include=*

### Metrics for elastic search
management.metrics.export.elastic.enabled=false
#management.metrics.export.elastic.host=http://localhost:9200

### Metrics for influx
management.metrics.export.influx.enabled=false
#management.metrics.export.influx.db=springboot
#management.metrics.export.influx.uri=http://localhost:8086
#management.metrics.export.influx.auto-create-db=true
#management.metrics.export.influx.consistency=one
#management.metrics.export.influx.compressed=true

#*************** Access Log Related Configurations ***************#
### If turn on the access log:
server.tomcat.accesslog.enabled=true

### The access log pattern:
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i


### If Tomcat reports an error, change this setting (empty by default)
### The directory of access log:
server.tomcat.basedir=./

#*************** Access Control Related Configurations ***************#
### If enable spring security, this option is deprecated in 1.2.0:
#spring.security.enabled=false

### The ignore urls of auth, is deprecated in 1.2.0:
nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**

### The auth system to use, currently only 'nacos' and 'ldap' is supported:
nacos.core.auth.system.type=nacos

### If turn on auth system:
nacos.core.auth.enabled=false

### worked when nacos.core.auth.system.type=ldap,{0} is Placeholder,replace login username
# nacos.core.auth.ldap.url=ldap://localhost:389
# nacos.core.auth.ldap.userdn=cn={0},ou=user,dc=company,dc=com

### The token expiration in seconds:
nacos.core.auth.default.token.expire.seconds=18000

### The default token:
nacos.core.auth.default.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789

### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay.
nacos.core.auth.caching.enabled=true

### Since 1.4.1, Turn on/off white auth for user-agent: nacos-server, only for upgrade from old version.
nacos.core.auth.enable.userAgentAuthWhite=false

### Since 1.4.1, worked when nacos.core.auth.enabled=true and nacos.core.auth.enable.userAgentAuthWhite=false.
### The two properties is the white list for auth and used by identity the request from other server.
nacos.core.auth.server.identity.key=serverIdentity
nacos.core.auth.server.identity.value=security

#*************** Istio Related Configurations ***************#
### If turn on the MCP server:
nacos.istio.mcp.server.enabled=false

#*************** Core Related Configurations ***************#

### set the WorkerID manually
# nacos.core.snowflake.worker-id=

### Member-MetaData
# nacos.core.member.meta.site=
# nacos.core.member.meta.adweight=
# nacos.core.member.meta.weight=

### MemberLookup
### Addressing pattern category, If set, the priority is highest
# nacos.core.member.lookup.type=[file,address-server]
## Set the cluster list with a configuration file or command-line argument
# nacos.member.list=192.168.16.101:8847?raft_port=8807,192.168.16.101?raft_port=8808,192.168.16.101:8849?raft_port=8809
## for AddressServerMemberLookup
# Maximum number of retries to query the address server upon initialization
# nacos.core.address-server.retry=5
## Server domain name address of [address-server] mode
# address.server.domain=jmenv.tbsite.net
## Server port of [address-server] mode
# address.server.port=8080
## Request address of [address-server] mode
# address.server.url=/nacos/serverlist

#*************** JRaft Related Configurations ***************#

### Sets the Raft cluster election timeout, default value is 5 second
# nacos.core.protocol.raft.data.election_timeout_ms=5000
### Sets the amount of time the Raft snapshot will execute periodically, default is 30 minute
# nacos.core.protocol.raft.data.snapshot_interval_secs=30
### raft internal worker threads
# nacos.core.protocol.raft.data.core_thread_num=8
### Number of threads required for raft business request processing
# nacos.core.protocol.raft.data.cli_service_thread_num=4
### raft linear read strategy. Safe linear reads are used by default, that is, the Leader tenure is confirmed by heartbeat
# nacos.core.protocol.raft.data.read_index_type=ReadOnlySafe
### rpc request timeout, default 5 seconds
# nacos.core.protocol.raft.data.rpc_request_timeout_ms=5000

#*************** Distro Related Configurations ***************#

### Distro data sync delay time, when sync task delayed, task will be merged for same data key. Default 1 second.
# nacos.core.protocol.distro.data.sync.delayMs=1000

### Distro data sync timeout for one sync data, default 3 seconds.
# nacos.core.protocol.distro.data.sync.timeoutMs=3000

### Distro data sync retry delay time when sync data failed or timeout, same behavior with delayMs, default 3 seconds.
# nacos.core.protocol.distro.data.sync.retryDelayMs=3000

### Distro data verify interval time, verify synced data whether expired for a interval. Default 5 seconds.
# nacos.core.protocol.distro.data.verify.intervalMs=5000

### Distro data verify timeout for one verify, default 3 seconds.
# nacos.core.protocol.distro.data.verify.timeoutMs=3000

### Distro data load retry delay when load snapshot data failed, default 30 seconds.
# nacos.core.protocol.distro.data.load.retryDelayMs=30000

2.6.4 Start the Nacos container (can be copied and run as-is)

docker run \
--name nacos \
-d \
-p 8848:8848 \
-p 9848:9848 \
-p 9849:9849 \
--privileged=true \
-v /home/nacos/application.properties:/home/nacos/conf/application.properties \
--restart=always \
-e MODE=standalone \
-e PREFER_HOST_MODE=hostname \
nacos/nacos-server

Notes:
(1) \ is the line-continuation character.
(2) -v: the path left of the colon is the local directory (easy to edit); the right side is the directory inside the container. Once mapped, edits to the local file are reflected inside the container.
(3) The two extra ports are required by Nacos 2.x; see the official documentation for the full explanation. If they are not published, clients will fail with errors.
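On point (3): in Nacos 2.x the two extra ports are gRPC ports derived from the main port by fixed offsets (+1000 for the client channel, +1001 for server-to-server traffic), which is why 9848 and 9849 must be published alongside 8848:

```shell
main_port=8848
client_grpc=$((main_port + 1000))   # gRPC port used by clients
server_grpc=$((main_port + 1001))   # gRPC port used between server nodes
echo "publish: ${main_port} ${client_grpc} ${server_grpc}"
```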

2.6.5 Log in to Nacos

http://<ip>:8848/nacos
username: nacos
password: nacos
