How to fix a kubeadm k8s cluster after renaming the master and node hostnames (a pitfall journey)

After I changed the hostnames of every master and node machine in my k8s cluster, I found that kubectl get nodes still showed the old names:

[root@k8s-master1 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
k8s-master-m1   Ready    master   13h   v1.15.1
k8s-node-n1     Ready    <none>   13h   v1.15.1
k8s-node-n2     Ready    <none>   13h   v1.15.1

The hostnames have already been changed as follows:
k8s-master-m1->k8s-master1
k8s-node-n1->k8s-node1
k8s-node-n2->k8s-node2
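
For reference, a rename like this is typically done per machine with hostnamectl (a sketch, assuming a systemd-based distro like the CentOS-style hosts shown in this post):

hostnamectl set-hostname k8s-master1   # on the old k8s-master-m1
hostnamectl set-hostname k8s-node1     # on the old k8s-node-n1
hostnamectl set-hostname k8s-node2     # on the old k8s-node-n2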

So how do we get the node names in kubectl to pick up the new hostnames? Renaming a node inside a running k8s cluster is a real hassle; it is much easier to remove the node (or master) from the cluster, rename the host, and rejoin it. The rough per-node flow is sketched below.
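
In hindsight, the clean sequence for a single worker would look roughly like this (a sketch; the drain step is optional on a near-empty test cluster):

kubectl drain k8s-node-n1 --ignore-daemonsets   # on the master: evict workloads first
kubectl delete node k8s-node-n1                 # on the master: remove the node object
kubeadm reset                                   # on the node: wipe local kubeadm state
hostnamectl set-hostname k8s-node1              # on the node: apply the new hostname
kubeadm join <apiserver>:6443 --token <token> --discovery-token-ca-cert-hash <hash>   # rejoin
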
What I actually did, since the hosts were already renamed, was start by deleting the node objects:

[root@k8s-master1 ~]# kubectl delete node k8s-node-n1
node "k8s-node-n1" deleted
[root@k8s-master1 ~]# kubectl delete node k8s-node-n2
node "k8s-node-n2" deleted
[root@k8s-master1 ~]# kubectl delete node k8s-master-m1
node "k8s-master-m1" deleted
[root@k8s-master1 ~]# kubectl get nodes
No resources found.

All node objects are gone. Now check the CSRs:

[root@k8s-master1 ~]# kubectl get csr
The connection to the server 192.168.32.29:6443 was refused - did you specify the right host or port?

Now kubectl cannot reach the API server at 192.168.32.29:6443 at all (connection refused).
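
Before wiping anything, a quick look at what is still running can help narrow this down (a sketch; assumes the Docker runtime and systemd setup used here):

systemctl status kubelet                   # is kubelet still running?
docker ps --filter name=kube-apiserver     # is the kube-apiserver container still up?
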
Run kubeadm reset to wipe all of the cluster configuration on the master:

[root@k8s-master1 ~]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "k8s-master1" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
W0524 13:31:14.677131  127888 removeetcdmember.go:61] [reset] failed to remove etcd member: error syncing endpoints with etc: etcdclient: no available endpoints
.Please manually remove this etcd member using etcdctl
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

The output says the reset does not clean up kubeconfig files and they have to be removed manually, so delete the file by hand:

rm -rf $HOME/.kube/config

Running kubeadm reset again still prints the same notice:

[root@k8s-master1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0524 13:41:55.276661  128746 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Flushing iptables and running kubeadm reset again still did not get me anywhere:

[root@k8s-master1 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
[root@k8s-master1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0524 13:45:44.825195  128954 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Following the hint, I also cleared the system's IPVS tables before resetting again, with the same result:

[root@k8s-master1 ~]# ipvsadm --clear
[root@k8s-master1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0524 13:50:29.415380  129229 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Time to bring out the big gun and rebuild the control plane! First, change nodeRegistration.name in kubeadm-config.yaml to the master's new hostname:

nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
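
For context, the surrounding config file looks roughly like this (a sketch using the kubeadm.k8s.io/v1beta2 API that ships with v1.15; everything outside nodeRegistration is an assumption based on values seen elsewhere in this post):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.32.29   # the API server address used in the join commands below
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  podSubnet: 10.244.0.0/16   # assumed: the default flannel pod CIDR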

Then run the following command to re-initialize the control plane:

kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
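
(Side note: kubeadm renamed this flag to --upload-certs around this release, so on a newer version the equivalent would be roughly:

kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log

On v1.15.1 the command above works as written.)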

Then run the usual kubeconfig setup; the master node is back, and it now uses the new hostname:

[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   master   53s   v1.15.1

Redeploy the kube-flannel network plugin — this only needs to be run on the master, after which the master goes Ready.
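
If kube-flannel.yml is no longer on disk, it can be fetched again (the usual URL from the flannel install docs of that era; adjust if your copy came from elsewhere):

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Then apply it: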

[root@k8s-master1 ~]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   8m15s   v1.15.1

With that deployed, check the pods; everything is running fine:

[root@k8s-master1 ~]# kubectl get pod -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-jbhww              1/1     Running   0          11m
coredns-5c98db65d4-jdqtn              1/1     Running   0          11m
etcd-k8s-master1                      1/1     Running   0          10m
kube-apiserver-k8s-master1            1/1     Running   0          10m
kube-controller-manager-k8s-master1   1/1     Running   0          10m
kube-flannel-ds-amd64-j4bp2           1/1     Running   0          4m4s
kube-proxy-svb9k                      1/1     Running   0          11m
kube-scheduler-k8s-master1            1/1     Running   0          10m

Next, join the worker nodes to the new master:

[root@k8s-node2 ~]# kubeadm join 192.168.32.29:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:799a1d11efdd0c092b8e3226e8d1c58f0ecaf3830e7cf587a13b0c4251fa7343
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.9. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-bootstrap-kubelet.conf]: /etc/kubernetes/bootstrap-kubelet.conf already exists
	[ERROR Port-10250]: Port 10250 is in use
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

This errors out because the node had already joined the old cluster and left its config files behind. Remove the old config files and kill the process holding port 10250:

[root@k8s-node2 ~]# netstat -lnp|grep 10250
tcp6       0      0 :::10250                :::*                    LISTEN      8674/kubelet        
[root@k8s-node2 ~]# kill -9 8674
[root@k8s-node2 ~]# rm -rf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/pki/ca.crt
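
(Aside: kubelet runs under systemd here, so systemctl stop kubelet is a cleaner way to free port 10250 than kill -9 — and the kubeadm reset below stops the service anyway.)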

Run kubeadm reset to clear all kubeadm state on this node:

[root@k8s-node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0524 15:40:46.463906   97562 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually.
For example:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

The same notice as on the master shows up here, so remove the $HOME/.kube/config file directly:

rm -rf $HOME/.kube/config

Then join the master again:

[root@k8s-node2 ~]# kubeadm join 192.168.32.29:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:799a1d11efdd0c092b8e3226e8d1c58f0ecaf3830e7cf587a13b0c4251fa7343
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.9. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

At this point the node has rejoined the new master; the other nodes are handled the same way.
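
Condensed, the same sequence on k8s-node1 would be roughly (reusing the token and ca-cert hash from above, and assuming the same leftover state as on k8s-node2):

kubeadm reset
rm -rf $HOME/.kube/config
kubeadm join 192.168.32.29:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:799a1d11efdd0c092b8e3226e8d1c58f0ecaf3830e7cf587a13b0c4251fa7343
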
Once every node is done, go back to the master and check the cluster state:

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   114m    v1.15.1
k8s-node1     Ready    <none>   10m     v1.15.1
k8s-node2     Ready    <none>   6m54s   v1.15.1

And check the CSRs again:

[root@k8s-master1 ~]# kubectl get csr
NAME        AGE   REQUESTOR                 CONDITION
csr-l7dnl   39m   system:bootstrap:abcdef   Approved,Issued
csr-m5bwx   42m   system:bootstrap:abcdef   Approved,Issued
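
Both CSRs were auto-approved. If one had stayed in Pending (it did not here), it could be approved by hand, for example:

kubectl certificate approve csr-l7dnl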

And with that, the hostname-renaming pitfall journey is complete.

