Automating Helm deployments with a Bash script
Some of our applications are hosted in a Kubernetes cluster. We use GitLab continuous integration (CI) to automate deployments and Helm 2 to deploy our applications. A Helm chart stores templates of Kubernetes object YAML files, with variables that can be set programmatically via command-line arguments passed when the chart is used during a deployment. This lets us store critical secrets in GitLab protected environment variables or in HashiCorp Vault and use them in CI deployment jobs.
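A chart template can reference values supplied on the Helm command line, which is how secrets stay out of the repository. A minimal sketch, assuming a hypothetical chart layout and value name (neither is from the author's charts):

```yaml
# helm-chart/my-app/templates/secret.yaml (hypothetical)
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-app-secrets
type: Opaque
stringData:
  apiToken: {{ .Values.apiToken | quote }}
```

At deploy time, the CI job fills the value from a protected variable, e.g. `helm upgrade --install my-app ./helm-chart/my-app --set apiToken="${API_TOKEN}"`.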
Our deployment jobs use a Bash script to run the deployment process. This Bash script provides several features that are useful in a CI/CD environment:
- It facilitates use outside the CI/CD environment. GitLab CI and other CI systems store job steps as lines of executable shell code in the "script" sections of a CI text file (e.g., .gitlab-ci.yml). While this is useful for ensuring that basic executable steps can be stored without external dependencies, it prevents developers from using the same code in testing or manual-deployment scenarios. Also, many of Bash's more advanced features cannot easily be used in these script sections.
- It facilitates unit testing of important deployment logic. No CI system provides a way to test whether deployment logic performs as intended. A well-constructed Bash script can be unit tested with BATS.
- It promotes reuse of the individual functions in the script. The final section uses a guard clause, `if [[ "${BASH_SOURCE[0]}" == "${0}" ]]`, to prevent the run_main function from being called when the script is not executed directly. This allows the script to be sourced, giving users access to the many useful individual functions within it. This is essential for proper BATS testing.
- It uses environment variables to protect sensitive information and to make the script reusable across many projects and application environments. GitLab CI makes many of these environment variables available when jobs are run by a GitLab CI runner. They must be set manually before the script is used outside of GitLab CI.
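The guard-clause pattern from the list above can be sketched in a tiny standalone script; `greet` and this `run_main` are illustrative stand-ins, not functions from the deploy script:

```shell
#!/bin/bash
# greet is an illustrative function standing in for the deploy script's helpers.
greet() {
  echo "hello ${1}"
}

run_main() {
  greet "world"
}

# Guard clause: run_main executes only when the file is run directly,
# not when it is sourced by a BATS test or an interactive shell.
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]
then
  run_main
fi
```

A BATS test file can then `source` this file in its `setup` function and assert on individual functions such as `greet` without triggering `run_main`.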
The script performs all the tasks required to deploy an application's Helm chart to Kubernetes and waits for the deployment to complete, using kubectl and Helm. Helm runs with a local Tiller installation rather than with Tiller running in the Kubernetes cluster. The HELM_USER and HELM_TOKEN credentials are used to log in to the Kubernetes CLUSTER_SERVER and PROJECT_NAMESPACE. A local tiller is started, Helm is initialized in client-only mode, and its repositories are updated. The template is linted with Helm to ensure that syntax errors have not crept in. The template is then deployed declaratively with helm upgrade --install, and Helm waits for the deployment to become ready via the --wait flag.
Project-specific deployment arguments can be passed through the PROJECT_SPECIFIC_DEPLOY_ARGS environment variable. All environment variables required for the deployment are checked early in the script's execution; if any are missing, the script exits with a non-zero exit status. This script has been used in several GitLab CI-hosted projects. It has helped us focus on our code rather than on the deployment logic in each project.
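The name mangling that the script applies to each entry in PROJECT_SPECIFIC_DEPLOY_ARGS (double underscores become dots, and the result is lowercased) can be exercised on its own. `to_helm_key` is a hypothetical wrapper around the same pipeline used in `project_specific_deploy_args`:

```shell
# Hypothetical helper replicating the name mangling in project_specific_deploy_args:
# double underscores become dots, and the name is lowercased.
to_helm_key() {
  echo "${1}" | sed 's/__/\./g' | tr '[:upper:]' '[:lower:]'
}

to_helm_key "INGRESS__HOST"    # -> ingress.host
to_helm_key "REPLICA_COUNT"    # -> replica_count (single underscores survive)
```

So exporting `INGRESS__HOST=example.com` and listing `INGRESS__HOST` in PROJECT_SPECIFIC_DEPLOY_ARGS produces `--set ingress.host=example.com` on the helm command line.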
The script
```bash
#!/bin/bash
# MIT License
#
# Copyright (c) 2019 Darin London
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

log_level_for() {
  case "${1}" in
    "error")
      echo 1
      ;;
    "warn")
      echo 2
      ;;
    "debug")
      echo 3
      ;;
    "info")
      echo 4
      ;;
    *)
      echo -1
      ;;
  esac
}

current_log_level() {
  log_level_for "${LOG_LEVEL}"
}

error() {
  [ $(log_level_for "error") -le $(current_log_level) ] && echo "${1}" >&2
}

warn() {
  [ $(log_level_for "warn") -le $(current_log_level) ] && echo "${1}" >&2
}

debug() {
  [ $(log_level_for "debug") -le $(current_log_level) ] && echo "${1}" >&2
}

info() {
  [ $(log_level_for "info") -le $(current_log_level) ] && echo "${1}" >&2
}

check_required_environment() {
  local required_env="${1}"

  for reqvar in $required_env
  do
    if [ -z "${!reqvar}" ]
    then
      error "missing ENVIRONMENT ${reqvar}!"
      return 1
    fi
  done
}

check_default_environment() {
  local required_env="${1}"

  for varpair in $required_env
  do
    local manual_environment=$(echo "${varpair}" | cut -d':' -f1)
    local default_if_not_set=$(echo "${varpair}" | cut -d':' -f2)
    if [ -z "${!manual_environment}" ] && [ -z "${!default_if_not_set}" ]
    then
      error "missing default ENVIRONMENT, set ${manual_environment} or ${default_if_not_set}!"
      return 1
    fi
  done
}

dry_run() {
  [ ${DRY_RUN} ] && info "skipping for dry run" && return
  return 1
}

init_tiller() {
  info "initializing local tiller"
  dry_run && return
  export TILLER_NAMESPACE=$PROJECT_NAMESPACE
  export HELM_HOST=localhost:44134
  # https://rimusz.net/tillerless-helm/
  # run tiller locally instead of in the cluster
  tiller --storage=secret &
  export TILLER_PID=$!
  sleep 1
  kill -0 ${TILLER_PID}
  if [ $? -gt 0 ]
  then
    error "tiller not running!"
    return 1
  fi
}

init_helm() {
  info "initializing helm"
  dry_run && return
  helm init --client-only
  if [ $? -gt 0 ]
  then
    error "could not initialize helm"
    return 1
  fi
}

init_helm_with_tiller() {
  init_tiller || return 1
  init_helm || return 1
  info "updating helm client repository information"
  dry_run && return
  helm repo update
  if [ $? -gt 0 ]
  then
    error "could not update helm repository information"
    return 1
  fi
}

decommission_tiller() {
  if [ -n "${TILLER_PID}" ]
  then
    kill ${TILLER_PID}
    if [ $? -gt 0 ]
    then
      return
    fi
  fi
}

check_required_deploy_arg_environment() {
  [ -z "${PROJECT_SPECIFIC_DEPLOY_ARGS}" ] && return
  for reqvar in ${PROJECT_SPECIFIC_DEPLOY_ARGS}
  do
    if [ -z ${!reqvar} ]
    then
      error "missing Deployment ENVIRONMENT ${reqvar} required!"
      return 1
    fi
  done
}

project_specific_deploy_args() {
  [ -z "${PROJECT_SPECIFIC_DEPLOY_ARGS}" ] && echo "" && return
  extraArgs=''
  for deploy_arg_key in ${PROJECT_SPECIFIC_DEPLOY_ARGS}
  do
    extraArgs="${extraArgs} --set $(echo "${deploy_arg_key}" | sed 's/__/\./g' | tr '[:upper:]' '[:lower:]')=${!deploy_arg_key}"
  done
  echo "${extraArgs}"
}

check_required_cluster_login_environment() {
  check_required_environment "HELM_TOKEN HELM_USER PROJECT_NAMESPACE CLUSTER_SERVER" || return 1
}

cluster_login() {
  info "authenticating ${HELM_USER} in ${PROJECT_NAMESPACE}"
  dry_run && return
  kubectl config set-cluster ci_kube --server="${CLUSTER_SERVER}" || return 1
  kubectl config set-credentials "${HELM_USER}" --token="${HELM_TOKEN}" || return 1
  kubectl config set-context ${PROJECT_NAMESPACE}-deploy --cluster=ci_kube --namespace=${PROJECT_NAMESPACE} --user=${HELM_USER} || return 1
  kubectl config use-context ${PROJECT_NAMESPACE}-deploy || return 1
}

lint_template() {
  info "linting template"
  dry_run && return
  helm lint ${CI_PROJECT_DIR}/helm-chart/${CI_PROJECT_NAME}
}

check_required_image_pull_environment() {
  if [ "${CI_PROJECT_VISIBILITY}" == "public" ]
  then
    check_required_environment "CI_REGISTRY CI_DEPLOY_USER CI_DEPLOY_PASSWORD" || return 1
  fi
}

image_pull_settings() {
  if [ "${CI_PROJECT_VISIBILITY}" == "public" ]
  then
    echo ""
  else
    echo "--set registry.root=${CI_REGISTRY} --set registry.secret.username=${CI_DEPLOY_USER} --set registry.secret.password=${CI_DEPLOY_PASSWORD}"
  fi
}

deployment_name() {
  if [ -n "${DEPLOYMENT_NAME}" ]
  then
    echo "${DEPLOYMENT_NAME}"
  else
    echo "${CI_ENVIRONMENT_SLUG}-${CI_PROJECT_NAME}"
  fi
}

deploy_template() {
  info "deploying $(deployment_name) from template"
  if dry_run
  then
    info "helm upgrade --force --recreate-pods --debug --set image.repository=${CI_REGISTRY_IMAGE}/${CI_PROJECT_NAME} --set image.tag=${CI_COMMIT_SHORT_SHA} --set environment=${CI_ENVIRONMENT_NAME} --set-string git_commit=${CI_COMMIT_SHORT_SHA} --set git_ref=${CI_COMMIT_REF_SLUG} --set ci_job_id=${CI_JOB_ID} $(environment_url_settings) $(image_pull_settings) $(project_specific_deploy_args) --wait --install $(deployment_name) ${CI_PROJECT_DIR}/helm-chart/${CI_PROJECT_NAME}"
  else
    helm upgrade --force --recreate-pods --debug \
    --set image.repository="${CI_REGISTRY_IMAGE}/${CI_PROJECT_NAME}" \
    --set image.tag="${CI_COMMIT_SHORT_SHA}" \
    --set environment="${CI_ENVIRONMENT_NAME}" \
    --set-string git_commit="${CI_COMMIT_SHORT_SHA}" \
    --set git_ref="${CI_COMMIT_REF_SLUG}" \
    --set ci_job_id="${CI_JOB_ID}" \
    $(image_pull_settings) \
    $(project_specific_deploy_args) \
    --wait \
    --install $(deployment_name) ${CI_PROJECT_DIR}/helm-chart/${CI_PROJECT_NAME}
  fi
}

get_pods() {
  kubectl get pods -l ci_job_id="${CI_JOB_ID}"
}

watch_deployment() {
  local watch_deployment=$(deployment_name)
  if [ -n "${WATCH_DEPLOYMENT}" ]
  then
    watch_deployment="${WATCH_DEPLOYMENT}"
  fi
  info "waiting until deployment ${watch_deployment} is ready"
  dry_run && return
  kubectl rollout status deployment/${watch_deployment} -w || return 1
  sleep 5
  get_pods || return 1
  # see what has been deployed
  kubectl describe deployment -l app=${CI_PROJECT_NAME},environment=${CI_ENVIRONMENT_NAME},git_commit=${CI_COMMIT_SHORT_SHA} || return 1
  if [ -n "${CI_ENVIRONMENT_URL}" ]
  then
    kubectl describe service -l app=${CI_PROJECT_NAME},environment=${CI_ENVIRONMENT_NAME} || return 1
    kubectl describe route -l app=${CI_PROJECT_NAME},environment=${CI_ENVIRONMENT_NAME} || return 1
  fi
}

run_main() {
  check_required_environment "CI_PROJECT_NAME CI_PROJECT_DIR CI_COMMIT_REF_SLUG CI_REGISTRY_IMAGE CI_ENVIRONMENT_NAME CI_JOB_ID CI_COMMIT_SHORT_SHA" || return 1
  check_default_environment "WATCH_DEPLOYMENT:CI_ENVIRONMENT_SLUG" || return 1
  check_required_deploy_arg_environment || return 1
  check_required_cluster_login_environment || return 1
  check_required_image_pull_environment || return 1
  cluster_login
  if [ $? -gt 0 ]
  then
    error "could not login kubectl"
    return 1
  fi
  init_helm_with_tiller
  if [ $? -gt 0 ]
  then
    error "could not initialize helm"
    return 1
  fi
  lint_template
  if [ $? -gt 0 ]
  then
    error "linting failed"
    return 1
  fi
  deploy_template
  if [ $? -gt 0 ]
  then
    error "could not deploy template"
    return 1
  fi
  watch_deployment
  if [ $? -gt 0 ]
  then
    error "could not watch deployment"
    return 1
  fi
  decommission_tiller
  info "ALL Complete!"
  return
}

if [[ "${BASH_SOURCE[0]}" == "${0}" ]]
then
  run_main
  if [ $? -gt 0 ]
  then
    exit 1
  fi
fi
```
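To run the script outside GitLab CI, first export the variables that run_main checks; every value below is a placeholder for your own cluster and project, and the script filename is assumed:

```shell
# Hypothetical environment for a manual run outside GitLab CI;
# all values are placeholders for your own cluster and project.
export CI_PROJECT_NAME=my-app
export CI_PROJECT_DIR="${PWD}"
export CI_COMMIT_REF_SLUG=master
export CI_REGISTRY_IMAGE=registry.example.com/group/my-app
export CI_ENVIRONMENT_NAME=development
export CI_JOB_ID=manual-1
export CI_COMMIT_SHORT_SHA=abc1234
export HELM_USER=deployer
export HELM_TOKEN=changeme
export PROJECT_NAMESPACE=my-app-dev
export CLUSTER_SERVER=https://kubernetes.example.com:6443
export WATCH_DEPLOYMENT=my-app-deployment
export DRY_RUN=true   # print the actions instead of running them

# ./deploy.sh         # then execute the script (filename assumed)
```

With DRY_RUN set, each step logs what it would do and returns early, which is a safe way to verify the environment before a real deployment.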
Translated from: https://opensource.com/article/20/1/automating-helm-deployments-bash