The lab environment is OpenShift 4.11 on an air-gapped network, so the steps below include downloading images; if your environment has internet access, skip those steps.
Deploying pgadmin4 with pgo
If pgo is already deployed in OpenShift, pgadmin can still be deployed with the pgo command after creating the PostgreSQL cluster; in the lab environment the pgo image version is ubi8-4.7.7.
Download the images
podman pull registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:ubi8-4.7.7
podman pull registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-13.8-4.7.7
podman save registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:ubi8-4.7.7 > crunchy-postgres-exporter_ubi8-4.7.7.tar
podman save registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-13.8-4.7.7 > crunchy-pgadmin4_ubi8-13.8-4.7.7.tar
podman tag registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:ubi8-4.7.7 registry.myopenshift.com:5000/crunchydata/crunchy-postgres-exporter:ubi8-4.7.7
podman tag registry.developers.crunchydata.com/crunchydata/crunchy-pgadmin4:ubi8-13.8-4.7.7 registry.myopenshift.com:5000/crunchydata/crunchy-pgadmin4:ubi8-13.8-4.7.7
podman push registry.myopenshift.com:5000/crunchydata/crunchy-postgres-exporter:ubi8-4.7.7
podman push registry.myopenshift.com:5000/crunchydata/crunchy-pgadmin4:ubi8-13.8-4.7.7
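The pull/save/tag/push sequence above repeats the same pattern for each image; the steps can be sketched as a small dry-run script that prints the plan (pipe the output to sh to actually run it). The registry hostnames are the ones used above; the save step runs on the connected host and the resulting tar is carried into the air-gapped environment:

```shell
#!/bin/sh
# Source and destination registries from the steps above.
SRC=registry.developers.crunchydata.com/crunchydata
DST=registry.myopenshift.com:5000/crunchydata

# mirror_cmds prints the podman commands for one image tag,
# so the plan can be reviewed (or piped to sh) before running.
mirror_cmds() {
  img=$1
  echo "podman pull ${SRC}/${img}"
  echo "podman save ${SRC}/${img} > $(echo "${img}" | tr ':' '_').tar"
  echo "podman tag ${SRC}/${img} ${DST}/${img}"
  echo "podman push ${DST}/${img}"
}

mirror_cmds crunchy-postgres-exporter:ubi8-4.7.7
mirror_cmds crunchy-pgadmin4:ubi8-13.8-4.7.7
```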
Deploy pgadmin
pgo create cluster hippo --username=hippo --password=datalake
pgo create pgadmin hippo
# or, with explicit storage settings:
pgo create pgadmin hippo --storage-config=gce --pvc-size=1G
This automatically creates a pod hippo-pgadmin-xxxx and a service hippo-pgadmin.
Log in to pgadmin
kubectl port-forward svc/hippo-pgadmin 5050:5050
# After port-forwarding, logging in to pgadmin4 at http://localhost:5050 works, but adding a PG server fails; this is suspected to be an image bug, pending a fix in a newer version.
After `pgo create pgadmin`, logging in to the pgadmin4 web page and connecting to a server fails with:
'psycopg2.extensions.Column' object has no attribute '_asdict'
Workaround (not verified):
workon venv
pip uninstall psycopg2 -y
pip install psycopg2==2.7.7
# This looks like a bug; check again in a newer version.
In the end, pgadmin was deployed standalone instead; deploying it with a helm chart succeeded.
Deploying pgadmin4 with helm
Source of the helm chart
The chart comes from part of an existing project's code; unpack pgadmin4-1.17.0.tgz:
[root@inftjv2 helm]# tree pgadmin4/
pgadmin4/
├── Chart.yaml
├── examples
│ ├── add-oauth2-config.yaml
│ ├── add-pgpass-file.yaml
│ ├── enable-ldap-integration.yaml
│ ├── ingress-with-outside-tls-termination.yaml
│ └── set-admin-creds.yaml
├── README.md
├── templates
│ ├── auth-secret.yaml
│ ├── deployment.yaml
│ ├── extra-list.yaml
│ ├── _helpers.tpl
│ ├── hpa.yaml
│ ├── ingress.yaml
│ ├── networkpolicy.yaml
│ ├── NOTES.txt
│ ├── pvc.yaml
│ ├── server-definitions-configmap.yaml
│ ├── server-definitions-secret.yaml
│ ├── serviceaccount.yaml
│ ├── service.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml

3 directories, 22 files
[root@inftjv2 helm]#
Edit values.yaml
Although there are quite a few files, only the image registry address actually needs to be changed:
...
image:
  registry: registry.myopenshift.com:5000  # change to the private image registry
  repository: dpage/pgadmin4
...
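The chart's values.yaml also carries an image tag; since the image pushed in the next step is dpage/pgadmin4:7.5, pinning the tag avoids the chart pulling its default version. A sketch, assuming the standard image block of this chart (verify the key names against your copy of values.yaml):

```yaml
image:
  registry: registry.myopenshift.com:5000   # private image registry
  repository: dpage/pgadmin4
  tag: "7.5"                # assumption: matches the tag pushed below
  pullPolicy: IfNotPresent
```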
Prepare the dpage/pgadmin4 image
podman save dpage/pgadmin4:7.5 > pgadmin4_7.5.tar.gz
podman load -i pgadmin4_7.5.tar.gz
podman tag cc-artifactory-docker-mirror.myopenshift.com:9443/external-docker-io/dpage/pgadmin4:7.5 registry.myopenshift.com:5000/dpage/pgadmin4:7.5
podman push registry.myopenshift.com:5000/dpage/pgadmin4:7.5
Install pgadmin4 with helm
helm install pgadmin4 ./pgadmin4/
[root@inftjv2 helm]# helm install pgadmin4 ./pgadmin4/
I1106 16:14:06.277415 510800 request.go:621] Throttling request took 1.179032821s, request: GET:https://api.sandbox.myopenshift.com:6443/apis/project.openshift.io/v1?timeout=32s
NAME: pgadmin4
LAST DEPLOYED: Mon Nov 6 16:14:11 2023
NAMESPACE: opsns
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace opsns -l "app.kubernetes.io/name=pgadmin4,app.kubernetes.io/instance=pgadmin4" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
[root@inftjv2 helm]#
Create a route
cat <<EOF | oc create -f -
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: pgadmin4
  namespace: opsns
spec:
  host: pgadmin4-opsns.apps.sandbox.myopenshift.com
  to:
    kind: Service
    name: pgadmin4
    weight: 100
  port:
    targetPort: http
  wildcardPolicy: None
EOF
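A quick sanity check after creating the route: rebuild the expected URL from the Route fields above and compare it with what the cluster reports (the oc/curl lines are left as comments since they need cluster access):

```shell
#!/bin/sh
# Route name, namespace and ingress domain from the Route manifest above.
APP=pgadmin4
NS=opsns
APPS_DOMAIN=apps.sandbox.myopenshift.com
URL="http://${APP}-${NS}.${APPS_DOMAIN}"
echo "$URL"
# On a host with cluster access:
#   oc get route ${APP} -n ${NS}       # is the route admitted?
#   curl -sI "${URL}" | head -n1       # expect 200, or a 302 to the login page
```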
Log in to pgadmin
http://pgadmin4-opsns.apps.sandbox.myopenshift.com
Account / password: chart@domain.com / SuperSecret
After logging in: Servers --> Register --> Server:
tab: Connection
host name/address: // if the svc type is ClusterIP, use the IP; if NodePort, use the node name
port: // if the svc type is ClusterIP, the default is 5432; if NodePort, use the mapped port
maintenance database: // name of the DB to connect to
username:
password:
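Instead of registering the server by hand each time, the server-definitions templates seen in the chart tree above suggest connections can be pre-seeded through values.yaml. A hedged sketch, assuming a serverDefinitions block whose keys follow pgadmin's servers.json format (check the chart's README and examples for the exact structure); the hippo service name, namespace and username come from the pgo steps earlier:

```yaml
serverDefinitions:
  enabled: true
  servers:
    hippo:
      Name: "hippo"
      Group: "Servers"
      Host: "hippo.opsns.svc"   # assumption: ClusterIP service of the hippo cluster
      Port: 5432
      Username: "hippo"
      SSLMode: "prefer"
      MaintenanceDB: "postgres"
```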