Installing GaussDB (openGauss 7.0.0-RC1) with Docker

Official deployment documentation: https://docs.opengauss.org/zh/docs/7.0.0-RC1/docs/InstallationGuide/容器镜像安装.html

Pull the image

docker pull registry.cn-hangzhou.aliyuncs.com/qiluo-images/opengauss:latest

Start and configure the container

docker run --name OpenGauss --privileged=true --restart=always -u root -p 15432:5432 -e GS_PASSWORD=QYuY482wasErOP1Q -v /etc/localtime:/etc/localtime -v /data/OpenGauss:/var/lib/opengauss registry.cn-hangzhou.aliyuncs.com/qiluo-images/opengauss:latest
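Once the container is up, you can confirm it is healthy and open a SQL shell inside it. A minimal sketch; it assumes the image ships the gsql client and runs the database under the omm user, as common openGauss images do:

```shell
# Confirm the container is running and the 15432->5432 mapping took effect
docker ps --filter name=OpenGauss

# Check recent startup logs (the first start initializes the data directory)
docker logs --tail 50 OpenGauss

# Open an interactive SQL shell inside the container
docker exec -it OpenGauss su - omm -c "gsql -d postgres"
```

From the host, clients connect to 127.0.0.1:15432 instead.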

The account and password were shown as screenshots in the original post (not reproduced here); the database password is the GS_PASSWORD value passed above.

Copy the configuration file out of the container for editing

docker cp OpenGauss:/var/lib/opengauss/data/postgresql.conf /data/postgresql.conf

Copy it back into the container

docker cp /data/postgresql.conf OpenGauss:/var/lib/opengauss/data/postgresql.conf
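After copying the edited file back, the server must pick up the changes. Restarting the container is the simplest route and covers restart-only parameters; for reloadable parameters, gs_ctl reload inside the container also works (the data directory path below is the one this image uses):

```shell
# Simplest: restart the container (required for restart-only parameters)
docker restart OpenGauss

# Alternative for reloadable parameters, without a restart
docker exec -it OpenGauss su - omm -c "gs_ctl reload -D /var/lib/opengauss/data"
```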

docker-compose.yml approach

version: '3.8'

services:
  opengauss:
    image: registry.cn-hangzhou.aliyuncs.com/qiluo-images/opengauss:latest
    container_name: OpenGauss
    environment:
      - GS_PASSWORD=QYuY482wasErOP1Q
    ports:
      - "15432:5432"
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/OpenGauss:/var/lib/opengauss
    restart: always
    privileged: true
    user: root

Run

docker-compose up -d
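To verify the compose deployment, the usual status and log commands apply:

```shell
docker-compose ps                         # container state and port mappings
docker-compose logs --tail 50 opengauss   # recent startup logs
```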

Step 1: Create a Persistent Volume (PV) and a Persistent Volume Claim (PVC)
First, create a Persistent Volume (PV) to store the openGauss data, and use a PVC to mount that storage into the container.

PV and PVC configuration:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: opengauss-pv
spec:
  capacity:
    storage: 10Gi  # size of the persistent storage
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce  # read-write by a single node
  persistentVolumeReclaimPolicy: Retain  # keep the data when the PVC is deleted
  hostPath:
    path: /data/OpenGauss  # host path where the data is stored
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: opengauss-pvc
spec:
  accessModes:
    - ReadWriteOnce  # request single-node read-write access
  resources:
    requests:
      storage: 10Gi  # requested storage size
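One gotcha worth noting: if the cluster has a default StorageClass, the PVC above may trigger dynamic provisioning instead of binding to the static hostPath PV. Adding an explicit empty storageClassName to both specs forces static binding (a sketch; adapt to your cluster):

```yaml
# Add under both the PV spec and the PVC spec
spec:
  storageClassName: ""   # opt out of dynamic provisioning; bind statically
```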

Step 2: Create the Deployment
The Deployment defines the openGauss container and mounts the PVC to keep the data persistent.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: opengauss-deployment
spec:
  replicas: 1  # run a single replica
  selector:
    matchLabels:
      app: opengauss
  template:
    metadata:
      labels:
        app: opengauss
    spec:
      containers:
      - name: opengauss
        image: registry.cn-hangzhou.aliyuncs.com/qiluo-images/opengauss:latest  # image
        ports:
        - containerPort: 5432  # port the container listens on
        env:
        - name: GS_PASSWORD
          value: "QYuY482wasErOP1Q"  # environment variable
        volumeMounts:
        - mountPath: /var/lib/opengauss  # path persisted by the volume
          name: opengauss-storage  # volume name
      volumes:
      - name: opengauss-storage
        persistentVolumeClaim:
          claimName: opengauss-pvc  # mount via the PVC
      restartPolicy: Always  # always restart
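Optionally, probes can be added to the container spec so Kubernetes only routes traffic once the database actually accepts connections. A hedged sketch using plain TCP checks on 5432 (the delays are illustrative; the first start initializes the data directory and can be slow):

```yaml
        # Add under the opengauss container definition
        readinessProbe:
          tcpSocket:
            port: 5432
          initialDelaySeconds: 30   # allow time for first-run initialization
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 5432
          initialDelaySeconds: 60
          periodSeconds: 20
```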

Step 3: Create the Service
The Service exposes the openGauss container to other services in the cluster, or to external clients.

apiVersion: v1
kind: Service
metadata:
  name: opengauss-service
spec:
  selector:
    app: opengauss  # select Pods with this label
  ports:
    - protocol: TCP
      port: 5432  # service port
      targetPort: 5432  # container port
      nodePort: 30432  # port for external access
  type: NodePort  # expose the service via NodePort

Step 4: Apply the Kubernetes resources
Save the above configuration to a file named opengauss-k8s.yml, then apply it with:

kubectl apply -f opengauss-k8s.yml

Step 5: Verify the deployment
Check that everything deployed successfully with:

kubectl get pods  # check Pod status
kubectl get svc   # check Service status
kubectl get pvc   # check PVC status

Persistent Volume (PV): hostPath stores the data on the host under /data/OpenGauss. In production you would typically use cloud storage instead (AWS EBS, Azure Disk, and so on).

Persistent Volume Claim (PVC): requests storage space from the PV.

Deployment: defines the openGauss container, sets the environment variables, mounts the PVC, and sets the container restart policy.

Service: provides network access to the container, either inside the cluster or externally.

nodePort: 30432: specifies the NodePort; Kubernetes exposes the openGauss service on port 30432 of every node.

type: NodePort: sets the service type to NodePort, meaning Kubernetes exposes the service on a port of every cluster node so it can be reached from outside the cluster.

Accessing via NodePort:

You can now reach the openGauss service at any cluster node's IP on port 30432.
If you are running on a cloud provider (AWS, Azure, etc.), also make sure your security groups or firewall rules allow traffic on that port.
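As a concrete connection sketch: openGauss refuses remote logins for the initial omm user, so a regular user has to be created first. The user name below (myuser) and <node-ip> are placeholders:

```shell
# Create a regular user inside the Pod (the initial user cannot log in remotely)
kubectl exec -it deploy/opengauss-deployment -- su - omm -c \
  "gsql -d postgres -c \"CREATE USER myuser WITH PASSWORD 'QYuY482wasErOP1Q';\""

# Connect from outside the cluster through the NodePort
# (gsql's -W flag takes the password as an argument)
gsql -d postgres -h <node-ip> -p 30432 -U myuser -W 'QYuY482wasErOP1Q'
```

If the connection is rejected, check listen_addresses and pg_hba.conf in the data directory; the image normally preconfigures remote host access, but that is image-specific.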

Using postgresql.conf with Kubernetes
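In Kubernetes the usual way to inject this file is a ConfigMap mounted over the default location with subPath, so the rest of the data directory on the PVC is not shadowed. A sketch (the object names are illustrative; only the first lines of the file are shown):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: opengauss-conf
data:
  postgresql.conf: |
    max_connections = 200000
    shared_buffers = 1024MB
    # ... paste the rest of the file below ...
---
# In the Deployment's container spec, mount just this one file:
#   volumeMounts:
#   - name: opengauss-conf
#     mountPath: /var/lib/opengauss/data/postgresql.conf
#     subPath: postgresql.conf
# and declare the volume:
#   volumes:
#   - name: opengauss-conf
#     configMap:
#       name: opengauss-conf
```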

# -----------------------------------------------------------------------------
#
# postgresql_single.conf.sample
#      Configuration file for centralized environment
#
# Portions Copyright (c) 1996-2012, PostgreSQL Global Development Group
#
#
# IDENTIFICATION
#      src/common/backend/utils/misc/postgresql_single.conf.sample
#
#
# This file consists of lines of the form:
#
#   name = value
#
# (The "=" is optional.)  Whitespace may be used.  Comments are introduced with
# "#" anywhere on a line.  The complete list of parameter names and allowed
# values can be found in the openGauss documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal.  If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, or use "pg_ctl reload".  Some
# parameters, which are marked below, require a server shutdown and restart to
# take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on".  Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units:  kB = kilobytes        Time units:  ms  = milliseconds
#                MB = megabytes                     s   = seconds
#                GB = gigabytes                     min = minutes
#                                                   h   = hours
#                                                   d   = days
# -----------------------------------------------------------------------------


#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------

# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir'		# use data in another directory
					# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf'	# host-based authentication file
					# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf'	# ident configuration file
					# (change requires restart)

# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = ''			# write an extra PID file
					# (change requires restart)


#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------

# - Connection Settings -

#listen_addresses = 'localhost'		# what IP address(es) to listen on;
					# comma-separated list of addresses;
					# defaults to 'localhost'; use '*' for all
					# (change requires restart)
#local_bind_address = '0.0.0.0'
#port = 5432				# (change requires restart)
max_connections = 200000			# (change requires restart)
# Note:  Increasing max_connections costs ~400 bytes of shared memory per
# connection slot, plus lock space (see max_locks_per_transaction).
#sysadmin_reserved_connections = 3	# (change requires restart)
#unix_socket_directory = ''		# (change requires restart)
#unix_socket_group = ''			# (change requires restart)
#unix_socket_permissions = 0700		# begin with 0 to use octal notation
					# (change requires restart)
#light_comm = off			# whether to use light communication with nonblocking mode or latch

# - Security and Authentication -

#authentication_timeout = 1min		# 1s-600s
session_timeout = 10min			# allowed duration of any unused session, 0s-86400s(1 day), 0 is disabled 
#idle_in_transaction_session_timeout = 0    # Sets the maximum allowed idle time between queries, when in a transaction, 0 is disabled
#ssl = off				# (change requires restart)
#ssl_ciphers = 'ALL'			# allowed SSL ciphers
					# (change requires restart)
#ssl_cert_notify_time = 90		# 7-180 days
#ssl_renegotiation_limit = 0		# amount of data between renegotiations, no longer supported
#ssl_cert_file = 'server.crt'		# (change requires restart)
#ssl_key_file = 'server.key'		# (change requires restart)
#ssl_ca_file = ''			# (change requires restart)
#ssl_crl_file = ''			# (change requires restart)

# Kerberos and GSSAPI
#krb_server_keyfile = ''
#krb_srvname = 'postgres'		# (Kerberos only)
#krb_caseins_users = off

#modify_initial_password = false	#Whether to change the initial password of the initial user
#password_policy = 1			#Whether password complexity checks
#password_reuse_time = 60		#Whether the new password can be reused in password_reuse_time days
#password_reuse_max = 0			#Whether the new password can be reused
#password_lock_time = 1			#The account will be unlocked automatically after a specified period of time
#failed_login_attempts = 10		#Enter the wrong password reached failed_login_attempts times, the current account will be locked
#password_encryption_type = 2		#Password storage type, 0 is md5 for PG, 1 is sha256 + md5, 2 is sha256 only
#password_min_length = 8		#The minimal password length(6-999)
#password_max_length = 32		#The maximal password length(6-999)
#password_min_uppercase = 0		#The minimal upper character number in password(0-999)
#password_min_lowercase = 0		#The minimal lower character number in password(0-999)
#password_min_digital = 0		#The minimal digital character number in password(0-999)
#password_min_special = 0		#The minimal special character number in password(0-999)
#password_effect_time = 90d		#The password effect time(0-999)
#password_notify_time = 7d		#The password notify time(0-999)

# - TCP Keepalives -
# see "man 7 tcp" for details

#tcp_keepalives_idle = 0		# TCP_KEEPIDLE, in seconds;
					# 0 selects the system default
#tcp_keepalives_interval = 0		# TCP_KEEPINTVL, in seconds;
					# 0 selects the system default
#tcp_keepalives_count = 0		# TCP_KEEPCNT;
					# 0 selects the system default

#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------

# - Memory -
#memorypool_enable = false
#memorypool_size = 512MB

#enable_memory_limit = true
#max_process_memory = 12GB
#UDFWorkerMemHardLimit = 1GB

#enable_huge_pages = off      # (change requires restart)
#huge_page_size = 0     # make sure huge_page_size is valid for os. 0 as default.
                    # (change requires restart)
shared_buffers = 1024MB			# min 128kB
					# (change requires restart)
bulk_write_ring_size = 2GB		# for bulkload, max shared_buffers
#standby_shared_buffers_fraction = 0.3 #control shared buffers use in standby, 0.1-1.0
#temp_buffers = 8MB			# min 800kB
max_prepared_transactions = 200		# zero disables the feature
					# (change requires restart)
# Note:  Increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
# It is not advisable to set max_prepared_transactions nonzero unless you
# actively intend to use prepared transactions.
#work_mem = 64MB				# min 64kB
#maintenance_work_mem = 16MB		# min 1MB
#max_stack_depth = 2MB			# min 100kB

cstore_buffers = 512MB         #min 16MB

# - Disk -

#temp_file_limit = -1			# limits per-session temp file space
					# in kB, or -1 for no limit

#sql_use_spacelimit = -1                # limits for single SQL used space on single DN
					# in kB, or -1 for no limit

# - Kernel Resource Usage -

#max_files_per_process = 1000		# min 25
					# (change requires restart)


#shared_preload_libraries = ''   # (change requires restart. if timescaledb is used, add $libdir/timescaledb)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0ms		# 0-100 milliseconds
#vacuum_cost_page_hit = 1		# 0-10000 credits
#vacuum_cost_page_miss = 10		# 0-10000 credits
#vacuum_cost_page_dirty = 20		# 0-10000 credits
#vacuum_cost_limit = 200		# 1-10000 credits

# - Background Writer -

#bgwriter_delay = 10s			# 10-10000ms between rounds
#bgwriter_lru_maxpages = 100		# 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0		# 0-10.0 multiplier on buffers scanned/round

# - Asynchronous Behavior -

#effective_io_concurrency = 1		# 1-1000; 0 disables prefetching


#------------------------------------------------------------------------------
# WRITE AHEAD LOG
#------------------------------------------------------------------------------

# - Settings -

wal_level = hot_standby			# minimal, archive, hot_standby or logical
					# (change requires restart)
#fsync = on				# turns forced synchronization on or off
#synchronous_commit = on		# synchronization level;
					# off, local, remote_receive, remote_write, or on
					# It's global control for all transactions
					# It could not be modified by gs_ctl reload, unless use setsyncmode.

#wal_sync_method = fsync		# the default is the first option
					# supported by the operating system:
					#   open_datasync
					#   fdatasync (default on Linux)
					#   fsync
					#   fsync_writethrough
					#   open_sync
#full_page_writes = on			# recover from partial page writes
#wal_buffers = 16MB			# min 32kB
					# (change requires restart)
#wal_writer_delay = 200ms		# 1-10000 milliseconds

#commit_delay = 0			# range 0-100000, in microseconds
#commit_siblings = 5			# range 1-1000

# - Checkpoints -

#checkpoint_segments = 64		# in logfile segments, min 1, 16MB each
#checkpoint_timeout = 15min		# range 30s-1h
#checkpoint_completion_target = 0.5	# checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 5min		# 0 disables
#checkpoint_wait_timeout = 60s  # maximum time wait checkpointer to start

enable_incremental_checkpoint = on	# enable incremental checkpoint
incremental_checkpoint_timeout = 60s	# range 1s-1h
#pagewriter_sleep = 100ms		# dirty page writer sleep time, 0ms - 1h
enable_double_write = on		# enable double write

# - Archiving -

#archive_mode = off		# allows archiving to be done
				# (change requires restart)
#archive_command = ''		# command to use to archive a logfile segment
				# placeholders: %p = path of file to archive
				#               %f = file name only
				# e.g. 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'
#archive_timeout = 0		# force a logfile segment switch after this
				# number of seconds; 0 disables
#archive_dest = ''		# path to use to archive a logfile segment

#------------------------------------------------------------------------------
# REPLICATION
#------------------------------------------------------------------------------

# - heartbeat -
#datanode_heartbeat_interval = 1s         # The heartbeat interval of the standby nodes.
				 # The value is best configured less than half of 
				 # the wal_receiver_timeout and wal_sender_timeout.

# - Sending Server(s) -

# Set these on the master and on any standby that will send replication data.

max_wal_senders = 4		# max number of walsender processes
				# (change requires restart)
wal_keep_segments = 16		# in logfile segments, 16MB each normal, 1GB each in share storage mode; 0 disables
#wal_sender_timeout = 6s	# in milliseconds; 0 disables
enable_slot_log = off
max_replication_slots = 8

#max_changes_in_memory = 4096
#max_cached_tuplebufs = 8192

#replconninfo1 = ''		# replication connection information used to connect primary on standby, or standby on primary,
						# or connect primary or standby on secondary
						# The heartbeat thread will not start if not set localheartbeatport and remoteheartbeatport.
						# e.g. 'localhost=10.145.130.2 localport=12211 localheartbeatport=12214 remotehost=10.145.130.3 remoteport=12212 remoteheartbeatport=12215, localhost=10.145.133.2 localport=12213 remotehost=10.145.133.3 remoteport=12214'
#replconninfo2 = ''		# replication connection information used to connect secondary on primary or standby,
						# or connect primary or standby on secondary
						# e.g. 'localhost=10.145.130.2 localport=12311 localheartbeatport=12214 remotehost=10.145.130.4 remoteport=12312 remoteheartbeatport=12215, localhost=10.145.133.2 localport=12313 remotehost=10.145.133.4 remoteport=12314'
#replconninfo3 = ''             # replication connection information used to connect primary on standby, or standby on primary,
                                                # e.g. 'localhost=10.145.130.2 localport=12311 localheartbeatport=12214 remotehost=10.145.130.5 remoteport=12312 remoteheartbeatport=12215, localhost=10.145.133.2 localport=12313 remotehost=10.145.133.5 remoteport=12314'
#replconninfo4 = ''             # replication connection information used to connect primary on standby, or standby on primary,
                                                # e.g. 'localhost=10.145.130.2 localport=12311 localheartbeatport=12214 remotehost=10.145.130.6 remoteport=12312 remoteheartbeatport=12215, localhost=10.145.133.2 localport=12313 remotehost=10.145.133.6 remoteport=12314'
#replconninfo5 = ''             # replication connection information used to connect primary on standby, or standby on primary,
                                                # e.g. 'localhost=10.145.130.2 localport=12311 localheartbeatport=12214 remotehost=10.145.130.7 remoteport=12312 remoteheartbeatport=12215, localhost=10.145.133.2 localport=12313 remotehost=10.145.133.7 remoteport=12314'
#replconninfo6 = ''             # replication connection information used to connect primary on standby, or standby on primary,
                                                # e.g. 'localhost=10.145.130.2 localport=12311 localheartbeatport=12214 remotehost=10.145.130.8 remoteport=12312 remoteheartbeatport=12215, localhost=10.145.133.2 localport=12313 remotehost=10.145.133.8 remoteport=12314'
#replconninfo7 = ''             # replication connection information used to connect primary on standby, or standby on primary,
                                                # e.g. 'localhost=10.145.130.2 localport=12311 localheartbeatport=12214 remotehost=10.145.130.9 remoteport=12312 remoteheartbeatport=12215, localhost=10.145.133.2 localport=12313 remotehost=10.145.133.9 remoteport=12314'
#cross_cluster_replconninfo1 = ''             # replication connection information used to connect primary on primary cluster, or standby on standby cluster,
                                                # e.g. 'localhost=10.145.133.2 localport=12313 remotehost=10.145.133.9 remoteport=12314'
#cross_cluster_replconninfo2 = ''             # replication connection information used to connect primary on primary cluster, or standby on standby cluster,
                                                # e.g. 'localhost=10.145.133.2 localport=12313 remotehost=10.145.133.9 remoteport=12314'
#cross_cluster_replconninfo3 = ''             # replication connection information used to connect primary on primary cluster, or standby on standby cluster,
                                                # e.g. 'localhost=10.145.133.2 localport=12313 remotehost=10.145.133.9 remoteport=12314'
#cross_cluster_replconninfo4 = ''             # replication connection information used to connect primary on primary cluster, or standby on standby cluster,
                                                # e.g. 'localhost=10.145.133.2 localport=12313 remotehost=10.145.133.9 remoteport=12314'
#cross_cluster_replconninfo5 = ''             # replication connection information used to connect primary on primary cluster, or standby on standby cluster,
                                                # e.g. 'localhost=10.145.133.2 localport=12313 remotehost=10.145.133.9 remoteport=12314'
#cross_cluster_replconninfo6 = ''             # replication connection information used to connect primary on primary cluster, or standby on standby cluster,
                                                # e.g. 'localhost=10.145.133.2 localport=12313 remotehost=10.145.133.9 remoteport=12314'
#cross_cluster_replconninfo7 = ''             # replication connection information used to connect primary on primary cluster, or standby on standby cluster,
                                                # e.g. 'localhost=10.145.133.2 localport=12313 remotehost=10.145.133.9 remoteport=12314'

# - Master Server -

# These settings are ignored on a standby server.

synchronous_standby_names = '*'	# standby servers that provide sync rep
				# comma-separated list of application_name
				# from standby(s); '*' = all
#most_available_sync = off	# Whether master is allowed to continue
				# as standalone after sync standby failure
				# It's global control for all transactions
#vacuum_defer_cleanup_age = 0	# number of xacts by which cleanup is delayed
#data_replicate_buffer_size = 16MB	# data replication buffer size
walsender_max_send_size = 8MB  # Size of walsender max send size
#enable_data_replicate = on

# - Standby Servers -

# These settings are ignored on a master server.

hot_standby = on			# "on" allows queries during recovery
					# (change requires restart)
#max_standby_archive_delay = 30s	# max delay before canceling queries
					# when reading WAL from archive;
					# -1 allows indefinite delay
#max_standby_streaming_delay = 30s	# max delay before canceling queries
					# when reading streaming WAL;
					# -1 allows indefinite delay
#wal_receiver_status_interval = 5s	# send replies at least this often
					# 0 disables
#hot_standby_feedback = off		# send info from standby to prevent
					# query conflicts
#wal_receiver_timeout = 6s		# time that receiver waits for
					# communication from master
					# in milliseconds; 0 disables
#wal_receiver_connect_timeout = 1s	# timeout that receiver connect master
							# in seconds; 0 disables
#wal_receiver_connect_retries = 1	# max retries that receiver connect master
#wal_receiver_buffer_size = 64MB	# wal receiver buffer size
#enable_xlog_prune = on # keep xlog for all standbys even though they are not connected and have not created replication slots.
#max_size_for_xlog_prune = 2147483647  # keep xlog while the WAL size is below this limit when enable_xlog_prune is on
#max_logical_replication_workers = 4   # Maximum number of logical replication worker processes.
#max_sync_workers_per_subscription = 2   # Maximum number of table synchronization workers per subscription.
#max_size_xlog_force_prune = 0         # xlog size to be force recycled when the majority is satisfied, regardless of whether
                                       # the standby is connected or not, and whether there are residual replication slots

#------------------------------------------------------------------------------
# QUERY TUNING
#------------------------------------------------------------------------------

# - Planner Method Configuration -

#enable_bitmapscan = on
#enable_hashagg = on
#enable_sortgroup_agg = off
#enable_hashjoin = on
#enable_indexscan = on
#enable_indexonlyscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
enable_kill_query = off			# optional: [on, off], default: off
# - Planner Cost Constants -

#seq_page_cost = 1.0			# measured on an arbitrary scale
#random_page_cost = 4.0			# same scale as above
#cpu_tuple_cost = 0.01			# same scale as above
#cpu_index_tuple_cost = 0.005		# same scale as above
#cpu_operator_cost = 0.0025		# same scale as above
#effective_cache_size = 128MB
#var_eq_const_selectivity = off

# - Genetic Query Optimizer -

#geqo = on
#geqo_threshold = 12
#geqo_effort = 5			# range 1-10
#geqo_pool_size = 0			# selects default based on effort
#geqo_generations = 0			# selects default based on effort
#geqo_selection_bias = 2.0		# range 1.5-2.0
#geqo_seed = 0.0			# range 0.0-1.0

# - Other Planner Options -

#default_statistics_target = 100	# range 1-10000
#constraint_exclusion = partition	# on, off, or partition
#cursor_tuple_fraction = 0.1		# range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8		# 1 disables collapsing of explicit
					# JOIN clauses
#plan_mode_seed = 0         # range -1-0x7fffffff
#check_implicit_conversions = off
#enable_expr_fusion = off
#enable_functional_dependency = off
#enable_indexscan_optimization = off
#enable_inner_unique_opt = off

#------------------------------------------------------------------------------
# ERROR REPORTING AND LOGGING
#------------------------------------------------------------------------------

# - Where to Log -

#log_destination = 'stderr'		# Valid values are combinations of
					# stderr, csvlog, syslog, and eventlog,
					# depending on platform.  csvlog
					# requires logging_collector to be on.

# This is used when logging to stderr:
logging_collector = on   		# Enable capturing of stderr and csvlog
					# into log files. Required to be on for
					# csvlogs.
					# (change requires restart)

# These are only used if logging_collector is on:
#log_directory = 'pg_log'		# directory where log files are written,
					# can be absolute or relative to PGDATA
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'	# log file name pattern,
					# can include strftime() escapes
log_file_mode = 0600			# creation mode for log files,
					# begin with 0 to use octal notation
#log_truncate_on_rotation = off		# If on, an existing log file with the
					# same name as the new log file will be
					# truncated rather than appended to.
					# But such truncation only occurs on
					# time-driven rotation, not on restarts
					# or size-driven rotation.  Default is
					# off, meaning append to existing files
					# in all cases.
#log_rotation_age = 1d			# Automatic rotation of logfiles will
					# happen after that time.  0 disables.
log_rotation_size = 20MB		# Automatic rotation of logfiles will
					# happen after that much log output.
					# 0 disables.

# These are relevant when logging to syslog:
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'

# This is only relevant when logging to eventlog (win32):
#event_source = 'PostgreSQL'

# - When to Log -

#log_min_messages = warning		# values in order of decreasing detail:
					#   debug5
					#   debug4
					#   debug3
					#   debug2
					#   debug1
					#   info
					#   notice
					#   warning
					#   error
					#   log
					#   fatal
					#   panic

#log_min_error_statement = error	# values in order of decreasing detail:
				 	#   debug5
					#   debug4
					#   debug3
					#   debug2
					#   debug1
				 	#   info
					#   notice
					#   warning
					#   error
					#   log
					#   fatal
					#   panic (effectively off)

log_min_duration_statement = 1800000	# -1 is disabled, 0 logs all statements
					# and their durations, > 0 logs only
					# statements running at least this number
					# of milliseconds


# - What to Log -

#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_pagewriter = off
log_connections = off			# log connection requirement from client
log_disconnections = off		# log disconnection from client
log_duration = off			# log the execution time of each query
					# when log_duration is on and log_min_duration_statement
					# is larger than zero, log the ones whose execution time
					# is larger than this threshold
#log_error_verbosity = default		# terse, default, or verbose messages
log_hostname = off			# log hostname
log_line_prefix = '%m %u %d %h %p %S '	# special values:
					#   %a = application name
					#   %u = user name
					#   %d = database name
					#   %r = remote host and port
					#   %h = remote host
					#   %p = process ID
					#   %t = timestamp without milliseconds
					#   %m = timestamp with milliseconds
					#   %n = DataNode name
					#   %i = command tag
					#   %e = SQL state
					#   %c = logic thread ID
					#   %l = session line number
					#   %s = session start timestamp
					#   %v = virtual transaction ID
					#   %x = transaction ID (0 if none)
					#   %q = stop here in non-session
					#        processes
					#   %S = session ID
					#   %% = '%'
					# e.g. '<%u%%%d> '
#log_lock_waits = off			# log lock waits >= deadlock_timeout
#log_statement = 'none'			# none, ddl, mod, all
#log_temp_files = -1			# log temporary files equal or larger
					# than the specified size in kilobytes;
					# -1 disables, 0 logs all temp files
log_timezone = 'PRC'

#------------------------------------------------------------------------------
# ALARM
#------------------------------------------------------------------------------
enable_alarm = on
connection_alarm_rate = 0.9
alarm_report_interval = 10
alarm_component = '/opt/snas/bin/snas_cm_cmd'

#------------------------------------------------------------------------------
# RUNTIME STATISTICS
#------------------------------------------------------------------------------

# - Query/Index Statistics Collector -

#track_activities = on
#track_counts = on
#track_io_timing = off
#track_functions = none			# none, pl, all
#track_activity_query_size = 1024 	# (change requires restart)
#update_process_title = on
#stats_temp_directory = 'pg_stat_tmp'
#track_thread_wait_status_interval = 30min # 0 to disable
#track_sql_count = off
#enable_instr_track_wait = on

# - Statistics Monitoring -

#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off

#------------------------------------------------------------------------------
# WORKLOAD MANAGER
#------------------------------------------------------------------------------

use_workload_manager = on		# Enables workload manager in the system.
					# (change requires restart)
#------------------------------------------------------------------------------
# SECURITY POLICY
#------------------------------------------------------------------------------
#enable_security_policy = off
#use_elastic_search = off
#elastic_search_ip_addr = 'https://127.0.0.1' # what elastic search ip is, change https to http when elastic search is non-ssl mode


#cpu_collect_timer = 30

#------------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#------------------------------------------------------------------------------

#autovacuum = off			# Enable autovacuum subprocess?  default value is 'on'
					# requires track_counts to also be on.
#log_autovacuum_min_duration = -1	# -1 disables, 0 logs all actions and
					# their durations, > 0 logs only
					# actions running at least this number
					# of milliseconds.
#autovacuum_max_workers = 3		# max number of autovacuum subprocesses
					# (change requires restart)
#autovacuum_naptime = 1min		# time between autovacuum runs
#autovacuum_vacuum_threshold = 50	# min number of row updates before
					# vacuum
#autovacuum_analyze_threshold = 50	# min number of row updates before
					# analyze
#autovacuum_vacuum_scale_factor = 0.2	# fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1	# fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000	# maximum XID age before forced vacuum
					# (change requires restart)
#autovacuum_vacuum_cost_delay = 20ms	# default vacuum cost delay for
					# autovacuum, in milliseconds;
					# -1 means use vacuum_cost_delay
#autovacuum_vacuum_cost_limit = -1	# default vacuum cost limit for
					# autovacuum, -1 means use
					# vacuum_cost_limit

#------------------------------------------------------------------------------
# AI-based Optimizer
#------------------------------------------------------------------------------
# enable_ai_stats = on           # Enable AI Ext Statistics? default value is 'on'

#------------------------------------------------------------------------------
# CLIENT CONNECTION DEFAULTS
#------------------------------------------------------------------------------

# - Statement Behavior -
#client_min_messages = notice      # values in order of decreasing detail:
                   #   debug5
                   #   debug4
                   #   debug3
                   #   debug2
                   #   debug1
                   #   log
                   #   notice
                   #   warning
                   #   error
#search_path = '"$user",public'		# schema names
#default_tablespace = ''		# a tablespace name, '' uses the default
#temp_tablespaces = ''			# a list of tablespace names, '' uses
					# only default tablespace
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0			# in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#bytea_output = 'hex'			# hex, escape
#block_encryption_mode = 'aes-128-cbc'     #  values in order of decreasing detail:
    #  aes-128-cbc
    #  aes-192-cbc
    #  aes-256-cbc
    #  aes-128-cfb1
    #  aes-192-cfb1
    #  aes-256-cfb1
    #  aes-128-cfb8
    #  aes-192-cfb8
    #  aes-256-cfb8
    #  aes-128-cfb128
    #  aes-192-cfb128
    #  aes-256-cfb128
    #  aes-128-ofb
    #  aes-192-ofb
    #  aes-256-ofb
#xmlbinary = 'base64'
#xmloption = 'content'
#max_compile_functions = 1000
#gin_pending_list_limit = 4MB
#group_concat_max_len=1024
# - Locale and Formatting -

datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
timezone = 'PRC'
#timezone_abbreviations = 'Default'     # Select the set of available time zone
					# abbreviations.  Currently, there are
					#   Default
					#   Australia
					#   India
					# You can create your own file in
					# share/timezonesets/.
#extra_float_digits = 0			# min -15, max 3
#client_encoding = sql_ascii		# actually, defaults to database
					# encoding

# These settings are initialized by initdb, but they can be changed.
lc_messages = 'C'			# locale for system error message
					# strings
lc_monetary = 'C'			# locale for monetary formatting
lc_numeric = 'C'			# locale for number formatting
lc_time = 'C'				# locale for time formatting

# default configuration for text search
default_text_search_config = 'pg_catalog.english'

# - Other Defaults -

#dynamic_library_path = '$libdir'
#local_preload_libraries = ''

#------------------------------------------------------------------------------
# LOCK MANAGEMENT
#------------------------------------------------------------------------------

#deadlock_timeout = 1s
lockwait_timeout = 1200s		# Max of lockwait_timeout and deadlock_timeout + 1s
#max_locks_per_transaction = 256		# min 10
					# (change requires restart)
# Note:  Each lock table slot uses ~270 bytes of shared memory, and there are
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# lock table slots.
#max_pred_locks_per_transaction = 64	# min 10
					# (change requires restart)

#------------------------------------------------------------------------------
# VERSION/PLATFORM COMPATIBILITY
#------------------------------------------------------------------------------

# - Previous openGauss Versions -

#array_nulls = on
#backslash_quote = safe_encoding	# on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#sql_inheritance = on
#standard_conforming_strings = on
#synchronize_seqscans = on

# - Other Platforms and Clients -

#transform_null_equals = off

##------------------------------------------------------------------------------
# ERROR HANDLING
#------------------------------------------------------------------------------

#exit_on_error = off			# terminate session on any error?
#restart_after_crash = on		# reinitialize after backend crash?
#omit_encoding_error = off		# omit untranslatable character error
#data_sync_retry = off			# retry or panic on failure to fsync data?

#------------------------------------------------------------------------------
# DATA NODES AND CONNECTION POOLING
#------------------------------------------------------------------------------
#cache_connection = on          # pooler cache connection

#------------------------------------------------------------------------------
# GTM CONNECTION
#------------------------------------------------------------------------------

pgxc_node_name = 'gaussdb'			# Coordinator or Datanode name
					# (change requires restart)

##------------------------------------------------------------------------------
# OTHER PG-XC OPTIONS
#------------------------------------------------------------------------------
#enforce_two_phase_commit = on		# Enforce the usage of two-phase commit on transactions
					# where temporary objects are used or ON COMMIT actions
					# are pending.
					# Usage of commit instead of two-phase commit may break
					# data consistency so use at your own risk.

#------------------------------------------------------------------------------
# AUDIT
#------------------------------------------------------------------------------

audit_enabled = on
#audit_directory = 'pg_audit'
#audit_data_format = 'binary'
#audit_rotation_interval = 1d
#audit_rotation_size = 10MB
#audit_space_limit = 1024MB
#audit_file_remain_threshold = 1048576
#audit_login_logout = 7
#audit_database_process = 1
#audit_user_locked = 1
#audit_user_violation = 0
#audit_grant_revoke = 1
#audit_system_object = 12295
#audit_dml_state = 0
#audit_dml_state_select = 0
#audit_function_exec = 0
#audit_copy_exec = 0
#audit_set_parameter = 1		# whether audit set parameter operation
#audit_xid_info = 0 			# whether record xid info in audit log
#audit_thread_num = 1
#no_audit_client = ""
#full_audit_users = ""
#audit_system_function_exec = 0

#Choose which style to print the explain info, normal,pretty,summary,run
#explain_perf_mode = normal
#------------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#------------------------------------------------------------------------------

# Add settings for extensions here

# ENABLE DATABASE PRIVILEGES SEPARATE
#------------------------------------------------------------------------------
#enableSeparationOfDuty = off
#------------------------------------------------------------------------------


#enable_fast_allocate = off
#prefetch_quantity = 32MB
#backwrite_quantity = 8MB
#cstore_prefetch_quantity = 32768		#unit kb
#cstore_backwrite_quantity = 8192		#unit kb
#cstore_backwrite_max_threshold =  2097152		#unit kb
#fast_extend_file_size = 8192		#unit kb

#------------------------------------------------------------------------------
# LLVM
#------------------------------------------------------------------------------
#enable_codegen = off			# consider use LLVM optimization
#enable_codegen_print = off		# dump the IR function
#codegen_cost_threshold = 10000		# the threshold to allow use LLVM Optimization

#------------------------------------------------------------------------------
# JOB SCHEDULER OPTIONS
#------------------------------------------------------------------------------
job_queue_processes = 10        # Number of concurrent jobs, optional: [0..1000], default: 10.

#------------------------------------------------------------------------------
# DCF OPTIONS
#------------------------------------------------------------------------------
#enable_dcf = off
#
#------------------------------------------------------------------------------
# PLSQL COMPILE OPTIONS
#------------------------------------------------------------------------------
#plsql_show_all_error=off
#enable_seqscan_fusion = off
#enable_cachedplan_mgr=on
#enable_ignore_case_in_dquotes=off

#------------------------------------------------------------------------------
# SHARED STORAGE OPTIONS
#------------------------------------------------------------------------------
#ss_enable_dms = off
#ss_enable_dss = off
#ss_enable_ssl = on
#ss_enable_aio = on
#ss_enable_catalog_centralized = on
#ss_enable_dynamic_trace = on
#ss_enable_reform_trace = on
#ss_instance_id = 0
#ss_dss_data_vg_name = ''
#ss_dss_xlog_vg_name = ''
#ss_dss_conn_path = ''
#ss_interconnect_channel_count = 16
#ss_work_thread_count = 32
#ss_fi_packet_loss_prob = 10
#ss_fi_net_latency_ms = 10
#ss_fi_cpu_latency_ms = 10
#ss_fi_process_fault_prob = 10
#ss_fi_custom_fault_param = 3000
#ss_recv_msg_pool_size = 16MB
#ss_interconnect_type = 'TCP'
#ss_interconnect_url = '0:127.0.0.1:1611'
#ss_rdma_work_config = ''
#ss_ock_log_path = ''
#ss_enable_scrlock = off
#ss_enable_scrlock_sleep_mode = on
#ss_scrlock_server_port = 8000
#ss_scrlock_worker_count = 2
#ss_scrlock_worker_bind_core = ''
#ss_scrlock_server_bind_core = ''
#ss_log_level = 7
#ss_log_backup_file_count = 10
#ss_log_max_file_size = 10MB
#ss_parallel_thread_count = 16
#ss_enable_ondemand_recovery = off
#ss_ondemand_recovery_mem_size = 4GB     # min: 1GB, max: 100GB
#ss_enable_ondemand_realtime_build = off
#ss_enable_dorado = off
#ss_stream_cluster = off
#enable_segment = off
#ss_work_thread_pool_attr = ''
#ss_fi_packet_loss_entries = ''
#ss_fi_net_latency_entries = ''
#ss_fi_cpu_latency_entries = ''
#ss_fi_process_fault_entries = ''
#ss_fi_custom_fault_entries = ''


#------------------------------------------------------------------------------
# DOLPHIN OPTIONS
#------------------------------------------------------------------------------
dolphin.nulls_minimal_policy = on # the inverse of the default configuration value ! do not change !

#------------------------------------------------------------------------------
# UWAL OPTIONS
#------------------------------------------------------------------------------
#enable_uwal = off
#uwal_disk_size = 8589934592
#uwal_devices_path = 'uwal_device_file'
#uwal_log_path = 'uwal_log'
#uwal_rpc_compression_switch = false
#uwal_rpc_flowcontrol_switch = false
#uwal_rpc_flowcontrol_value = 128
#uwal_config = '{"uwal_nodeid": 0, "uwal_ip": "127.0.0.1", "uwal_port": 9991, "uwal_protocol": "tcp", "cpu_bind_switch": "true", "cpu_bind_start": 1, "cpu_bind_num": 3}'

# These settings must be set on the standby node when enable_uwal is on,
# and must be added to the ANY params in synchronous_standby_names on the primary node.

#application_name = 'dn_master'
#enable_nls = off
#wal_file_preinit_threshold = 100 # Threshold for pre-initializing xlogs, in percentages.

# use default port 5432
wal_level = logical
password_encryption_type = 1
listen_addresses = '0.0.0.0'

Adjust the connection limit as needed; the modified postgresql.conf is shown above.
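A minimal sketch of the entries typically involved when raising the connection limit; the values below are illustrative, not taken from the file above:

```conf
# Illustrative values only -- size these to your workload and available memory
max_connections = 500        # maximum concurrent connections (change requires restart)
session_timeout = 600        # seconds before an idle session is closed; 0 disables
```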

kubectl create configmap opengauss-config --from-file=postgresql.conf=/data/k8s-yaml/opengauss/postgresql.conf
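Creating the ConfigMap alone is not enough; it must also be mounted into the Deployment's Pod template. A sketch of the relevant section, assuming the container name and data path used earlier in this post:

```yaml
# Fragment of the opengauss-deployment Pod template (illustrative)
    spec:
      containers:
        - name: opengauss
          image: registry.cn-hangzhou.aliyuncs.com/qiluo-images/opengauss:latest
          volumeMounts:
            - name: config-volume
              mountPath: /var/lib/opengauss/data/postgresql.conf
              subPath: postgresql.conf   # mount only this file, keep the rest of the data dir
      volumes:
        - name: config-volume
          configMap:
            name: opengauss-config
```

The `subPath` mount replaces just postgresql.conf instead of shadowing the whole data directory.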

Restart the Pod
If the container is already running, restart the Pod to apply the new configuration:

kubectl rollout restart deployment opengauss-deployment

With this, the postgresql.conf file is mounted correctly into the container, and OpenGauss will pick up the new configuration at startup.

Then change the password:

ALTER USER gaussdb IDENTIFIED BY 'QYuY482wasErOP1Q' REPLACE 'QYuY482wasErOP1Q';
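In openGauss `ALTER USER`, `REPLACE` supplies the current password and `IDENTIFIED BY` the new one, so an actual password change uses two different values (the new password below is a placeholder):

```sql
-- 'NewStr0ngP@ss1' is a placeholder; REPLACE must match the current password
ALTER USER gaussdb IDENTIFIED BY 'NewStr0ngP@ss1' REPLACE 'QYuY482wasErOP1Q';
```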

To connect from a Java project, download the driver from https://opengauss.org/zh/download/ and upload it to your private Maven repository.


Upload it to your local private repository. Using the PostgreSQL JDBC driver also works.

<!-- openGauss driver -->
<dependency>
    <groupId>org.opengauss</groupId>
    <artifactId>opengauss-jdbc</artifactId>
    <version>7.0.0-RC1</version>
</dependency>
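If you prefer the stock PostgreSQL JDBC driver instead, the dependency would look roughly like this; the version shown is only an example:

```xml
<!-- Alternative: stock PostgreSQL JDBC driver (version is illustrative) -->
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.7.3</version>
</dependency>
```

With this driver, use `org.postgresql.Driver` and a `jdbc:postgresql://` URL; the `password_encryption_type = 1` setting in postgresql.conf above is what allows MD5-based authentication from PostgreSQL clients.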

Note: with the artifact resolved from Maven alone, org.opengauss.Driver could not be found, hence uploading the downloaded driver jar to the private repository.

spring:
  datasource:
    # Datasource type: HikariDataSource
    type: com.zaxxer.hikari.HikariDataSource
    # JDBC driver class
    driverClassName: org.opengauss.Driver
    # Database username (Jasypt-encrypted)
    username: ENC(4vW8vh3lJ0PdxhF8AIUlKoU07FiHxBpd)
    # Database password (Jasypt-encrypted)
    password: ENC(VEgTfZAG0gSiL8HKsiU0PIfh+R+qC2EmWZO2tbbGgVI=)
    # JDBC connection URL
    url: jdbc:opengauss://gauss.cqdx.com:31432/dynamic_v3_last?useCursorFetch=true&characterEncoding=utf8&ssl=false&serverTimezone=Asia/Shanghai&allowMultiQueries=true
    # HikariCP connection pool settings
    hikari:
      # Idle connection timeout in milliseconds
      idleTimeout: 60000 # maximum idle time for a connection (similar to Druid's time-between-eviction-runs-millis)
      # Connection acquisition timeout in milliseconds
      connectionTimeout: 5000  # maximum wait time to obtain a connection (similar to Druid's max-wait)
      # Maximum connection lifetime in milliseconds; 0 means no limit
      maxLifetime: 1800000  # default 30 minutes (similar to Druid's min-evictable-idle-time-millis)
      # Minimum number of idle connections
      minimumIdle: 50  # (Druid's min-idle)
      # Maximum pool size
      maximumPoolSize: 200  # maximum number of connections (Druid's max-active)
      # Auto-commit transactions
      autoCommit: true
      # Cache prepared statements
      cachePrepStmts: true
      # Prepared statement cache size
      prepStmtCacheSize: 2
      # Maximum SQL length for a cached statement
      prepStmtCacheSqlLimit: 2048
      # Use server-side prepared statements (for newer PostgreSQL-compatible servers)
      useServerPrepStmts: true
      # Connection test query
      connectionTestQuery: SELECT 1
      # Pool name used in logging
      poolName: HikariPool-1