Chapter 10: KubeSphere 3.3.0 + FastDFS 6.0.8 Single-Node Deployment




Preface

FastDFS is an open-source distributed file system. Its main features include file storage, file synchronization, and file access (upload and download), and it addresses the problems of high-capacity storage and high-performance access. FastDFS is particularly well suited to file-centric online services such as images, videos, and documents. This article walks through a deployment using KubeSphere 3.3.0 + FastDFS 6.0.8.


1. Obtain a Fixed Domain Name

1. Create an empty service (just create it directly with defaults)

Image: wangzhen01/fastdfs:v6.0.8
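
If you want to pull or look at the image outside of KubeSphere first, a quick optional check from any Docker host looks like this (purely illustrative):

# Optional: pre-pull the image and inspect which ports it declares
docker pull wangzhen01/fastdfs:v6.0.8
docker image inspect wangzhen01/fastdfs:v6.0.8 --format '{{.Config.ExposedPorts}}'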


2. Get the domain name from the terminal


Ping the DNS name:


The resulting domain name is:
fastdfs-v1-0.fastdfs.jianzhubao.svc.cluster.local
You can prepare a few more domain names here for later use in a cluster:
fastdfs-v1-1.fastdfs.jianzhubao.svc.cluster.local
fastdfs-v1-2.fastdfs.jianzhubao.svc.cluster.local

Once you have the domain names, the service can be deleted in preparation for the real deployment.
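
Before deleting the placeholder service, it is worth confirming from the pod's terminal that the names actually resolve. A minimal check, using the namespace (jianzhubao) and service name (fastdfs) shown above; whether ping/nslookup exist depends on the image:

# Run inside the pod's terminal (KubeSphere web terminal or kubectl exec).
# The name pattern is <pod>.<service>.<namespace>.svc.cluster.local.
ping -c 3 fastdfs-v1-0.fastdfs.jianzhubao.svc.cluster.local

# Or resolve it without sending traffic, if nslookup is available in the image:
nslookup fastdfs-v1-0.fastdfs.jianzhubao.svc.cluster.local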


2. Create the Configuration Files

There are several configuration files:
storage.conf
tracker.conf
mod_fastdfs.conf
client.conf

Note: the author did not set up a cluster; a single-node deployment is used here. (The four files are later combined into one configuration dictionary / ConfigMap; a kubectl sketch follows.)
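
In KubeSphere these four files end up in a ConfigMap that is later mounted into /etc/fdfs. A rough kubectl equivalent, assuming the files are saved in the current directory and the ConfigMap is named fastdfs-config (the name is an assumption; the UI lets you pick your own):

# Create one ConfigMap holding all four FastDFS config files
# (project/namespace "jianzhubao" as in the screenshots).
kubectl -n jianzhubao create configmap fastdfs-config \
  --from-file=storage.conf \
  --from-file=tracker.conf \
  --from-file=mod_fastdfs.conf \
  --from-file=client.conf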

2.1 storage.conf configuration

  • The main changes in storage.conf are the tracker_server entries below; without a cluster, only one tracker_server needs to be configured:

tracker_server=fastdfs-v1-0.fastdfs.jianzhubao.svc.cluster.local:22122
tracker_server=fastdfs-v1-1.fastdfs.jianzhubao.svc.cluster.local:22122
tracker_server=fastdfs-v1-2.fastdfs.jianzhubao.svc.cluster.local:22122


# is this config file disabled
# false for enabled
# true for disabled
disabled=false

# the name of the group this storage server belongs to
#
# comment or remove this item for fetching from tracker server,
# in this case, use_storage_id must set to true in tracker.conf,
# and storage_ids.conf must be configed correctly.
group_name=group1

# bind an address of this host
# empty for bind all addresses of this host
bind_addr=

# if bind an address of this host when connect to other servers 
# (this storage server as a client)
# true for binding the address configed by above parameter: "bind_addr"
# false for binding any address of this host
client_bind=true

# the storage server port
port=23000

# connect timeout in seconds
# default value is 30s
connect_timeout=10

# network timeout in seconds
# default value is 30s
network_timeout=60

# heart beat interval in seconds
heart_beat_interval=30

# disk usage report interval in seconds
stat_report_interval=60

# the base path to store data and log files
base_path=/home/dfs

# max concurrent connections the server supported
# default value is 256
# more max_connections means more memory will be used
# you should set this parameter larger, eg. 10240
max_connections=1024

# the buff size to recv / send data
# this parameter must more than 8KB
# default value is 64KB
# since V2.00
buff_size = 256KB

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# work thread deal network io
# default value is 4
# since V2.00
work_threads=4

# if disk read / write separated
##  false for mixed read and write
##  true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true

# disk reader thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1

# disk writer thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec=50

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval=0

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time=00:00

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time=23:59

# write to the mark file after sync N files
# default value is 500
write_mark_file_freq=500

# path(disk or mount point) count, default value is 1
store_path_count=1

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
store_path0=/home/dfs
#store_path1=/home/dfs2

# subdir_count  * subdir_count directories will be auto created under each 
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path=256

# tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
tracker_server=fastdfs-v1-0.fastdfs.jianzhubao.svc.cluster.local:22122
#tracker_server=fastdfs-v1-1.fastdfs.jianzhubao.svc.cluster.local:22122
#tracker_server=fastdfs-v1-2.fastdfs.jianzhubao.svc.cluster.local:22122


#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

#unix group name to run this program, 
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts=*

# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributted by hash code
file_distribute_path_mode=0

# valid when file_distribute_to_path is set to 0 (round robin), 
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval=10

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval=10

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval=300

# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size=512KB

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority=10

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix=

# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate=0

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method=hash

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive=0

# you can use "#include filename" (not include double quotes) directive to 
# load FastDHT server list, when the filename is a relative path such as 
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf

# if log to access log
# default value is false
# since V4.00
use_access_log = false

# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time=00:00

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record=false

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# use the ip address of this storage server if domain_name is empty,
# else this domain name will ocur in the url redirected by the tracker server
http.domain_name=

# the port of the web server on this storage server
http.server_port=8888




For a cluster deployment, multiple tracker_server entries would be configured (as in the commented-out lines above).

2.2 mod_fastdfs.conf configuration

  • The main changes in mod_fastdfs.conf are the tracker_server entries below; without a cluster, only one tracker_server needs to be configured:

tracker_server=fastdfs-v1-0.fastdfs.jianzhubao.svc.cluster.local:22122
tracker_server=fastdfs-v1-1.fastdfs.jianzhubao.svc.cluster.local:22122
tracker_server=fastdfs-v1-2.fastdfs.jianzhubao.svc.cluster.local:22122

# connect timeout in seconds
# default value is 30s
connect_timeout=2

# network recv and send timeout in seconds
# default value is 30s
network_timeout=30

# the base path to store log files
base_path=/tmp

# if load FastDFS parameters from tracker server
# since V1.12
# default value is false
load_fdfs_parameters_from_tracker=true

# storage sync file max delay seconds
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.12
# default value is 86400 seconds (one day)
storage_sync_file_max_delay = 86400

# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V1.13
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V1.13
storage_ids_filename = storage_ids.conf

# FastDFS tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
# valid only when load_fdfs_parameters_from_tracker is true
tracker_server=fastdfs-v1-0.fastdfs.jianzhubao.svc.cluster.local:22122
#tracker_server=fastdfs-v1-2.fastdfs.jianzhubao.svc.cluster.local:22122
#tracker_server=fastdfs-v1-3.fastdfs.jianzhubao.svc.cluster.local:22122

# the port of the local storage server
# the default value is 23000
storage_server_port=23000

# the group name of the local storage server
group_name=group1

# if the url / uri including the group name
# set to false when uri like /M00/00/00/xxx
# set to true when uri like ${group_name}/M00/00/00/xxx, such as group1/M00/xxx
# default value is false
url_have_group_name = true

# path(disk or mount point) count, default value is 1
# must same as storage.conf
store_path_count=1

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
# must same as storage.conf
store_path0=/home/dfs
#store_path1=/home/yuqing/fastdfs1

# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

# set the log filename, such as /usr/local/apache2/logs/mod_fastdfs.log
# empty for output to stderr (apache and nginx error_log file)
log_filename=

# response mode when the file not exist in the local file system
## proxy: get the content from other storage server, then send to client
## redirect: redirect to the original storage server (HTTP Header is Location)
response_mode=proxy

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# this paramter used to get all ip address of the local host
# default values is empty
if_alias_prefix=

# use "#include" directive to include HTTP config file
# NOTE: #include is an include directive, do NOT remove the # before include
#include http.conf


# if support flv
# default value is false
# since v1.15
flv_support = true

# flv file extension name
# default value is flv
# since v1.15
flv_extension = flv


# set the group count
# set to none zero to support multi-group on this storage server
# set to 0  for single group only
# groups settings section as [group1], [group2], ..., [groupN]
# default value is 0
# since v1.14
group_count = 0

# group settings for group #1
# since v1.14
# when support multi-group on this storage server, uncomment following section
#[group1]
#group_name=group1
#storage_server_port=23000
#store_path_count=2
#store_path0=/home/yuqing/fastdfs
#store_path1=/home/yuqing/fastdfs1

# group settings for group #2
# since v1.14
# when support multi-group, uncomment following section as neccessary
#[group2]
#group_name=group2
#storage_server_port=23000
#store_path_count=1
#store_path0=/home/yuqing/fastdfs



2.3 tracker.conf configuration

  • The main change in tracker.conf is the following:

store_group=group1

# is this config file disabled
# false for enabled
# true for disabled
disabled=false

# bind an address of this host
# empty for bind all addresses of this host
bind_addr=

# the tracker server port
port=22122

# connect timeout in seconds
# default value is 30s
connect_timeout=10

# network timeout in seconds
# default value is 30s
network_timeout=60

# the base path to store data and log files
base_path=/home/dfs

# max concurrent connections this server supported
# you should set this parameter larger, eg. 102400
max_connections=1024

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# default value is 4
# since V2.00
work_threads=4

# min buff size
# default value 8KB
min_buff_size = 8KB

# max buff size
# default value 128KB
max_buff_size = 128KB

# the method of selecting group to upload files
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
store_lookup=2

# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
store_group=group1

# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
# Note: if use_trunk_file set to true, must set store_server to 1 or 2
store_server=0

# which path(means disk or mount point) of the storage server to upload file
# 0: round robin
# 2: load balance, select the max free space path to upload file
store_path=0

# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
download_server=0

# reserved storage space for system or other applications.
# if the free(available) space of any stoarge server in 
# a group <= reserved_storage_space, 
# no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as reserved_storage_space = 10%
reserved_storage_space = 1%

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

#unix group name to run this program, 
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts=*

# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval = 10

# check storage server alive interval seconds
check_active_interval = 120

# thread stack size, should >= 64KB
# default value is 64KB
thread_stack_size = 64KB

# auto adjust when the ip address of the storage server changed
# default value is true
storage_ip_changed_auto_adjust = true

# storage sync file max delay seconds
# default value is 86400 seconds (one day)
# since V2.00
storage_sync_file_max_delay = 86400

# the max time of storage sync a file
# default value is 300 seconds
# since V2.00
storage_sync_file_max_time = 300

# if use a trunk file to store several small files
# default value is false
# since V3.00
use_trunk_file = false 

# the min slot size, should <= 4KB
# default value is 256 bytes
# since V3.00
slot_min_size = 256

# the max slot size, should > slot_min_size
# store the upload file to trunk file when it's size <=  this value
# default value is 16MB
# since V3.00
slot_max_size = 16MB

# the trunk file size, should >= 4MB
# default value is 64MB
# since V3.00
trunk_file_size = 64MB

# if create trunk file advancely
# default value is false
# since V3.06
trunk_create_file_advance = false

# the time base to create trunk file
# the time format: HH:MM
# default value is 02:00
# since V3.06
trunk_create_file_time_base = 02:00

# the interval of create trunk file, unit: second
# default value is 38400 (one day)
# since V3.06
trunk_create_file_interval = 86400

# the threshold to create trunk file
# when the free trunk file size less than the threshold, will create 
# the trunk files
# default value is 0
# since V3.06
trunk_create_file_space_threshold = 20G

# if check trunk space occupying when loading trunk free spaces
# the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: set this parameter to true will slow the loading of trunk spaces 
# when startup. you should set this parameter to true when neccessary.
trunk_init_check_occupying = false

# if ignore storage_trunk.dat, reload from trunk binlog
# default value is false
# since V3.10
# set to true once for version upgrade when your version less than V3.10
trunk_init_reload_from_binlog = false

# the min interval for compressing the trunk binlog file
# unit: second
# default value is 0, 0 means never compress
# FastDFS compress the trunk binlog when trunk init and trunk destroy
# recommand to set this parameter to 86400 (one day)
# since V5.01
trunk_compress_binlog_min_interval = 0

# if use storage ID instead of IP address
# default value is false
# since V4.00
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# since V4.00
storage_ids_filename = storage_ids.conf

# id type of the storage server in the filename, values are:
## ip: the ip address of the storage server
## id: the server id of the storage server
# this paramter is valid only when use_storage_id set to true
# default value is ip
# since V4.03
id_type_in_filename = ip

# if store slave file use symbol link
# default value is false
# since V4.01
store_slave_file_use_link = false

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# HTTP port on this tracker server
http.server_port=8080

# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval=30

# check storage HTTP server alive type, values are:
#   tcp : connect to the storge server with HTTP port only, 
#        do not request and get response
#   http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type=tcp

# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri=/status.html



2.4 client.conf configuration

  • The main changes in client.conf are the tracker_server entries below; without a cluster, only one tracker_server needs to be configured:

tracker_server=fastdfs-v1-0.fastdfs.jianzhubao.svc.cluster.local:22122
tracker_server=fastdfs-v1-1.fastdfs.jianzhubao.svc.cluster.local:22122
tracker_server=fastdfs-v1-2.fastdfs.jianzhubao.svc.cluster.local:22122

# connect timeout in seconds
# default value is 30s
connect_timeout=30

# network timeout in seconds
# default value is 30s
network_timeout=60

# the base path to store log files
base_path=/home/dfs

# tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
tracker_server=fastdfs-v1-0.fastdfs.jianzhubao.svc.cluster.local:22122
#tracker_server=fastdfs-v1-1.fastdfs.jianzhubao.svc.cluster.local:22122
#tracker_server=fastdfs-v1-2.fastdfs.jianzhubao.svc.cluster.local:22122

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker=false

# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V4.05
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V4.05
storage_ids_filename = storage_ids.conf


#HTTP settings
http.tracker_server_port=8080

#use "#include" directive to include HTTP other settiongs
##include http.conf






3. Create the FastDFS Service

3.1 Create a stateful service (StatefulSet)


3.2 Create the container group (pod)


3.3 Create the storage volume and configuration mounts

3.3.1 Add the storage volume


Volume mount path:
/home/dfs


3.3.2 Mount the configuration files


1. Mount storage.conf

Mount path: /etc/fdfs/storage.conf
Subpath: storage.conf


2. Mount mod_fastdfs.conf

Mount path: /etc/fdfs/mod_fastdfs.conf
Subpath: mod_fastdfs.conf

3. Mount tracker.conf

Mount path: /etc/fdfs/tracker.conf
Subpath: tracker.conf

4. Mount client.conf

Mount path: /etc/fdfs/client.conf
Subpath: client.conf

5. Start creation (a rough manifest equivalent of what the wizard builds is sketched below)
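
For reference, what the wizard in 3.1–3.3 builds corresponds roughly to the manifests below. This is only a sketch: the labels, the ConfigMap name (fastdfs-config) and the PVC size are assumptions, while the image, ports, mount paths and subPaths follow the steps above.

cat <<'EOF' | kubectl -n jianzhubao apply -f -
apiVersion: v1
kind: Service
metadata:
  name: fastdfs                  # headless service -> fastdfs-v1-0.fastdfs.jianzhubao.svc.cluster.local
spec:
  clusterIP: None
  selector:
    app: fastdfs-v1              # assumed label; KubeSphere generates its own
  ports:
  - { name: tracker, port: 22122 }
  - { name: storage, port: 23000 }
  - { name: http,    port: 8888 }
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: fastdfs-v1
spec:
  serviceName: fastdfs
  replicas: 1
  selector:
    matchLabels: { app: fastdfs-v1 }
  template:
    metadata:
      labels: { app: fastdfs-v1 }
    spec:
      containers:
      - name: fastdfs
        image: wangzhen01/fastdfs:v6.0.8
        ports:
        - { containerPort: 22122 }
        - { containerPort: 23000 }
        - { containerPort: 8888 }
        volumeMounts:
        # data volume for tracker/storage data and logs
        - { name: fastdfs-pvc, mountPath: /home/dfs }
        # each config file is mounted individually via subPath
        - { name: conf, mountPath: /etc/fdfs/storage.conf,     subPath: storage.conf }
        - { name: conf, mountPath: /etc/fdfs/mod_fastdfs.conf, subPath: mod_fastdfs.conf }
        - { name: conf, mountPath: /etc/fdfs/tracker.conf,     subPath: tracker.conf }
        - { name: conf, mountPath: /etc/fdfs/client.conf,      subPath: client.conf }
      volumes:
      - name: conf
        configMap: { name: fastdfs-config }   # the ConfigMap from section 2
  volumeClaimTemplates:
  - metadata:
      name: fastdfs-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources: { requests: { storage: 10Gi } }   # size is an assumption
EOF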

3.4 Verify that the deployment succeeded

Verification command: /usr/bin/fdfs_monitor /etc/fdfs/storage.conf
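
A minimal way to run the check from a workstation with kubectl access (pod name and namespace assumed from the earlier steps):

# Run the monitor from outside the pod (or drop the kubectl prefix when
# already inside the KubeSphere web terminal).
kubectl -n jianzhubao exec -it fastdfs-v1-0 -- \
  /usr/bin/fdfs_monitor /etc/fdfs/storage.conf
# A healthy node lists the tracker server and shows the storage server
# in state ACTIVE.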

3.5 Test uploading and viewing a file

3.5.1 Create a test file on node1

vi hello.html
hello , wangzhen

3.5.2 Upload the file:

/usr/bin/fdfs_test /etc/fdfs/client.conf upload hello.html

The file is uploaded successfully:

sh-4.2# /usr/bin/fdfs_test /etc/fdfs/client.conf upload hello.html
This is FastDFS client test program v6.08

Copyright (C) 2008, Happy Fish / YuQing

FastDFS may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page http://www.fastken.com/
for more detail.

[2022-08-08 11:00:25] DEBUG - base_path=/home/dfs, connect_timeout=30, network_timeout=60, tracker_server_count=1,anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

tracker_query_storage_store_list_without_group:
        server 1. group_name=, ip_addr=10.233.76.166, port=23000

group_name=group1, ip_addr=10.233.76.166, port=23000
storage_upload_by_filename
group_name=group1, remote_filename=M00/00/00/CulMpmLwfEmAKTxgAAAAERI0BvE07.html
source ip address: 10.233.76.166
file timestamp=2022-08-08 11:00:25
file size=17
file crc32=305399537
example file url: http://10.233.76.166:8080/group1/M00/00/00/CulMpmLwfEmAKTxgAAAAERI0BvE07.html

3.5.3 View the uploaded file:

1. Look for the uploaded file inside the pod

sh-4.2# cd /home/dfs/data/00/00
sh-4.2# ls
CulMpmLwfEmAKTxgAAAAERI0BvE07.html  CulMpmLwfEmAKTxgAAAAERI0BvE07_big.html

2. Check the data file on the external mount; the server-side (NFS) mount works:

[root@k8s-master01 00]# ls /nfs/data/jianzhubao-fastdfs-pvc-fastdfs-v1-0-pvc-517f0a90-9e5d-418c-b7d7-a1bf24b4fb7a/data/00/00
CulMpmLwfEmAKTxgAAAAERI0BvE07_big.html  CulMpmLwfEmAKTxgAAAAERI0BvE07.html
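
To confirm the file is also reachable over HTTP, it can be fetched from inside the cluster. The port depends on the web server baked into the image: the fdfs_test output above printed 8080, while storage.conf sets http.server_port=8888, so try whichever matches your image. A hedged sketch, run from any pod that has curl:

# Fetch the uploaded file through the storage HTTP endpoint;
# swap 8888 for 8080 if that is what your image actually serves.
curl http://fastdfs-v1-0.fastdfs.jianzhubao.svc.cluster.local:8888/group1/M00/00/00/CulMpmLwfEmAKTxgAAAAERI0BvE07.html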

4. Configure External Access

1. Select the workload

2. Configure the service settings


3. Start creation


4. Access from the browser succeeds (a rough manifest equivalent of this external service is sketched below)

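For reference, the external-access service created by the wizard corresponds roughly to a NodePort manifest like the one below (the service name and selector labels are assumptions; KubeSphere generates its own):

cat <<'EOF' | kubectl -n jianzhubao apply -f -
apiVersion: v1
kind: Service
metadata:
  name: fastdfs-external        # assumed name
spec:
  type: NodePort
  selector:
    app: fastdfs-v1             # must match the workload's labels
  ports:
  - { name: http,    port: 8888,  targetPort: 8888 }
  - { name: tracker, port: 22122, targetPort: 22122 }
  - { name: storage, port: 23000, targetPort: 23000 }
EOF
kubectl -n jianzhubao get svc fastdfs-external   # note the allocated node ports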


5. Summary

That wraps up the single-node FastDFS deployment on the KubeSphere management platform. Writing this took some effort, so don't forget to like, follow, and bookmark. See you in the next installment.


6. Links to Other Articles in This Series

Chapter 1: KubeSphere 3.3.0 + Seata 1.5.2 + Nacos 2.1.0 (Nacos cluster mode)
Chapter 2: KubeSphere 3.3.0 + Nacos 2.1.0 (cluster deployment)
Chapter 3: KubeSphere 3.3.0 + Sentinel 1.8.4 + Nacos 2.1.0 cluster deployment
Chapter 4: KubeSphere 3.3.0 + Redis 7.0.4 + Redis-Cluster cluster deployment
Chapter 5: KubeSphere 3.3.0 + MySQL 8.0.25 cluster deployment
Chapter 6: KubeSphere 3.3.0 installation + creating a cluster with KubeKey 2.2.1 (kk)
Chapter 7: KubeSphere 3.3.0 + MySQL 8.0 single-node deployment
Chapter 8: KubeSphere 3.3.0 + Redis 7.0.4 single-node deployment
Chapter 9: KubeSphere 3.3.0 + Nacos 2.1.0 single-node deployment
Chapter 10: KubeSphere 3.3.0 + FastDFS 6.0.8 deployment
Chapter 11: KubeSphere 3.4.1 + MinIO 2024.3.15 deployment
