Resolving a FastDFS Upload Outage Caused by an IP Change

1. The problem

The company provisioned a new leased line, and both our public and internal IP addresses changed. As a result, the file server went down completely: files could no longer be previewed or uploaded. The file server runs FastDFS in Docker containers.

First, install a command-completion tool:

yum install bash-completion -y

2. Troubleshooting: a look at what is actually inside FastDFS

2.1 Inspect the FastDFS Docker containers

[root@file-ser-1-225 3B]# docker ps
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS               NAMES
379036e21744        morunchang/fastdfs   "sh storage.sh"     22 hours ago        Up 22 hours                             storage
348d8b43bc3f        morunchang/fastdfs   "sh tracker.sh"     22 hours ago        Up 22 hours                             tracker

FastDFS has two parts: the tracker and the storage. The tracker is the scheduler, the storage is the worker. A remote client first connects to the tracker; the tracker picks a storage server and returns it; the client then sends the actual file operation to that storage server.
This also shows that FastDFS was designed from the start for distributed deployment.
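You can watch this two-step flow with the client tools shipped in the image. A quick sanity check, assuming /etc/fdfs/client.conf inside the container points tracker_server at your tracker (the file id printed below is illustrative):

docker exec -it storage /bin/bash
# inside the container: the client first asks the tracker for a storage
# server, then uploads the file to the storage it was given
echo hello > /tmp/hello.txt
fdfs_upload_file /etc/fdfs/client.conf /tmp/hello.txt
# prints a file id such as group1/M00/00/00/wKgBxxxxxxxxxxxx.txt; query it back with:
fdfs_file_info /etc/fdfs/client.conf group1/M00/00/00/wKgBxxxxxxxxxxxx.txt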
Search for the image on hub.docker.com:

https://hub.docker.com/search?context=explore&q=fastdfs

The morunchang/fastdfs image is fairly old. Its Docker Hub page documents how to run it:

docker pull morunchang/fastdfs
#Run as a tracker
docker run -d --name tracker --net=host morunchang/fastdfs sh tracker.sh
#Run as a storage server
docker run -d --name storage --net=host -e TRACKER_IP=<your tracker server address>:22122 -e GROUP_NAME=<group name> morunchang/fastdfs sh storage.sh

The tracker address and group name above are placeholders; set them to your own values. This is what I actually ran:

docker pull morunchang/fastdfs
#Run as a tracker
docker run -d --name tracker --net=host morunchang/fastdfs sh tracker.sh
#Run as a storage server
docker run -d --name storage --net=host -v /opt/fastdfs/data:/data/fast_data -v /etc/localtime:/etc/localtime -e TRACKER_IP=192.168.101.225:22122 -e GROUP_NAME=group1 morunchang/fastdfs sh storage.sh
[root@file-ser-1-225 3B]# docker logs -f --tail 100 storage

With the internal IP everything starts fine; as soon as I specified the public IP at startup, the storage container refused to start.
These were the main error messages during setup:
-----------response status 2 != 0
(the status field carries an errno; 2 is ENOENT, i.e. the tracker reported that something it was asked for does not exist)

[2022-07-06 09:23:33] ERROR - file: tracker_proto.c, line: 48, server: 192.168.101.225:22122, response status 2 != 0
[2022-07-06 09:23:33] ERROR - file: tracker_proto.c, line: 48, server: 192.168.101.225:22122, response status 2 != 0
[2022-07-06 09:23:33] ERROR - file: sockopt.c, line: 867, bind port 23000 failed, errno: 98, error info: Address already in use.
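The "bind port 23000 failed ... Address already in use" line usually just means an earlier fdfs_storaged process is still holding the port. Since the containers run with --net=host, you can check straight from the host:

ss -lntp | grep -E '22122|23000'
# if an orphaned fdfs_storaged shows up, restart the container cleanly
docker restart storage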

-----------Transport endpoint is not connected
121.218.45.164:22502 is the public address I specified when starting the container; it maps to port 22122 on the internal network.

tracker server 121.218.45.164:22502, recv data fail, errno: 107, error info: Transport endpoint is not connected

[2022-07-06 15:07:36] ERROR - file: storage_nio.c, line: 282, client ip: 192.168.101.3, recv timeout, recv offset: 0, expect length: 0
[2022-07-06 15:07:36] ERROR - file: storage_nio.c, line: 282, client ip: 192.168.101.3, recv timeout, recv offset: 0, expect length: 0
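Before blaming FastDFS for "Transport endpoint is not connected", it is worth verifying that the public tracker endpoint is reachable at the TCP level at all. A bash-only probe (no telnet or nc required), using the addresses from my setup:

timeout 3 bash -c 'cat < /dev/null > /dev/tcp/121.218.45.164/22502' && echo "22502 open" || echo "22502 unreachable"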

Under normal conditions, entering the storage container and checking the storage server with fdfs_monitor looks like this:

[root@localhost ~]# docker exec -it storage /bin/bash
[root@localhost fdfs]# fdfs_monitor /etc/fdfs/storage.conf
[2022-07-07 09:19:34] DEBUG - base_path=/var/fdfs, connect_timeout=5, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

server_count=1, server_index=0

tracker server is 192.168.101.197:22122

group count: 1

Group 1:
group name = group1
disk total space = 303,478 MB
disk free space = 286,130 MB
trunk free space = 0 MB
storage server count = 2
active server count = 1
storage server port = 23003
storage HTTP port = 8080
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

	Storage 1:
		id = 192.168.101.197
		ip_addr = 192.168.101.197  ACTIVE
		http domain = 
		version = 6.06
		join time = 2021-05-26 10:46:37
		up time = 2022-07-05 15:30:00
		total storage = 303,478 MB
		free storage = 286,130 MB
		upload priority = 10
		store_path_count = 1
		subdir_count_per_path = 256
		storage_port = 23003
		storage_http_port = 8080
		current_write_path = 0
		source storage id = 
		if_trunk_server = 0
		connection.alloc_count = 256
		connection.current_count = 0
		connection.max_count = 0
		total_upload_count = 143565
		success_upload_count = 143565
		total_append_count = 0
		success_append_count = 0
		total_modify_count = 0
		success_modify_count = 0
		total_truncate_count = 0
		success_truncate_count = 0
		total_set_meta_count = 142009
		success_set_meta_count = 142009
		total_delete_count = 0
		success_delete_count = 0
		total_download_count = 1
		success_download_count = 1
		total_get_meta_count = 0
		success_get_meta_count = 0
		total_create_link_count = 0
		success_create_link_count = 0
		total_delete_link_count = 0
		success_delete_link_count = 0
		total_upload_bytes = 9141535911
		success_upload_bytes = 9141535911
		total_append_bytes = 0
		success_append_bytes = 0
		total_modify_bytes = 0
		success_modify_bytes = 0
		total_download_bytes = 181834
		success_download_bytes = 181834
		total_sync_in_bytes = 0
		success_sync_in_bytes = 0
		total_sync_out_bytes = 0
		success_sync_out_bytes = 0
		total_file_open_count = 143566
		success_file_open_count = 143566
		total_file_read_count = 1
		success_file_read_count = 1
		total_file_write_count = 159006
		success_file_write_count = 159006
		last_heart_beat_time = 2022-07-07 09:19:32
		last_source_update = 2022-06-28 17:40:04
		last_sync_update = 1970-01-01 08:00:00
		last_synced_timestamp = 1970-01-01 08:00:00
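Note that the listing above reports storage server count = 2 but active server count = 1: the tracker is still remembering a stale storage entry from before the change. Once the dead entry is offline, fdfs_monitor can delete it. A hedged example, where 192.168.101.196 stands in for whatever stale id/ip the full listing shows:

fdfs_monitor /etc/fdfs/client.conf delete group1 192.168.101.196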

Where the FastDFS config files live:

[root@localhost fdfs]# ll /etc/fdfs/
total 104
-rw-r--r--. 1 root root  1521 Jul  5 15:30 client.conf
-rw-r--r--. 1 root root  1909 Apr 27  2020 client.conf.sample
-rw-r--r--. 1 root root   955 Apr 27  2020 http.conf
-rw-r--r--. 1 root root 31172 Apr 27  2020 mime.types
-rw-r--r--. 1 root root  3726 Jul  5 15:30 mod_fastdfs.conf
-rw-r--r--. 1 root root 10244 Jul  5 15:30 storage.conf
-rw-r--r--. 1 root root 10246 Apr 27  2020 storage.conf.sample
-rw-r--r--. 1 root root   105 Apr 27  2020 storage_ids.conf
-rw-r--r--. 1 root root   620 Apr 27  2020 storage_ids.conf.sample
-rw-r--r--. 1 root root  9122 Apr 27  2020 tracker.conf
-rw-r--r--. 1 root root  9138 Apr 27  2020 tracker.conf.sample

Various online posts say to also check the tracker_server setting in storage.conf
(tracker_server=192.168.101.197:22122). Pointing it at the public address did not solve the problem either.

[root@localhost conf.d]# cat /etc/fdfs/storage.conf
# is this config file disabled
# false for enabled
# true for disabled
disabled = false

# the name of the group this storage server belongs to
#
# comment or remove this item for fetching from tracker server,
# in this case, use_storage_id must set to true in tracker.conf,
# and storage_ids.conf must be configured correctly.
group_name = group1

# bind an address of this host
# empty for bind all addresses of this host
bind_addr =

# if bind an address of this host when connect to other servers 
# (this storage server as a client)
# true for binding the address configured by the above parameter: "bind_addr"
# false for binding any address of this host
client_bind = true

# the storage server port
port = 23003

# connect timeout in seconds
# default value is 30
# Note: in the intranet network (LAN), 2 seconds is enough.
connect_timeout = 5

# network timeout in seconds for send and recv
# default value is 30
network_timeout = 60

# the heart beat interval in seconds
# the storage server send heartbeat to tracker server periodically
# default value is 30
heart_beat_interval = 30

# disk usage report interval in seconds
# the storage server send disk usage report to tracker server periodically
# default value is 300
stat_report_interval = 60

# the base path to store data and log files
base_path=/var/fdfs

# max concurrent connections the server supported,
# you should set this parameter larger, eg. 10240
# default value is 256
max_connections = 1024

# the buff size to recv / send data from/to network
# this parameter must more than 8KB
# 256KB or 512KB is recommended
# default value is 64KB
# since V2.00
buff_size = 256KB

# accept thread count
# default value is 1 which is recommended
# since V4.07
accept_threads = 1

# work thread count
# work threads to deal network io
# default value is 4
# since V2.00
work_threads = 4

# if disk read / write separated
##  false for mixed read and write
##  true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true

# disk reader thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1

# disk writer thread count per store path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec = 50

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval = 0

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time = 00:00

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time = 23:59

# write to the mark file after sync N files
# default value is 500
write_mark_file_freq = 500

# disk recovery thread count
# default value is 1
# since V6.04
disk_recovery_threads = 3

# store path (disk or mount point) count, default value is 1
store_path_count = 1

# store_path#, based on 0, to configure the store paths to store files
# if store_path0 not exists, it's value is base_path (NOT recommended)
# the paths must be exist.
#
# IMPORTANT NOTE:
#       the store paths' order is very important, don't mess up!!!
#       the base_path should be independent (different) of the store paths

# store_path#, based 0, if store_path0 not exists, it's value is base_path
# the paths must be exist
store_path0=/var/fdfs
#store_path1=/var/fdfs2

# subdir_count  * subdir_count directories will be auto created under each 
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path = 256

# tracker_server can ocur more than once for multi tracker servers.
# the value format of tracker_server is "HOST:PORT",
#   the HOST can be hostname or ip address,
#   and the HOST can be dual IPs or hostnames seperated by comma,
#   the dual IPS must be an inner (intranet) IP and an outer (extranet) IP,
#   or two different types of inner (intranet) IPs.
#   for example: 192.168.2.100,122.244.141.46:22122
#   another eg.: 192.168.1.10,172.17.4.21:22122

# tracker_server can ocur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
tracker_server=192.168.101.197:22122

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level = info

#unix group name to run this program, 
#not set (empty) means run by the group of current user
run_by_group =

#unix username to run this program,
#not set (empty) means run by current user
run_by_user =

# allow_hosts can ocur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts = *

# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributted by hash code
file_distribute_path_mode = 0

# valid when file_distribute_to_path is set to 0 (round robin).
# when the written file count reaches this number, then rotate to next path.
# rotate to the first path (00/00) after the last path (such as FF/FF).
# default value is 100
file_distribute_rotate_count = 100

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes = 0

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval = 1

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval = 1

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval = 300

# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size = 512KB

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority = 10

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix =

# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate = 0

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method = hash

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace = FastDFS

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive = 0

# you can use "#include filename" (not include double quotes) directive to 
# load FastDHT server list, when the filename is a relative path such as 
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf

# if log to access log
# default value is false
# since V4.00
use_access_log = false

# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time = 00:00

# if compress the old access log by gzip
# default value is false
# since V6.04
compress_old_access_log = false

# compress the access log days before
# default value is 1
# since V6.04
compress_access_log_days_before = 7

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time = 00:00

# if compress the old error log by gzip
# default value is false
# since V6.04
compress_old_error_log = false

# compress the error log days before
# default value is 1
# since V6.04
compress_error_log_days_before = 7

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record = false

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# if compress the binlog files by gzip
# default value is false
# since V6.01
compress_binlog = true

# try to compress binlog time, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 01:30
# since V6.01
compress_binlog_time = 01:30

# if check the mark of store path to prevent confusion
# recommend to set this parameter to true
# if two storage servers (instances) MUST use a same store path for
# some specific purposes, you should set this parameter to false
# default value is true
# since V6.03
check_store_path_mark = true

# use the ip address of this storage server if domain_name is empty,
# else this domain name will ocur in the url redirected by the tracker server
http.domain_name=

# the port of the web server on this storage server
http.server_port=8080
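One thing in the config above is directly relevant to the inner/outer IP problem: as its comments note, tracker_server accepts dual IPs, an intranet address and an extranet address separated by a comma. A sketch using the addresses from this post (assuming 121.218.45.164 is the tracker host's public side):

tracker_server = 192.168.101.225,121.218.45.164:22122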

Where the FastDFS command-line tools live:

[root@localhost fdfs]# ll /usr/bin/
total 61848
-rwxr-xr-x.   1 root root        26 Aug  8  2019 fc
-rwxr-xr-x.   1 root root    348824 Apr 27  2020 fdfs_append_file
-rwxr-xr-x.   1 root root    362168 Apr 27  2020 fdfs_appender_test
-rwxr-xr-x.   1 root root    361944 Apr 27  2020 fdfs_appender_test1
-rwxr-xr-x.   1 root root    348440 Apr 27  2020 fdfs_crc32
-rwxr-xr-x.   1 root root    348856 Apr 27  2020 fdfs_delete_file
-rwxr-xr-x.   1 root root    349592 Apr 27  2020 fdfs_download_file
-rwxr-xr-x.   1 root root    349544 Apr 27  2020 fdfs_file_info
-rwxr-xr-x.   1 root root    364864 Apr 27  2020 fdfs_monitor
-rwxr-xr-x.   1 root root    349080 Apr 27  2020 fdfs_regenerate_filename
-rwxr-xr-x.   1 root root   1280064 Apr 27  2020 fdfs_storaged
-rwxr-xr-x.   1 root root    372032 Apr 27  2020 fdfs_test
-rwxr-xr-x.   1 root root    367152 Apr 27  2020 fdfs_test1
-rwxr-xr-x.   1 root root    512296 Apr 27  2020 fdfs_trackerd
-rwxr-xr-x.   1 root root    349784 Apr 27  2020 fdfs_upload_appender
-rwxr-xr-x.   1 root root    350800 Apr 27  2020 fdfs_upload_file

Deploying FastDFS directly on a server (without Docker) can fail with: Failed to start fdfs_trackerd.service: Unit fdfs_trackerd.service not found.
Workaround: start the daemons directly:

sudo /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf start
sudo /usr/bin/fdfs_storaged /etc/fdfs/storage.conf start

How to update the FastDFS configuration after an IP change:
1. In /etc/fdfs, change the tracker_server IP in client.conf and storage.conf to the new address.
2. Try to start the services. If the tracker does not come up, also edit storage_servers_new.dat and storage_sync_timestamp.dat under the tracker's <base_path>/data directory (the sibling of its logs directory) so that the IP addresses in both files match; see the sketch after this list.
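A sketch of both steps with sed, assuming the old address was 192.168.101.197, the new one is 192.168.101.225, and the tracker's base_path is /var/fdfs as in the config above (both .dat files are plain text):

sed -i 's/192.168.101.197/192.168.101.225/g' /etc/fdfs/client.conf /etc/fdfs/storage.conf
sed -i 's/192.168.101.197/192.168.101.225/g' /var/fdfs/data/storage_servers_new.dat /var/fdfs/data/storage_sync_timestamp.dat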
I tried the fixes from many blog posts, essentially without success; deleting the containers and re-creating them with the new IP did not help either:
https://blog.csdn.net/xxwwh/article/details/121038817?utm_medium=distribute.pc_relevant.none-task-blog-2defaultbaidujs_title~default-4-121038817-blog-105245972.pc_relevant_multi_platform_whitelistv1&spm=1001.2101.3001.4242.3&utm_relevant_index=7
Also check whether any disk or storage pool is full; in my case the pool was indeed full:

df -h
Since the FastDFS image bundles its own nginx, let's look at the nginx config:

[root@localhost conf]# cd /usr/local/nginx/conf
[root@localhost conf]# cat nginx.conf

user  nobody;
worker_processes  2;

error_log  logs/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include conf.d/storage.conf;
}

The key line is "include conf.d/storage.conf;". The data directory inside the container is /var/fdfs/data/.

[root@localhost conf]# cd conf.d/
[root@localhost conf.d]# ll
total 8
-rw-r--r--. 1 root root 248 Jul  5 15:30 storage.conf
-rw-r--r--. 1 root root 381 Jul  3 15:45 storage_https.conf
[root@localhost conf.d]# cat storage.conf 
# storage.conf
server {
    listen       8080 ;
    #server_name  _ ;

    location / {
        root   html;
        index  index.html index.htm;
    }

    location ~/group1/ {
        alias   /var/fdfs/data/;
        ngx_fastdfs_module;
    }

}
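Given the alias above, a request for /group1/M00/... is served from /var/fdfs/data/M00/.... A quick preview check from the host (the file id is illustrative):

curl -I http://192.168.101.225:8080/group1/M00/00/00/example.jpg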

2.2 A custom docker-compose setup

Next I tried bind-mounting the config files out of the container so they could be edited on the host. But I found that however storage.conf was changed, the change was reverted every time the container restarted.

So: put both containers in the same network environment to rule out networking issues. Delete the containers started earlier with docker run, and back up the data first:

tar cfz fdsdate.tgz /opt/fastdfs

Install docker-compose:

sudo curl -L https://raw.githubusercontent.com/docker/compose/2.6.0/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
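Note that the curl command above only installs the bash completion script. The chmod below assumes the standalone docker-compose binary is already in /usr/local/bin; if it is not, a typical download step for v2.6.0 (check the release page for the asset matching your platform) is:

sudo curl -SL https://github.com/docker/compose/releases/download/v2.6.0/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose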

docker-compose ps
docker-compose version
docker-compose help

chmod +x /usr/local/bin/docker-compose

docker-compose restart tracker
docker-compose restart storage
docker-compose ps
firewall-cmd --zone=public --add-port=22122/tcp --permanent 
firewall-cmd --reload 
netstat -anotpul
systemctl restart firewalld

# start
docker-compose up -d
# remove the containers
docker-compose down

The compose file:

#  docker run -d --name tracker --net=host morunchang/fastdfs sh tracker.sh
version: '3'
services:
  tracker:
    image: morunchang/fastdfs
    restart: always
    hostname: tracker
    command: ['sh','tracker.sh']
    privileged: true
    network_mode: host
#    ports:
#    - 22122:22122
    volumes:
    - /etc/localtime:/etc/localtime
#    - ./tracker.conf:/etc/fdfs/tracker.conf
#  docker run -d --name storage --net=host -v /opt/fastdfs/data:/data/fast_data \
#  -v /etc/localtime:/etc/localtime -e TRACKER_IP=tracker:22122 -e GROUP_NAME=group1 morunchang/fastdfs sh storage.sh
  storage:
    image: morunchang/fastdfs
    restart: always
    hostname: storage
    privileged: true
    network_mode: host
    environment:
#      TRACKER_IP: 'tracker:22122'
#      TRACKER_IP: 'www.javalman.top:22122'
      TRACKER_IP: '192.168.101.225:22122'
      GROUP_NAME: group1
    command: ['sh','storage.sh']
#    ports:
#    - 23000:23000
#    - 8080:8080
    volumes:
    - /opt/fastdfs/data:/data/fast_data
#    - /home/fastdfs_db/fastdfsdata:/data/fast_data
    - /etc/localtime:/etc/localtime
#    - ./storage.conf:/etc/fdfs/storage.conf
#    - ./client.conf:/etc/fdfs/client.conf
    depends_on:
      - tracker
# If NAT is needed, enter the storage container and run: apt-get update && apt-get install iptables && iptables -t nat -A POSTROUTING -p tcp -m tcp --dport 22122 -d <container ip> -j SNAT --to <public ip>
#  e.g.: apt-get update && apt-get install iptables && iptables -t nat -A POSTROUTING -p tcp -m tcp --dport 22122 -d 172.19.0.3 -j SNAT --to 124.90.128.71
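If you do apply the SNAT rule from the comment above, you can confirm it took effect (assuming iptables was installed in the container as described):

docker-compose exec storage iptables -t nat -L POSTROUTING -n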

Start it:

docker-compose up -d

[root@file-ser-1-225 3B]# docker  ps
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS               NAMES
b0b9757177a9        morunchang/fastdfs   "sh storage.sh"     24 hours ago        Up 24 hours                             home-storage-1
0534912e471b        morunchang/fastdfs   "sh tracker.sh"     24 hours ago        Up 24 hours                             home-tracker-1

In the end the problem most likely lay with the network itself. Previews were now fully working: port 80 was exposed directly and reachable from both the internal and the public network. But with the tracker list configured with the internal address, access and uploads worked only from servers on the internal network; configured with the public address, uploads failed altogether.

3. Final solution: migrate the server

We migrated straight to Alibaba Cloud OSS: roughly 70 GB of files, about one afternoon of transfer time. The file service URLs are simply the domain concatenated with the file path, so the whole directory tree could move as-is: create a bucket, create /group1/M00 inside it, and copy the entire bind-mounted data directory up with Alibaba Cloud's ossutil tool.
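ossutil needs credentials before it can copy anything; a typical one-time setup (the endpoint and keys below are placeholders for your own values):

./ossutil64 config -e oss-cn-hangzhou.aliyuncs.com -i <AccessKeyID> -k <AccessKeySecret>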

[root@file-ser-1-225 home]# cd 111.txt/
[root@file-ser-1-225 111.txt]# touch 111.txt
[root@file-ser-1-225 111.txt]# 
[root@file-ser-1-225 111.txt]# 
[root@file-ser-1-225 111.txt]# cd ..
[root@file-ser-1-225 home]# ./ossutil64 cp -r 111.txt/ oss://longyanglao/
Succeed: Total num: 1, size: 0. OK num: 1(upload 1 files).

average speed 0(byte/s)

0.125076(s) elapsed
[root@file-ser-1-225 home]# ./ossutil64 cp -r /opt/fastdfs/data/data oss://longyanglao/group1/M00/
Succeed: Total num: 226393, size: 78,860,068,548. OK num: 226393(upload 160600 files, 65793 directories).                                                           

average speed 5975000(byte/s)

13196.677786(s) elapsed

If the old file domain cannot be replaced, domain binding works too: keep the original domain but resolve it to the Alibaba Cloud OSS domain, making the migration invisible to clients.
Docs: https://help.aliyun.com/document_detail/120075.html
Add a CNAME record on the old file domain pointing to the bucket's public OSS endpoint.
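Once the record is live, the resolution chain can be verified (file.example.com is a hypothetical old file domain):

dig +short CNAME file.example.com
# should print something like <bucket-name>.oss-cn-hangzhou.aliyuncs.com.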

4. Root cause, found later

Follow-up investigation showed the real culprit was the router's configuration:
https://support.huawei.com/enterprise/zh/knowledge/EKB1000117843#contentCause

/** * Copyright (C) 2008 Happy Fish / YuQing * * FastDFS may be copied only under the terms of the GNU General * Public License V3, which may be found in the FastDFS source kit. * Please visit the FastDFS Home Page http://www.fastken.com/ for more detail. **/ #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <errno.h> #include <limits.h> #include <time.h> #include <unistd.h> #include "fastcommon/logger.h" #include "fastcommon/shared_func.h" #include "fastcommon/sockopt.h" #include "fastcommon/http_func.h" #include "fastcommon/local_ip_func.h" #include "fastdfs/fdfs_define.h" #include "fastdfs/fdfs_global.h" #include "fastdfs/fdfs_http_shared.h" #include "fastdfs/fdfs_client.h" #include "fastdfs/fdfs_shared_func.h" #include "fastdfs/trunk_shared.h" #include "common.h" #define FDFS_MOD_REPONSE_MODE_PROXY 'P' #define FDFS_MOD_REPONSE_MODE_REDIRECT 'R' #define FDFS_CONTENT_TYPE_TAG_STR "Content-type: " #define FDFS_CONTENT_TYPE_TAG_LEN (sizeof(FDFS_CONTENT_TYPE_TAG_STR) - 1) #define FDFS_CONTENT_RANGE_TAG_STR "Content-range: " #define FDFS_CONTENT_RANGE_TAG_LEN (sizeof(FDFS_CONTENT_RANGE_TAG_STR) - 1) static char flv_header[] = "FLV\x1\x1\0\0\0\x9\0\0\0\x9"; #define FDFS_RANGE_LENGTH(range) ((range.end - range.start) + 1) typedef struct tagGroupStorePaths { char group_name[FDFS_GROUP_NAME_MAX_LEN + 1]; int group_name_len; int storage_server_port; FDFSStorePaths store_paths; } GroupStorePaths; static int storage_server_port = FDFS_STORAGE_SERVER_DEF_PORT; static int my_group_name_len = 0; static int group_count = 0; //for multi groups static bool url_have_group_name = false; static bool use_storage_id = false; static bool flv_support = false; //if support flv static char flv_extension[FDFS_FILE_EXT_NAME_MAX_LEN + 1] = {0}; //flv extension name static int flv_ext_len = 0; //flv extension length static char my_group_name[FDFS_GROUP_NAME_MAX_LEN + 1] = {0}; static char response_mode = FDFS_MOD_REPONSE_MODE_PROXY; static GroupStorePaths *group_store_paths = NULL; //for multi groups static FDFSHTTPParams g_http_params; static int storage_sync_file_max_delay = 24 * 3600; static int fdfs_get_params_from_tracker(); static int fdfs_format_http_datetime(time_t t, char *buff, const int buff_size); static int fdfs_strtoll(const char *s, int64_t *value) { char *end = NULL; *value = strtoll(s, &end, 10); if (end != NULL && *end != '\0') { return EINVAL; } return 0; } static int fdfs_load_groups_store_paths(IniContext *pItemContext) { char section_name[64]; char *pGroupName; int bytes; int result; int i; bytes = sizeof(GroupStorePaths) * group_count; group_store_paths = (GroupStorePaths *)malloc(bytes); if (group_store_paths == NULL) { logError("file: "__FILE__", line: %d, " \ "malloc %d bytes fail, " \ "errno: %d, error info: %s", \ __LINE__, bytes, errno, STRERROR(errno)); return errno != 0 ? 
errno : ENOMEM; } for (i=0; i<group_count; i++) { sprintf(section_name, "group%d", i + 1); pGroupName = iniGetStrValue(section_name, "group_name", \ pItemContext); if (pGroupName == NULL) { logError("file: "__FILE__", line: %d, " \ "section: %s, you must set parameter: " \ "group_name!", __LINE__, section_name); return ENOENT; } group_store_paths[i].storage_server_port = iniGetIntValue( \ section_name, "storage_server_port", pItemContext, \ FDFS_STORAGE_SERVER_DEF_PORT); group_store_paths[i].group_name_len = snprintf( \ group_store_paths[i].group_name, \ sizeof(group_store_paths[i].group_name), \ "%s", pGroupName); if (group_store_paths[i].group_name_len == 0) { logError("file: "__FILE__", line: %d, " \ "section: %s, parameter: group_name " \ "can't be empty!", __LINE__, section_name); return EINVAL; } group_store_paths[i].store_paths.paths = \ storage_load_paths_from_conf_file_ex(pItemContext, \ section_name, false, &group_store_paths[i].store_paths.count, \ &result); if (result != 0) { return result; } } return 0; } int fdfs_mod_init() { IniContext iniContext; int result; int len; int i; char *pLogFilename; char *pReponseMode; char *pIfAliasPrefix; char buff[2 * 1024]; bool load_fdfs_parameters_from_tracker = false; log_init(); trunk_shared_init(); if ((result=iniLoadFromFile(FDFS_MOD_CONF_FILENAME, &iniContext)) != 0) { logError("file: "__FILE__", line: %d, " \ "load conf file \"%s\" fail, ret code: %d", \ __LINE__, FDFS_MOD_CONF_FILENAME, result); return result; } do { group_count = iniGetIntValue(NULL, "group_count", &iniContext, 0); if (group_count < 0) { logError("file: "__FILE__", line: %d, " \ "conf file: %s, group_count: %d is invalid!", \ __LINE__, FDFS_MOD_CONF_FILENAME, group_count); return EINVAL; } url_have_group_name = iniGetBoolValue(NULL, "url_have_group_name", \ &iniContext, false); if (group_count > 0) { if (!url_have_group_name) { logError("file: "__FILE__", line: %d, " \ "config file: %s, you must set " \ "url_have_group_name to true to " \ "support multi-group!", \ __LINE__, FDFS_MOD_CONF_FILENAME); result = ENOENT; break; } if ((result=fdfs_load_groups_store_paths(&iniContext)) != 0) { break; } } else { char *pGroupName; pGroupName = iniGetStrValue(NULL, "group_name", &iniContext); if (pGroupName == NULL) { logError("file: "__FILE__", line: %d, " \ "config file: %s, you must set parameter: " \ "group_name!", __LINE__, FDFS_MOD_CONF_FILENAME); result = ENOENT; break; } my_group_name_len = snprintf(my_group_name, \ sizeof(my_group_name), "%s", pGroupName); if (my_group_name_len == 0) { logError("file: "__FILE__", line: %d, " \ "config file: %s, parameter: group_name " \ "can't be empty!", __LINE__, \ FDFS_MOD_CONF_FILENAME); result = EINVAL; break; } if ((result=storage_load_paths_from_conf_file(&iniContext)) != 0) { break; } } FDFS_CONNECT_TIMEOUT = iniGetIntValue(NULL, "connect_timeout", \ &iniContext, DEFAULT_CONNECT_TIMEOUT); if (FDFS_CONNECT_TIMEOUT <= 0) { FDFS_CONNECT_TIMEOUT = DEFAULT_CONNECT_TIMEOUT; } FDFS_NETWORK_TIMEOUT = iniGetIntValue(NULL, "network_timeout", \ &iniContext, DEFAULT_NETWORK_TIMEOUT); if (FDFS_NETWORK_TIMEOUT <= 0) { FDFS_NETWORK_TIMEOUT = DEFAULT_NETWORK_TIMEOUT; } load_log_level(&iniContext); pLogFilename = iniGetStrValue(NULL, "log_filename", &iniContext); if (pLogFilename != NULL && *pLogFilename != '\0') { if ((result=log_set_filename(pLogFilename)) != 0) { break; } } storage_server_port = iniGetIntValue(NULL, "storage_server_port", \ &iniContext, FDFS_STORAGE_SERVER_DEF_PORT); if ((result=fdfs_http_params_load(&iniContext, 
FDFS_MOD_CONF_FILENAME, \ &g_http_params)) != 0) { break; } pReponseMode = iniGetStrValue(NULL, "response_mode", &iniContext); if (pReponseMode != NULL) { if (strcmp(pReponseMode, "redirect") == 0) { response_mode = FDFS_MOD_REPONSE_MODE_REDIRECT; } } pIfAliasPrefix = iniGetStrValue (NULL, "if_alias_prefix", &iniContext); if (pIfAliasPrefix == NULL) { *g_if_alias_prefix = '\0'; } else { snprintf(g_if_alias_prefix, sizeof(g_if_alias_prefix), "%s", pIfAliasPrefix); } load_fdfs_parameters_from_tracker = iniGetBoolValue(NULL, \ "load_fdfs_parameters_from_tracker", \ &iniContext, false); if (load_fdfs_parameters_from_tracker) { result = fdfs_load_tracker_group_ex(&g_tracker_group, \ FDFS_MOD_CONF_FILENAME, &iniContext); } else { storage_sync_file_max_delay = iniGetIntValue(NULL, \ "storage_sync_file_max_delay", \ &iniContext, 24 * 3600); use_storage_id = iniGetBoolValue(NULL, "use_storage_id", \ &iniContext, false); if (use_storage_id) { result = fdfs_load_storage_ids_from_file( \ FDFS_MOD_CONF_FILENAME, &iniContext); } } } while (false); flv_support = iniGetBoolValue(NULL, "flv_support", \ &iniContext, false); if (flv_support) { char *flvExtension; flvExtension = iniGetStrValue (NULL, "flv_extension", \ &iniContext); if (flvExtension == NULL) { flv_ext_len = sprintf(flv_extension, "flv"); } else { flv_ext_len = snprintf(flv_extension, \ sizeof(flv_extension), "%s", flvExtension); } } iniFreeContext(&iniContext); if (result != 0) { return result; } load_local_host_ip_addrs(); if (load_fdfs_parameters_from_tracker) { fdfs_get_params_from_tracker(); } if (group_count > 0) { len = sprintf(buff, "group_count=%d, ", group_count); } else { len = sprintf(buff, "group_name=%s, storage_server_port=%d, " \ "path_count=%d, ", my_group_name, \ storage_server_port, g_fdfs_store_paths.count); for (i=0; i<g_fdfs_store_paths.count; i++) { len += snprintf(buff + len, sizeof(buff) - len, \ "store_path%d=%s, ", i, \ g_fdfs_store_paths.paths[i].path); } } logInfo("fastdfs apache / nginx module v1.21, " "response_mode=%s, " "base_path=%s, " "url_have_group_name=%d, " "%s" "connect_timeout=%d, " "network_timeout=%d, " "tracker_server_count=%d, " "if_alias_prefix=%s, " "local_host_ip_count=%d, " "anti_steal_token=%d, " "token_ttl=%ds, " "anti_steal_secret_key length=%d, " "token_check_fail content_type=%s, " "token_check_fail buff length=%d, " "load_fdfs_parameters_from_tracker=%d, " "storage_sync_file_max_delay=%ds, " "use_storage_id=%d, storage server id/ip count=%d / %d, " "flv_support=%d, flv_extension=%s", response_mode == FDFS_MOD_REPONSE_MODE_PROXY ? "proxy" : "redirect", FDFS_BASE_PATH_STR, url_have_group_name, buff, FDFS_CONNECT_TIMEOUT, FDFS_NETWORK_TIMEOUT, g_tracker_group.server_count, g_if_alias_prefix, g_local_host_ip_count, g_http_params.anti_steal_token, g_http_params.token_ttl, g_http_params.anti_steal_secret_key.length, g_http_params.token_check_fail_content_type, g_http_params.token_check_fail_buff.length, load_fdfs_parameters_from_tracker, storage_sync_file_max_delay, use_storage_id, g_storage_ids_by_id.count, g_storage_ids_by_ip.count, flv_support, flv_extension); if (group_count > 0) { int k; for (k=0; k<group_count; k++) { len = 0; *buff = '\0'; for (i=0; i<group_store_paths[k].store_paths.count; i++) { len += snprintf(buff + len, sizeof(buff) - len, \ ", store_path%d=%s", i, \ group_store_paths[k].store_paths.paths[i].path); } logInfo("group %d. group_name=%s, " \ "storage_server_port=%d, path_count=%d%s", \ k + 1, group_store_paths[k].group_name, \ storage_server_port, group_store_paths[k]. 
\ store_paths.count, buff); } } //print_local_host_ip_addrs(); return 0; } #define OUTPUT_HEADERS(pContext, pResponse, http_status) \ do { \ (pResponse)->status = http_status; \ pContext->output_headers(pContext->arg, pResponse); \ } while (0) static int fdfs_send_boundary(struct fdfs_http_context *pContext, struct fdfs_http_response *pResponse, const bool bLast) { int result; if ((result=pContext->send_reply_chunk(pContext->arg, false, "\r\n--", 4)) != 0) { return result; } if ((result=pContext->send_reply_chunk(pContext->arg, false, pResponse->boundary, pResponse->boundary_len)) != 0) { return result; } if (bLast) { result = pContext->send_reply_chunk(pContext->arg, true, "--\r\n", 4); } else { result = pContext->send_reply_chunk(pContext->arg, false, "\r\n", 2); } return result; } static int fdfs_send_range_subheader(struct fdfs_http_context *pContext, struct fdfs_http_response *pResponse, const int index) { char buff[256]; int len; len = snprintf(buff, sizeof(buff), "%s%s\r\n%s%s\r\n\r\n", FDFS_CONTENT_TYPE_TAG_STR, pResponse->range_content_type, FDFS_CONTENT_RANGE_TAG_STR, pResponse->content_ranges[index].content); return pContext->send_reply_chunk(pContext->arg, false, buff, len); } static int fdfs_download_callback(void *arg, const int64_t file_size, \ const char *data, const int current_size) { struct fdfs_download_callback_args *pCallbackArgs; int result; bool bLast; pCallbackArgs = (struct fdfs_download_callback_args *)arg; if (!pCallbackArgs->pResponse->header_outputed) { if (!(pCallbackArgs->pContext->if_range && pCallbackArgs->pContext->range_count > 1)) { pCallbackArgs->pResponse->content_length = file_size; } OUTPUT_HEADERS(pCallbackArgs->pContext, pCallbackArgs->pResponse, HTTP_OK); } if (pCallbackArgs->pContext->if_range && pCallbackArgs-> pContext->range_count > 1) { bLast = false; if (pCallbackArgs->sent_bytes == 0) { if ((result=fdfs_send_boundary(pCallbackArgs->pContext, pCallbackArgs->pResponse, false)) != 0) { return result; } if ((result=fdfs_send_range_subheader(pCallbackArgs->pContext, pCallbackArgs->pResponse, pCallbackArgs->range_index)) != 0) { return result; } } } else { bLast = true; } pCallbackArgs->sent_bytes += current_size; return pCallbackArgs->pContext->send_reply_chunk( pCallbackArgs->pContext->arg, (pCallbackArgs->sent_bytes == file_size && bLast) ? 
1 : 0, data, current_size); } static void fdfs_do_format_range(const struct fdfs_http_range *range, struct fdfs_http_response *pResponse) { if (range->start < 0) { pResponse->range_len += sprintf(pResponse->range + pResponse->range_len, \ "%"PRId64, range->start); } else if (range->end == 0) { pResponse->range_len += sprintf(pResponse->range + pResponse->range_len, \ "%"PRId64"-", range->start); } else { pResponse->range_len += sprintf(pResponse->range + pResponse->range_len, \ "%"PRId64"-%"PRId64, \ range->start, range->end); } } static void fdfs_format_range(struct fdfs_http_context *pContext, struct fdfs_http_response *pResponse) { int i; pResponse->range_len = sprintf(pResponse->range, "%s", "bytes="); for (i=0; i<pContext->range_count; i++) { if (i > 0) { *(pResponse->range + pResponse->range_len) = ','; pResponse->range_len++; } fdfs_do_format_range(pContext->ranges + i, pResponse); } } static void fdfs_do_format_content_range(const struct fdfs_http_range *range, const int64_t file_size, struct fdfs_http_resp_content_range *content_range) { content_range->length = sprintf(content_range->content, "bytes %"PRId64"-%"PRId64"/%"PRId64, range->start, range->end, file_size); } static void fdfs_format_content_range(struct fdfs_http_context *pContext, const int64_t file_size, struct fdfs_http_response *pResponse) { int i; pResponse->content_range_count = pContext->range_count; for (i=0; i<pContext->range_count; i++) { fdfs_do_format_content_range(pContext->ranges + i, file_size, pResponse->content_ranges + i); } } static int64_t fdfs_calc_download_bytes(struct fdfs_http_context *pContext) { int64_t download_bytes; int i; download_bytes = 0; for (i=0; i<pContext->range_count; i++) { download_bytes += FDFS_RANGE_LENGTH(pContext->ranges[i]); } return download_bytes; } static int fdfs_calc_content_length(struct fdfs_http_context *pContext, const int64_t download_bytes, const int flv_header_len, const char *ext_name, const int ext_len, struct fdfs_http_response *pResponse) { int result; int i; int content_type_part_len; int boundary_part_len; pResponse->content_length = download_bytes + flv_header_len; if (pContext->if_range && pContext->range_count > 1) { pResponse->boundary_len = sprintf(pResponse->boundary, "%"PRIx64, get_current_time_us()); sprintf(pResponse->content_type_buff, "multipart/byteranges; boundary=%s", pResponse->boundary); pResponse->content_type = pResponse->content_type_buff; if ((result=fdfs_http_get_content_type_by_extname(&g_http_params, ext_name, ext_len, pResponse->range_content_type, sizeof(pResponse->range_content_type))) != 0) { return result; } content_type_part_len = FDFS_CONTENT_TYPE_TAG_LEN + strlen(pResponse->range_content_type) + 2; boundary_part_len = 4 + pResponse->boundary_len + 2; pResponse->content_length += (pContext->range_count + 1) * boundary_part_len; pResponse->content_length += pContext->range_count * content_type_part_len; for (i=0; i<pContext->range_count; i++) { pResponse->content_length += FDFS_CONTENT_RANGE_TAG_LEN + pResponse->content_ranges[i].length + 4; } pResponse->content_length += 2; //last -- } return 0; } static int fdfs_do_check_and_format_range(struct fdfs_http_range *range, const int64_t file_size) { if (range->start < 0) { int64_t start; start = range->start + file_size; if (start < 0) { logWarning("file: "__FILE__", line: %d, " \ "invalid range value: %"PRId64", set to 0", \ __LINE__, range->start); start = 0; } range->start = start; } else if (range->start >= file_size) { logError("file: "__FILE__", line: %d, " \ "invalid range 
start value: %"PRId64 \ ", exceeds file size: %"PRId64, \ __LINE__, range->start, file_size); return EINVAL; } if (range->end == 0) { range->end = file_size - 1; } else if (range->end >= file_size) { logWarning("file: "__FILE__", line: %d, " \ "invalid range end value: %"PRId64 \ ", exceeds file size: %"PRId64, \ __LINE__, range->end, file_size); range->end = file_size - 1; } if (range->start > range->end) { logError("file: "__FILE__", line: %d, " \ "invalid range value, start: %"PRId64 \ ", exceeds end: %"PRId64, \ __LINE__, range->start, range->end); return EINVAL; } return 0; } static int fdfs_check_and_format_range(struct fdfs_http_context *pContext, const int64_t file_size) { int result; int i; result = 0; for (i=0; i<pContext->range_count; i++) { if ((result=fdfs_do_check_and_format_range(pContext->ranges + i, file_size)) != 0) { return result; } } return 0; } #define FDFS_SET_LAST_MODIFIED(response, pContext, mtime) \ do { \ response.last_modified = mtime; \ fdfs_format_http_datetime(response.last_modified, \ response.last_modified_buff, \ sizeof(response.last_modified_buff)); \ if (*pContext->if_modified_since != '\0') \ { \ if (strcmp(response.last_modified_buff, \ pContext->if_modified_since) == 0) \ { \ OUTPUT_HEADERS(pContext, (&response), HTTP_NOTMODIFIED);\ return HTTP_NOTMODIFIED; \ } \ } \ \ /*\ logInfo("last_modified: %s, if_modified_since: %s, strcmp=%d", \ response.last_modified_buff, \ pContext->if_modified_since, \ strcmp(response.last_modified_buff, \ pContext->if_modified_since)); \ */ \ } while (0) static int fdfs_send_file_buffer(struct fdfs_http_context *pContext, const char *full_filename, int fd, const int64_t download_bytes, const bool bLast) { char file_trunk_buff[FDFS_OUTPUT_CHUNK_SIZE]; off_t remain_bytes; int read_bytes; int result; remain_bytes = download_bytes; while (remain_bytes > 0) { read_bytes = remain_bytes <= FDFS_OUTPUT_CHUNK_SIZE ? \ remain_bytes : FDFS_OUTPUT_CHUNK_SIZE; if (read(fd, file_trunk_buff, read_bytes) != read_bytes) { result = errno != 0 ? errno : EIO; logError("file: "__FILE__", line: %d, " \ "read from file %s fail, " \ "errno: %d, error info: %s", __LINE__, \ full_filename, result, STRERROR(result)); return result; } remain_bytes -= read_bytes; if ((result=pContext->send_reply_chunk(pContext->arg, (remain_bytes == 0 && bLast) ? 
1: 0, file_trunk_buff, read_bytes)) != 0) { return result; } } return 0; } int fdfs_http_request_handler(struct fdfs_http_context *pContext) { #define HTTPD_MAX_PARAMS 32 char *file_id_without_group; char *url; char file_id[128]; char uri[512]; int url_len; int uri_len; int flv_header_len; int param_count; int ext_len; KeyValuePair params[HTTPD_MAX_PARAMS]; char *p; char *filename; const char *ext_name; FDFSStorePaths *pStorePaths; char true_filename[128]; char full_filename[MAX_PATH_SIZE + 64]; //char content_type[64]; struct stat file_stat; int64_t file_offset; int64_t file_size; int64_t download_bytes; int filename_len; int full_filename_len; int store_path_index; int fd; int result; int http_status; int the_storage_port; int i; struct fdfs_http_response response; FDFSFileInfo file_info; bool bFileExists; bool bSameGroup; //if in my group bool bTrunkFile; FDFSTrunkFullInfo trunkInfo; memset(&response, 0, sizeof(response)); response.status = HTTP_OK; //logInfo("url=%s", pContext->url); url_len = strlen(pContext->url); if (url_len < 16) { logError("file: "__FILE__", line: %d, " \ "url length: %d < 16", __LINE__, url_len); OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } if (strncasecmp(pContext->url, "http://", 7) == 0) { p = strchr(pContext->url + 7, '/'); if (p == NULL) { logError("file: "__FILE__", line: %d, " \ "invalid url: %s", __LINE__, pContext->url); OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } uri_len = url_len - (p - pContext->url); url = p; } else { uri_len = url_len; url = pContext->url; } if (uri_len + 1 >= (int)sizeof(uri)) { logError("file: "__FILE__", line: %d, " \ "uri length: %d is too long, >= %d", __LINE__, \ uri_len, (int)sizeof(uri)); OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } if (*url != '/') { *uri = '/'; memcpy(uri+1, url, uri_len+1); uri_len++; } else { memcpy(uri, url, uri_len+1); } the_storage_port = storage_server_port; param_count = http_parse_query(uri, params, HTTPD_MAX_PARAMS); if (url_have_group_name) { int group_name_len; snprintf(file_id, sizeof(file_id), "%s", uri + 1); file_id_without_group = strchr(file_id, '/'); if (file_id_without_group == NULL) { logError("file: "__FILE__", line: %d, " \ "no group name in url, uri: %s", __LINE__, uri); OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } pStorePaths = &g_fdfs_store_paths; group_name_len = file_id_without_group - file_id; if (group_count == 0) { bSameGroup = (group_name_len == my_group_name_len) && \ (memcmp(file_id, my_group_name, \ group_name_len) == 0); } else { int i; bSameGroup = false; for (i=0; i<group_count; i++) { if (group_store_paths[i].group_name_len == \ group_name_len && memcmp(file_id, \ group_store_paths[i].group_name, \ group_name_len) == 0) { the_storage_port = group_store_paths[i]. 
\ storage_server_port; bSameGroup = true; pStorePaths = &group_store_paths[i].store_paths; break; } } } file_id_without_group++; //skip / } else { pStorePaths = &g_fdfs_store_paths; bSameGroup = true; file_id_without_group = uri + 1; //skip / snprintf(file_id, sizeof(file_id), "%s/%s", \ my_group_name, file_id_without_group); } if (strlen(file_id_without_group) < 22) { logError("file: "__FILE__", line: %d, " \ "file id is too short, length: %d < 22, " \ "uri: %s", __LINE__, \ (int)strlen(file_id_without_group), uri); OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } if (g_http_params.anti_steal_token) { char *token; char *ts; int timestamp; token = fdfs_http_get_parameter("token", params, param_count); ts = fdfs_http_get_parameter("ts", params, param_count); if (token == NULL || ts == NULL) { logError("file: "__FILE__", line: %d, " \ "expect parameter token or ts in url, " \ "uri: %s", __LINE__, uri); OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } timestamp = atoi(ts); if ((result=fdfs_http_check_token( \ &g_http_params.anti_steal_secret_key, \ file_id_without_group, timestamp, token, \ g_http_params.token_ttl)) != 0) { logError("file: "__FILE__", line: %d, " \ "check token fail, uri: %s, " \ "errno: %d, error info: %s", \ __LINE__, uri, result, STRERROR(result)); if (*(g_http_params.token_check_fail_content_type)) { response.content_length = g_http_params. \ token_check_fail_buff.length; response.content_type = g_http_params. \ token_check_fail_content_type; OUTPUT_HEADERS(pContext, (&response), HTTP_OK); pContext->send_reply_chunk(pContext->arg, 1, \ g_http_params.token_check_fail_buff.buff, g_http_params.token_check_fail_buff.length); return HTTP_OK; } else { OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } } } filename = file_id_without_group; filename_len = strlen(filename); //logInfo("filename=%s", filename); if (storage_split_filename_no_check(filename, \ &filename_len, true_filename, &store_path_index) != 0) { OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } if (bSameGroup) { if (store_path_index < 0 || \ store_path_index >= pStorePaths->count) { logError("file: "__FILE__", line: %d, " \ "filename: %s is invalid, " \ "invalid store path index: %d, " \ "which < 0 or >= %d", __LINE__, filename, \ store_path_index, pStorePaths->count); OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } } if (fdfs_check_data_filename(true_filename, filename_len) != 0) { OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } if ((result=fdfs_get_file_info_ex1(file_id, false, &file_info)) != 0) { if (result == ENOENT) { http_status = HTTP_NOTFOUND; } else if (result == EINVAL) { http_status = HTTP_BADREQUEST; } else { http_status = HTTP_INTERNAL_SERVER_ERROR; } OUTPUT_HEADERS(pContext, (&response), http_status); return http_status; } if (file_info.file_size >= 0) //normal file { FDFS_SET_LAST_MODIFIED(response, pContext, \ file_info.create_timestamp); } fd = -1; memset(&file_stat, 0, sizeof(file_stat)); if (bSameGroup) { FDFSTrunkHeader trunkHeader; if ((result=trunk_file_stat_ex1(pStorePaths, store_path_index, \ true_filename, filename_len, &file_stat, \ &trunkInfo, &trunkHeader, &fd)) != 0) { bFileExists = false; } else { bFileExists = true; } } else { bFileExists = false; memset(&trunkInfo, 0, sizeof(trunkInfo)); } response.attachment_filename = fdfs_http_get_parameter("filename", \ params, param_count); if 
(bFileExists) { if (file_info.file_size < 0) //slave or appender file { FDFS_SET_LAST_MODIFIED(response, pContext, \ file_stat.st_mtime); } } else { char *redirect; //logInfo("source id: %d", file_info.source_id); //logInfo("source ip addr: %s", file_info.source_ip_addr); //logInfo("create_timestamp: %d", file_info.create_timestamp); if (bSameGroup && (is_local_host_ip(file_info.source_ip_addr) \ || (file_info.create_timestamp > 0 && (time(NULL) - \ file_info.create_timestamp > storage_sync_file_max_delay)))) { if (IS_TRUNK_FILE_BY_ID(trunkInfo)) { if (result == ENOENT) { logError("file: "__FILE__", line: %d, "\ "logic file: %s not exist", \ __LINE__, filename); } else { logError("file: "__FILE__", line: %d, "\ "stat logic file: %s fail, " \ "errno: %d, error info: %s", \ __LINE__, filename, result, \ STRERROR(result)); } } else { snprintf(full_filename, \ sizeof(full_filename), "%s/data/%s", \ pStorePaths->paths[store_path_index].path, \ true_filename); if (result == ENOENT) { logError("file: "__FILE__", line: %d, "\ "file: %s not exist", \ __LINE__, full_filename); } else { logError("file: "__FILE__", line: %d, "\ "stat file: %s fail, " \ "errno: %d, error info: %s", \ __LINE__, full_filename, \ result, STRERROR(result)); } } OUTPUT_HEADERS(pContext, (&response), HTTP_NOTFOUND); return HTTP_NOTFOUND; } redirect = fdfs_http_get_parameter("redirect", \ params, param_count); if (redirect != NULL) { logWarning("file: "__FILE__", line: %d, " \ "redirect again, url: %s", \ __LINE__, url); OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } if (*(file_info.source_ip_addr) == '\0') { logWarning("file: "__FILE__", line: %d, " \ "can't get ip address of source storage " \ "id: %d, url: %s", __LINE__, \ file_info.source_id, url); OUTPUT_HEADERS(pContext, (&response), HTTP_INTERNAL_SERVER_ERROR); return HTTP_INTERNAL_SERVER_ERROR; } if (response_mode == FDFS_MOD_REPONSE_MODE_REDIRECT) { char *path_split_str; char port_part[16]; char param_split_char; if (pContext->server_port == 80) { *port_part = '\0'; } else { sprintf(port_part, ":%d", pContext->server_port); } if (param_count == 0) { param_split_char = '?'; } else { param_split_char = '&'; } if (*url != '/') { path_split_str = "/"; } else { path_split_str = ""; } response.redirect_url_len = snprintf( \ response.redirect_url, \ sizeof(response.redirect_url), \ "http://%s%s%s%s%c%s", \ file_info.source_ip_addr, port_part, \ path_split_str, url, \ param_split_char, "redirect=1"); logDebug("file: "__FILE__", line: %d, " \ "redirect to %s", \ __LINE__, response.redirect_url); if (pContext->if_range) { fdfs_format_range(pContext, &response); } OUTPUT_HEADERS(pContext, (&response), HTTP_MOVETEMP); return HTTP_MOVETEMP; } else if (pContext->proxy_handler != NULL) { return pContext->proxy_handler(pContext->arg, \ file_info.source_ip_addr); } } ext_name = fdfs_http_get_file_extension(true_filename, \ filename_len, &ext_len); /* if (g_http_params.need_find_content_type) { if (fdfs_http_get_content_type_by_extname(&g_http_params, \ ext_name, ext_len, content_type, sizeof(content_type)) != 0) { if (fd >= 0) { close(fd); } OUTPUT_HEADERS(pContext, (&response), HTTP_SERVUNAVAIL); return HTTP_SERVUNAVAIL; } response.content_type = content_type; } */ if (bFileExists) { file_size = file_stat.st_size; } else { bool if_get_file_info; if_get_file_info = pContext->header_only || \ (pContext->if_range && file_info.file_size < 0); if (if_get_file_info) { if ((result=fdfs_get_file_info_ex1(file_id, true, \ &file_info)) != 0) { if (result == 
ENOENT) { http_status = HTTP_NOTFOUND; } else { http_status = HTTP_INTERNAL_SERVER_ERROR; } OUTPUT_HEADERS(pContext, (&response), http_status); return http_status; } } file_size = file_info.file_size; } flv_header_len = 0; if (pContext->if_range) { if (fdfs_check_and_format_range(pContext, file_size) != 0 || (pContext->range_count > 1 && !g_http_params.support_multi_range)) { if (fd >= 0) { close(fd); } OUTPUT_HEADERS(pContext, (&response), HTTP_RANGE_NOT_SATISFIABLE); return HTTP_RANGE_NOT_SATISFIABLE; } if (pContext->range_count == 1) { download_bytes = FDFS_RANGE_LENGTH(pContext->ranges[0]); } else { download_bytes = fdfs_calc_download_bytes(pContext); } fdfs_format_content_range(pContext, file_size, &response); } else { download_bytes = file_size > 0 ? file_size : 0; //flv support if (flv_support && (flv_ext_len == ext_len && \ memcmp(ext_name, flv_extension, ext_len) == 0)) { char *pStart; pStart = fdfs_http_get_parameter("start", \ params, param_count); if (pStart != NULL) { int64_t start; if (fdfs_strtoll(pStart, &start) == 0) { char *pEnd; pContext->range_count = 1; pContext->ranges[0].start = start; pContext->ranges[0].end = 0; pEnd = fdfs_http_get_parameter("end", \ params, param_count); if (pEnd != NULL) { int64_t end; if (fdfs_strtoll(pEnd, &end) == 0) { pContext->ranges[0].end = end - 1; } } if (fdfs_check_and_format_range(pContext, file_size) != 0) { if (fd >= 0) { close(fd); } OUTPUT_HEADERS(pContext, (&response), HTTP_BADREQUEST); return HTTP_BADREQUEST; } download_bytes = FDFS_RANGE_LENGTH(pContext->ranges[0]); if (start > 0) { flv_header_len = sizeof(flv_header) - 1; } } } } } //logInfo("flv_header_len: %d", flv_header_len); if (pContext->header_only) { if (fd >= 0) { close(fd); } if (fdfs_calc_content_length(pContext, download_bytes, flv_header_len, ext_name, ext_len, &response) != 0) { OUTPUT_HEADERS(pContext, (&response), HTTP_SERVUNAVAIL); return HTTP_SERVUNAVAIL; } OUTPUT_HEADERS(pContext, (&response), pContext->if_range ? 
\ HTTP_PARTIAL_CONTENT : HTTP_OK ); return HTTP_OK; } if (fdfs_calc_content_length(pContext, download_bytes, flv_header_len, ext_name, ext_len, &response) != 0) { OUTPUT_HEADERS(pContext, (&response), HTTP_SERVUNAVAIL); return HTTP_SERVUNAVAIL; } if (!bFileExists) { ConnectionInfo storage_server; struct fdfs_download_callback_args callback_args; int64_t file_size; strcpy(storage_server.ip_addr, file_info.source_ip_addr); storage_server.port = the_storage_port; storage_server.sock = -1; callback_args.pContext = pContext; callback_args.pResponse = &response; callback_args.sent_bytes = 0; callback_args.range_index = 0; if (pContext->if_range) { download_bytes = FDFS_RANGE_LENGTH(pContext->ranges[0]); } result = storage_download_file_ex1(NULL, \ &storage_server, file_id, \ pContext->ranges[0].start, download_bytes, \ fdfs_download_callback, &callback_args, &file_size); logDebug("file: "__FILE__", line: %d, " \ "storage_download_file_ex1 return code: %d, " \ "file id: %s", __LINE__, result, file_id); if (result == 0) { http_status = HTTP_OK; } if (result == ENOENT) { http_status = HTTP_NOTFOUND; } else { http_status = HTTP_INTERNAL_SERVER_ERROR; } OUTPUT_HEADERS(pContext, (&response), http_status); if (result != 0 || !(pContext->if_range && pContext->range_count > 1)) { return http_status; } for (i=1; i<pContext->range_count; i++) { callback_args.sent_bytes = 0; callback_args.range_index = i; download_bytes = FDFS_RANGE_LENGTH(pContext->ranges[i]); result = storage_download_file_ex1(NULL, &storage_server, file_id, pContext->ranges[i].start, download_bytes, fdfs_download_callback, &callback_args, &file_size); if (result != 0) { return HTTP_INTERNAL_SERVER_ERROR; } } if (fdfs_send_boundary(pContext, &response, true) != 0) { return HTTP_INTERNAL_SERVER_ERROR; } return http_status; } bTrunkFile = IS_TRUNK_FILE_BY_ID(trunkInfo); if (bTrunkFile) { trunk_get_full_filename_ex(pStorePaths, &trunkInfo, \ full_filename, sizeof(full_filename)); full_filename_len = strlen(full_filename); file_offset = TRUNK_FILE_START_OFFSET(trunkInfo) + \ pContext->ranges[0].start; } else { full_filename_len = snprintf(full_filename, \ sizeof(full_filename), "%s/data/%s", \ pStorePaths->paths[store_path_index].path, \ true_filename); file_offset = pContext->ranges[0].start; } if (pContext->send_file != NULL && !bTrunkFile && !(pContext->if_range && pContext->range_count > 1)) { http_status = pContext->if_range ? 
\ HTTP_PARTIAL_CONTENT : HTTP_OK; OUTPUT_HEADERS(pContext, (&response), http_status); if (flv_header_len > 0) { if (pContext->send_reply_chunk(pContext->arg, \ false, flv_header, flv_header_len) != 0) { close(fd); return HTTP_INTERNAL_SERVER_ERROR; } } return pContext->send_file(pContext->arg, full_filename, \ full_filename_len, file_offset, download_bytes); } if (fd < 0) { fd = open(full_filename, O_RDONLY); if (fd < 0) { logError("file: "__FILE__", line: %d, " \ "open file %s fail, " \ "errno: %d, error info: %s", __LINE__, \ full_filename, errno, STRERROR(errno)); OUTPUT_HEADERS(pContext, (&response), \ HTTP_SERVUNAVAIL); return HTTP_SERVUNAVAIL; } if (file_offset > 0 && lseek(fd, file_offset, SEEK_SET) < 0) { close(fd); logError("file: "__FILE__", line: %d, " \ "lseek file: %s fail, " \ "errno: %d, error info: %s", \ __LINE__, full_filename, \ errno, STRERROR(errno)); OUTPUT_HEADERS(pContext, (&response), HTTP_INTERNAL_SERVER_ERROR); return HTTP_INTERNAL_SERVER_ERROR; } } else { if (pContext->ranges[0].start > 0 && \ lseek(fd, pContext->ranges[0].start, SEEK_CUR) < 0) { close(fd); logError("file: "__FILE__", line: %d, " \ "lseek file: %s fail, " \ "errno: %d, error info: %s", \ __LINE__, full_filename, \ errno, STRERROR(errno)); OUTPUT_HEADERS(pContext, (&response), HTTP_INTERNAL_SERVER_ERROR); return HTTP_INTERNAL_SERVER_ERROR; } } OUTPUT_HEADERS(pContext, (&response), pContext->if_range ? \ HTTP_PARTIAL_CONTENT : HTTP_OK); if (pContext->if_range && pContext->range_count > 1) { if (fdfs_send_boundary(pContext, &response, false) != 0) { close(fd); return HTTP_INTERNAL_SERVER_ERROR; } if (fdfs_send_range_subheader(pContext, &response, 0) != 0) { close(fd); return HTTP_INTERNAL_SERVER_ERROR; } } if (flv_header_len > 0) { if (pContext->send_reply_chunk(pContext->arg, \ false, flv_header, flv_header_len) != 0) { close(fd); return HTTP_INTERNAL_SERVER_ERROR; } } if (pContext->if_range) { download_bytes = FDFS_RANGE_LENGTH(pContext->ranges[0]); } if (fdfs_send_file_buffer(pContext, full_filename, fd, download_bytes, !(pContext->if_range && pContext->range_count > 1)) != 0) { close(fd); return HTTP_INTERNAL_SERVER_ERROR; } if (!(pContext->if_range && pContext->range_count > 1)) { close(fd); return HTTP_OK; } for (i=1; i<pContext->range_count; i++) { if (bTrunkFile) { file_offset = TRUNK_FILE_START_OFFSET(trunkInfo) + pContext->ranges[i].start; } else { file_offset = pContext->ranges[i].start; } if (lseek(fd, file_offset, SEEK_SET) < 0) { close(fd); logError("file: "__FILE__", line: %d, " \ "lseek file: %s fail, " \ "errno: %d, error info: %s", \ __LINE__, full_filename, \ errno, STRERROR(errno)); return HTTP_INTERNAL_SERVER_ERROR; } if (fdfs_send_boundary(pContext, &response, false) != 0) { close(fd); return HTTP_INTERNAL_SERVER_ERROR; } if (fdfs_send_range_subheader(pContext, &response, i) != 0) { close(fd); return HTTP_INTERNAL_SERVER_ERROR; } if (fdfs_send_file_buffer(pContext, full_filename, fd, FDFS_RANGE_LENGTH(pContext->ranges[i]), false) != 0) { close(fd); return HTTP_INTERNAL_SERVER_ERROR; } } close(fd); if (fdfs_send_boundary(pContext, &response, true) != 0) { return HTTP_INTERNAL_SERVER_ERROR; } return HTTP_OK; } static int fdfs_get_params_from_tracker() { IniContext iniContext; int result; bool continue_flag; continue_flag = false; if ((result=fdfs_get_ini_context_from_tracker(&g_tracker_group, &iniContext, &continue_flag,NULL)) != 0) { return result; } storage_sync_file_max_delay = iniGetIntValue(NULL, \ "storage_sync_file_max_delay", \ &iniContext, 24 * 3600); use_storage_id = 
		iniGetBoolValue(NULL, "use_storage_id", \
			&iniContext, false);
	iniFreeContext(&iniContext);

	if (use_storage_id)
	{
		result = fdfs_get_storage_ids_from_tracker_group( \
			&g_tracker_group);
	}

	return result;
}

static int fdfs_format_http_datetime(time_t t, char *buff, const int buff_size)
{
	struct tm tm;
	struct tm *ptm;

	*buff = '\0';
	if ((ptm=gmtime_r(&t, &tm)) == NULL)
	{
		return errno != 0 ? errno : EFAULT;
	}

	strftime(buff, buff_size, "%a, %d %b %Y %H:%M:%S GMT", ptm);
	return 0;
}

static int fdfs_parse_range(char *value, struct fdfs_http_range *range)
{
	int result;
	char *pEndPos;

	if (*value == '-')  /* suffix form such as "-500": start stays negative */
	{
		if ((result=fdfs_strtoll(value, &(range->start))) != 0)
		{
			return result;
		}
		range->end = 0;
		return 0;
	}

	pEndPos = strchr(value, '-');
	if (pEndPos == NULL)
	{
		return EINVAL;
	}

	*pEndPos = '\0';
	if ((result=fdfs_strtoll(value, &(range->start))) != 0)
	{
		return result;
	}

	pEndPos++;  //skip -
	if (*pEndPos == '\0')
	{
		range->end = 0;
	}
	else
	{
		if ((result=fdfs_strtoll(pEndPos, &(range->end))) != 0)
		{
			return result;
		}
	}

	return 0;
}

int fdfs_parse_ranges(const char *value, struct fdfs_http_context *pContext)
{
	/*
	 range format:
		bytes=500-999
		bytes=-500
		bytes=9500-
	*/
#define RANGE_PREFIX_STR  "bytes="
#define RANGE_PREFIX_LEN  (int)(sizeof(RANGE_PREFIX_STR) - 1)

	int result;
	int len;
	int i;
	const char *p;
	char buff[256];
	char *parts[FDFS_MAX_HTTP_RANGES];

	len = strlen(value);
	if (len <= RANGE_PREFIX_LEN + 1)
	{
		return EINVAL;
	}
	p = value + RANGE_PREFIX_LEN;
	len -= RANGE_PREFIX_LEN;
	if (len >= (int)sizeof(buff))
	{
		return EINVAL;
	}
	memcpy(buff, p, len);
	*(buff + len) = '\0';

	result = 0;
	pContext->range_count = splitEx(buff, ',', parts, FDFS_MAX_HTTP_RANGES);
	for (i=0; i<pContext->range_count; i++)
	{
		if ((result=fdfs_parse_range(parts[i], pContext->ranges + i)) != 0)
		{
			break;
		}
	}

	return result;
}

The content of common.c is exactly as above, yet compiling the module still fails:

In file included from /usr/local/fastdfs-nginx-module/src/ngx_http_fastdfs_module.c:6:
/usr/local/fastdfs-nginx-module/src/common.c: In function ‘fdfs_mod_init’:
/usr/local/fastdfs-nginx-module/src/common.c:230:9: error: ‘FDFS_CONNECT_TIMEOUT’ undeclared (first use in this function); did you mean ‘DEFAULT_CONNECT_TIMEOUT’?
  230 |         FDFS_CONNECT_TIMEOUT = iniGetIntValue(NULL, "connect_timeout", \
      |         ^~~~~~~~~~~~~~~~~~~~
      |         DEFAULT_CONNECT_TIMEOUT
/usr/local/fastdfs-nginx-module/src/common.c:230:9: note: each undeclared identifier is reported only once for each function it appears in
/usr/local/fastdfs-nginx-module/src/common.c:237:9: error: ‘FDFS_NETWORK_TIMEOUT’ undeclared (first use in this function); did you mean ‘DEFAULT_NETWORK_TIMEOUT’?
  237 |         FDFS_NETWORK_TIMEOUT = iniGetIntValue(NULL, "network_timeout", \
      |         ^~~~~~~~~~~~~~~~~~~~
      |         DEFAULT_NETWORK_TIMEOUT
/usr/local/fastdfs-nginx-module/src/common.c:376:17: error: ‘FDFS_BASE_PATH_STR’ undeclared (first use in this function)
  376 |                 FDFS_BASE_PATH_STR, url_have_group_name, buff,
      |                 ^~~~~~~~~~~~~~~~~~
/usr/local/fastdfs-nginx-module/src/common.c: In function ‘fdfs_get_params_from_tracker’:
/usr/local/fastdfs-nginx-module/src/common.c:1601:21: error: too few arguments to function ‘fdfs_get_ini_context_from_tracker’
 1601 |     if ((result=fdfs_get_ini_context_from_tracker(&g_tracker_group,
      |                 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/fastdfs/fdfs_client.h:14,
                 from /usr/local/fastdfs-nginx-module/src/common.c:29:
/usr/include/fastdfs/tracker_proto.h:323:5: note: declared here
  323 | int fdfs_get_ini_context_from_tracker(TrackerServerGroup *pTrackerGroup, \
      |     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc -c -I/usr/local/include/luajit-2.1 -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -Wno-error -DNDK_SET_VAR -Wformat-truncation=0 -Wformat-overflow=0 -D_FILE_OFFSET_BITS=64 -DFDFS_OUTPUT_CHUNK_SIZE='256*1024' -DFDFS_MOD_CONF_FILENAME='"/etc/fdfs/mod_fastdfs.conf"' -I src/core -I src/event -I src/event/modules -I src/os/unix -I /www/server/nginx/src/ngx_devel_kit/objs -I objs/addon/ndk -I /www/server/nginx/src/lua_nginx_module/src/api -I /www/server/nginx/src/pcre-8.43 -I /www/server/nginx/src/openssl/.openssl/include -I /usr/include/libxml2 -I objs \
	-o objs/ngx_modules.o \
	objs/ngx_modules.c
make[1]: *** [objs/Makefile:2254: objs/addon/src/ngx_http_fastdfs_module.o] Error 1
make[1]: *** Waiting for unfinished jobs....
make[1]: Leaving directory '/www/server/nginx/src/nginx-1.24.0'
make: *** [Makefile:10: build] Error 2
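All four errors point in the same direction: the fastdfs-nginx-module sources do not match the FastDFS headers installed under /usr/include/fastdfs, so identifiers such as FDFS_CONNECT_TIMEOUT no longer exist there and fdfs_get_ini_context_from_tracker() is being called with an outdated argument list. Rather than patching common.c by hand, the usual fix is to rebuild everything from matching release tags. Below is a minimal sketch, assuming the sources are checked out under /usr/local/src and using the tag pairing commonly cited for the V6.06 server shown by fdfs_monitor earlier; the tags (libfastcommon V1.0.43, fastdfs V6.06, fastdfs-nginx-module V1.22) are assumptions, so match them to whatever your server actually runs:

# Tags below are assumptions; pick the ones matching your installed FastDFS
cd /usr/local/src
git clone -b V1.0.43 https://github.com/happyfish100/libfastcommon.git
git clone -b V6.06 https://github.com/happyfish100/fastdfs.git
git clone -b V1.22 https://github.com/happyfish100/fastdfs-nginx-module.git

# Install the libraries first so /usr/include/fastdfs matches the module source
cd libfastcommon && ./make.sh && ./make.sh install && cd ..
cd fastdfs && ./make.sh && ./make.sh install && cd ..

# Re-run nginx's configure with the matching module source. Run "nginx -V"
# first and keep every existing configure argument, adding only this module.
cd /www/server/nginx/src/nginx-1.24.0
./configure --add-module=/usr/local/src/fastdfs-nginx-module/src
make && make install

After a clean build, copy fastdfs-nginx-module/src/mod_fastdfs.conf to /etc/fdfs/ and make sure its tracker_server line points at the new internal IP; otherwise nginx will compile fine and then hit the same connection errors the storage container logged.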