Installing Ceph 9.0.0 from Source


Environment:
RHEL 6.4, 64-bit

192.168.9.250  mds & monitor
192.168.9.237  osd
192.168.9.238  osd

Download the source tarball from the official site:
http://download.ceph.com/tarballs/
Prefer this over GitHub; a plain GitHub clone will fail to build (see the submodule discussion below).

Download sites for the missing RPM packages:
1. http://rpm.pbone.net/
2. http://www.rpmfind.net/
3. http://apt.sw.be/redhat/
The first is the most useful.


Note: this walkthrough follows an installation guide found online; the set of missing packages is essentially the same, so just download the builds matching your OS.
For the test installs I used ceph-0.68.tar.gz and ceph-9.1.0.tar.gz; both built without problems, the main difference being the initialization step.
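For example (a sketch; ceph-9.0.0.tar.gz is the tarball used in the build steps below):
#wget http://download.ceph.com/tarballs/ceph-9.0.0.tar.gz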


1. Install the required packages
yum install libtool gcc-c++ libuuid-devel keyutils-libs-devel fuse-devel libedit-devel libatomic_ops-devel libaio-devel boost-devel expat-devel

2. Build the Ceph source
#tar -xvf ceph-9.0.0.tar.gz
#cd ceph-9.0.0
#./autogen.sh
#./configure --without-tcmalloc

If ./configure stops with one of the following errors, the corresponding dependency is missing; install it and rerun:
***************************************************************************************************
checking whether -lpthread saves the day... yes
checking for uuid_parse in -luuid... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libuuid not found
See `config.log' for more details.
Install:
#yum install libuuid-devel


***************************************************************************************************
checking for __res_nquery in -lresolv... yes
checking for add_key in -lkeyutils... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libkeyutils not found
See `config.log' for more details.
Install:
#yum install keyutils-libs-devel


***************************************************************************************************
checking pkg-config is at least version 0.9.0... yes
checking for CRYPTOPP... no
checking for library containing _ZTIN8CryptoPP14CBC_EncryptionE... no
checking for NSS... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no suitable crypto library found
See `config.log' for more details.
Install (from downloaded RPMs):
#rpm -ivh cryptopp-5.6.2-2.el6.x86_64.rpm
#rpm -ivh cryptopp-devel-5.6.2-2.el6.x86_64.rpm


***************************************************************************************************
checking pkg-config is at least version 0.9.0... yes
checking for CRYPTOPP... no
checking for library containing _ZTIN8CryptoPP14CBC_EncryptionE... -lcryptopp
checking for NSS... no
configure: using cryptopp for cryptography
checking for FCGX_Init in -lfcgi... no
checking for fuse_main in -lfuse... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no FUSE found (use --without-fuse to disable)
See `config.log' for more details.
Install:
#yum install fuse-devel


***************************************************************************************************
checking for fuse_main in -lfuse... yes
checking for fuse_getgroups... no
checking jni.h usability... no
checking jni.h presence... no
checking for jni.h... no
checking for LIBEDIT... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: No usable version of libedit found.
See `config.log' for more details.
Install:
#yum install libedit-devel


***************************************************************************************************
checking for LIBEDIT... yes
checking atomic_ops.h usability... no
checking atomic_ops.h presence... no
checking for atomic_ops.h... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: no libatomic-ops found (use --without-libatomic-ops to disable)
See `config.log' for more details.
Install:
#yum install libatomic_ops-devel
(Alternatively, follow the hint and disable it: #./configure --without-tcmalloc --without-libatomic-ops)


***************************************************************************************************
checking for LIBEDIT... yes
checking for snappy_compress in -lsnappy... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libsnappy not found
See `config.log' for more details.
Install (from downloaded RPMs):
#rpm -ivh snappy-1.0.5-1.el6.x86_64.rpm
#rpm -ivh snappy-devel-1.0.5-1.el6.x86_64.rpm


***************************************************************************************************
checking for snappy_compress in -lsnappy... yes
checking for leveldb_open in -lleveldb... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libleveldb not found
See `config.log' for more details.
Install (from downloaded RPMs):
#rpm -ivh leveldb-1.7.0-2.el6.x86_64.rpm
#rpm -ivh leveldb-devel-1.7.0-2.el6.x86_64.rpm


***************************************************************************************************
checking for leveldb_open in -lleveldb... yes
checking for io_submit in -laio... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: libaio not found
See `config.log' for more details.
Install:
#yum install libaio-devel


***************************************************************************************************
checking for sys/wait.h that is POSIX.1 compatible... yes
checking boost/spirit/include/classic_core.hpp usability... no
checking boost/spirit/include/classic_core.hpp presence... no
checking for boost/spirit/include/classic_core.hpp... no
checking boost/spirit.hpp usability... no
checking boost/spirit.hpp presence... no
checking for boost/spirit.hpp... no
configure: error: in `/cwn/ceph/ceph-0.60':
configure: error: "Can't find boost spirit headers"
See `config.log' for more details.
Install:
#yum install boost-devel


***************************************************************************************************
checking if more special flags are required for pthreads... no
checking whether to check for GCC pthread/shared inconsistencies... yes
checking whether -pthread is sufficient with -shared... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating scripts/gtest-config
config.status: creating build-aux/config.h
config.status: executing depfiles commands
config.status: executing libtool commands
Output like the above means ./configure --without-tcmalloc succeeded and a Makefile was generated; now build for real:


***************************************************************************************************
#make
If the build fails with the following error, expat-devel is not installed:
CXX osdmaptool.o
CXXLD osdmaptool
CXX ceph_dencoder-ceph_dencoder.o
test/encoding/ceph_dencoder.cc: In function 'int main(int, const char**)':
test/encoding/ceph_dencoder.cc:196: note: variable tracking size limit exceeded with -fvar-tracking-assignments, retrying without
CXX ceph_dencoder-rgw_dencoder.o
In file included from rgw/rgw_dencoder.cc:6:
rgw/rgw_acl_s3.h:9:19: error: expat.h: No such file or directory
In file included from rgw/rgw_acl_s3.h:12,
from rgw/rgw_dencoder.cc:6:
rgw/rgw_xml.h:62: error: 'XML_Parser' does not name a type
make[3]: *** [ceph_dencoder-rgw_dencoder.o] Error 1
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make: *** [all-recursive] Error 1
Install:
#yum install expat-devel


***************************************************************************************************
/usr/bin/ld: cannot find -ledit
collect2: ld returned 1 exit status
make[3]: *** [libcephfs.la] Error 1
make[3]: Leaving directory `/root/ceph-9.0.0/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/root/ceph-9.0.0/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/root/ceph-9.0.0/src'
make: *** [all-recursive] Error 1


Fixing "/usr/bin/ld: cannot find -lxxx"
Problem:
When compiling an application or library from source on Linux, you often get error messages of the form:
/usr/bin/ld: cannot find -lxxx
The exact library varies with the source being compiled, e.g.:
/usr/bin/ld: cannot find -lc
/usr/bin/ld: cannot find -lltdl
/usr/bin/ld: cannot find -lXtst
Here xxx is the library name; the examples above refer to libc.so, libltdl.so and libXtst.so.
The naming rule is lib + name (xxx) + .so.
This happens for one of three reasons:
1. The corresponding library is not installed.
2. The installed library version is wrong.
3. The symbolic link for the .so file is wrong and does not point at the actual library file.
Solution:
(1) First check whether the symbolic link for the corresponding .so file under /usr/lib is correct; if not, repoint it at the right target.
(2) If the symlink is fine, the library itself is missing; installing it solves the problem.
(3) To install a missing library, using the three errors above as examples
(error 1: libc missing; error 2: libltdl missing; error 3: libXtst missing),
search for the matching package first, then install it:
apt-cache search libc-dev
apt-cache search libltdl-dev
apt-cache search libXtst-dev
Example:
Compiling the gcin input method from source fails with:
/usr/bin/ld: cannot find -lXtst
Inspection showed the symbolic link for the .so file was wrong. Fix:
cd /usr/lib
ln -s libXtst.so.6 libXtst.so
In our case the missing -ledit was fixed the same way:
[root@tomcat ceph-9.0.0]# ln -s /usr/lib64/libedit.so.0.0.27 /usr/lib/libedit.so
If there is no libXtst.so under /usr/lib at all, the libXtst library simply is not installed:
apt-get install libxtst-dev
If the problem is the library search path instead, go to /etc/ld.so.conf.d and edit one of the conf files there
(or create your own, which is easier to keep track of), add the directory containing the library, then run ldconfig to refresh the cache.
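For example, a minimal sketch of the ld.so.conf.d approach, assuming the libraries live under /usr/local/lib (where this build installs them; the conf file name is arbitrary):
#echo '/usr/local/lib' > /etc/ld.so.conf.d/usr-local-lib.conf
#ldconfig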




***************************************************************************************************
No rule to make target `erasure-code/jerasure/jerasure/src/cauchy.c'
Building straight from the Ceph src.rpm succeeds with no errors at all.
Building from a GitHub checkout, however, consistently fails with:
No rule to make target `erasure-code/jerasure/jerasure/src/cauchy.c'
Others have hit the same problem, but no solution had been posted online.
A closer look reveals the cause: the Ceph repository on GitHub has a number of submodules, e.g.:
src/erasure-code/jerasure/gf-complete
src/erasure-code/jerasure/jerasure
src/libs3
src/rocksdb
A plain `git clone https://github.com/ceph/ceph.git` does not fetch the submodule code.
The build error above occurs because the build needs erasure-code/jerasure/jerasure/src/cauchy.c,
and without the submodules that file simply does not exist.


Fix:
Fetch the submodule code as well:
git submodule update --init --recursive
The src.rpm builds fine because the required submodule sources are already bundled into it.
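A complete fetch from GitHub therefore looks like this (a sketch; the clone URL is the one given above):
#git clone https://github.com/ceph/ceph.git
#cd ceph
#git submodule update --init --recursive    // pulls gf-complete, jerasure, libs3, rocksdb, ...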


***************************************************************************************************
Fix for a libmongoclient build error
On CentOS with g++ 4.4.6 and boost 1.41, building libmongoclient fails:
In file included from /usr/include/boost/thread/future.hpp:12,
                 from /usr/include/boost/thread.hpp:24,
                 from src/mongo/util/file_allocator.cpp:22:
/usr/include/boost/exception_ptr.hpp:43: error: looser throw specifier for 'virtual boost::exception_ptr::~exception_ptr()'
/usr/include/boost/exception/detail/exception_ptr_base.hpp:26: error:   overriding 'virtual boost::exception_detail::exception_ptr_base::~exception_ptr_base() throw ()'
scons: *** [build/mongo/util/file_allocator.o] Error 1
scons: building terminated because of errors.
Edit /usr/include/boost/exception_ptr.hpp as follows, adding one line:
~exception_ptr() throw() { }
That is, in include/boost/exception_ptr.hpp (around lines 90-95):
             {
             }

+            ~exception_ptr() throw() { }
+
             operator unspecified_bool_type() const
                 {
                 return _empty() ? 0 : &exception_ptr::bad_alloc_;




***************************************************************************************************
CXXLD ceph-dencoder
CXXLD cephfs
CXXLD librados-config
CXXLD ceph-fuse
CCLD rbd-fuse
CCLD mount.ceph
CXXLD rbd
CXXLD rados
CXXLD ceph-syn
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
Making all in man
make[1]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/man'
Output like the above means the build succeeded; next, install Ceph:




***************************************************************************************************
#make install
libtool: install: ranlib /usr/local/lib/rados-classes/libcls_kvs.a
libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/sbin" ldconfig -n /usr/local/lib/rados-classes
----------------------------------------------------------------------
Libraries have been installed in:
/usr/local/lib/rados-classes


If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
- add LIBDIR to the `LD_LIBRARY_PATH' environment variable
during execution
- add LIBDIR to the `LD_RUN_PATH' environment variable
during linking
- use the `-Wl,-rpath -Wl,LIBDIR' linker flag
- have your system administrator add LIBDIR to `/etc/ld.so.conf'


See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
test -z "/usr/local/lib/ceph" || /bin/mkdir -p "/usr/local/lib/ceph"
/usr/bin/install -c ceph_common.sh '/usr/local/lib/ceph'
make[4]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[3]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/src'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/src'
Making install in man
make[1]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[2]: Entering directory `/cwn/ceph/ceph-0.60/man'
make[2]: Nothing to be done for `install-exec-am'.
test -z "/usr/local/share/man/man8" || /bin/mkdir -p "/usr/local/share/man/man8"
/usr/bin/install -c -m 644 ceph-osd.8 ceph-mds.8 ceph-mon.8 mkcephfs.8 ceph-fuse.8 ceph-syn.8 crushtool.8 osdmaptool.8 monmaptool.8 ceph-conf.8 ceph-run.8 ceph.8 mount.ceph.8 radosgw.8 radosgw-admin.8 ceph-authtool.8 rados.8 librados-config.8 rbd.8 ceph-clsinfo.8 ceph-debugpack.8 cephfs.8 ceph-dencoder.8 ceph-rbdnamer.8 rbd-fuse.8 '/usr/local/share/man/man8'
make[2]: Leaving directory `/cwn/ceph/ceph-0.60/man'
make[1]: Leaving directory `/cwn/ceph/ceph-0.60/man'
At this point the build and installation of Ceph are complete.




***************************************************************************************************
3. Configure Ceph
Every node except the client needs a ceph.conf, and it must be identical everywhere. The file is expected under /etc/ceph;
if you did not change the prefix at ./configure time, the installed samples are under /usr/local/etc/ceph.
#cp ./src/sample.* /usr/local/etc/ceph/
#mv /usr/local/etc/ceph/sample.ceph.conf /usr/local/etc/ceph/ceph.conf
#mv /usr/local/etc/ceph/sample.fetch_config /usr/local/etc/ceph/fetch_config
#cp ./src/init-ceph /etc/init.d/ceph
#mkdir /var/log/ceph    // for the logs; ceph does not create this directory itself yet
Notes:
(1) When deploying each server, the files to edit are the two under /usr/local/etc/ceph: ceph.conf (the cluster configuration file) and
fetch_config (a sync script that scp's ceph.conf to the other nodes; I found it did not work for me, so I later wrote my own sync script, shown further below).
(2) For the OSD nodes, besides loading the btrfs kernel module, install btrfs-progs (#yum install btrfs-progs) so that the mkfs.btrfs command is available.
You also need to create a partition or logical volume on each OSD node for Ceph to use: either a disk partition (e.g. /dev/sda2) or a logical volume (e.g. /dev/mapper/VolGroup-lv_ceph),
as long as it matches what ceph.conf says. A sketch of creating such a logical volume follows.
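A minimal sketch of creating the logical volume referenced later in ceph.conf (the physical volume /dev/sdb and the 100G size are examples only; mkcephfs --mkbtrfs formats the volume, so no mkfs is needed here):
#pvcreate /dev/sdb
#vgcreate VolGroup /dev/sdb
#lvcreate -L 100G -n lv_ceph VolGroup    // yields /dev/mapper/VolGroup-lv_ceph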

Add Ceph's bin directory to the PATH in your shell profile.
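For example (a sketch, assuming the default ./configure prefix of /usr/local):
#echo 'export PATH=/usr/local/bin:/usr/local/sbin:$PATH' >> /etc/profile
#source /etc/profile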

[root@ceph_mds ceph]# cat /usr/local/etc/ceph/ceph.conf
;
; Sample ceph ceph.conf file.
;
; This file defines cluster membership, the various locations
; that Ceph stores data, and any other runtime options.

; If a 'host' is defined for a daemon, the init.d start/stop script will
; verify that it matches the hostname (or else ignore it). If it is
; not defined, it is assumed that the daemon is intended to start on
; the current host (e.g., in a setup with a startup.conf on each
; node).

; The variables $type, $id and $name are available to use in paths
; $type = The type of daemon, possible values: mon, mds and osd
; $id = The ID of the daemon, for mon.alpha, $id will be alpha
; $name = $type.$id

; For example:
; osd.0
; $type = osd
; $id = 0
; $name = osd.0

; mon.beta
; $type = mon
; $id = beta
; $name = mon.beta

; global
[global]
; enable secure authentication
; auth supported = cephx

; allow ourselves to open a lot of files
max open files = 131072

; set log file
log file = /var/log/ceph/$name.log
; log_to_syslog = true ; uncomment this line to log to syslog

; set up pid files
pid file = /var/run/ceph/$name.pid
; If you want to run an IPv6 cluster, set this to true. Dual-stack isn't possible
;ms bind ipv6 = true
; monitors
; You need at least one. You need at least three if you want to
; tolerate any node failures. Always create an odd number.
[mon]
mon data = /data/mon$id
; If you are using for example the RADOS Gateway and want to have your newly created
; pools a higher replication level, you can set a default
;osd pool default size = 3
; You can also specify a CRUSH rule for new pools
; Wiki: http://ceph.newdream.net/wiki/Custom_data_placement_with_CRUSH
;osd pool default crush rule = 0

; Timing is critical for monitors, but if you want to allow the clocks to drift a
; bit more, you can specify the max drift.
;mon clock drift allowed = 1

; Tell the monitor to backoff from this warning for 30 seconds
;mon clock drift warn backoff = 30

; logging, for debugging monitor crashes, in order of
; their likelihood of being helpful :)
debug ms = 1
;debug mon = 20
;debug paxos = 20
;debug auth = 20

[mon.0]
host = ceph
mon addr = 192.168.9.245:6789


; mds
; You need at least one. Define two to get a standby.
[mds]
; where the mds keeps its secret encryption keys
keyring = /data/keyring.$name
; mds logging to debug issues.
;debug ms = 1
;debug mds = 20

[mds.0]
host = ceph
; osd
; You need at least one. Two if you want data to be replicated.
; Define as many as you like.

[osd]
sudo = true
; This is where the osd expects its data
osd data = /data/osd$id

; Ideally, make the journal a separate disk or partition.
; 1-10GB should be enough; more if you have fast or many
; disks. You can use a file under the osd data dir if need be
; (e.g. /data/$name/journal), but it will be slower than a
; separate disk or partition.
; This is an example of a file-based journal.
osd journal = /data/$name/journal
osd journal size = 1000 ; journal size, in megabytes

; If you want to run the journal on a tmpfs (don't), disable DirectIO
;journal dio = false

; You can change the number of recovery operations to speed up recovery
; or slow it down if your machines can't handle it
; osd recovery max active = 3

; osd logging to debug osd issues, in order of likelihood of being
; helpful
;debug ms = 1
;debug osd = 20
;debug filestore = 20
;debug journal = 20

; ### The below options only apply if you're using mkcephfs
; ### and the devs options
; The filesystem used on the volumes
osd mkfs type = btrfs
; If you want to specify some other mount options, you can do so.
; for other filesystems use 'osd mount options $fstype'
osd mount options btrfs = rw,noatime
; The options used to format the filesystem via mkfs.$fstype
; for other filesystems use 'osd mkfs options $fstype'
; osd mkfs options btrfs =

[osd.0]
host = ceph1
btrfs devs = /dev/mapper/VolGroup-lv_ceph

[osd.1]
host = ceph2
btrfs devs = /dev/mapper/VolGroup-lv_ceph



***************************************************************************************************
4. Configure the network
(1) Set each node's hostname, and make the nodes reachable from one another by hostname
Reference: http://soft.chinabyte.com/os/281/11563281.shtml
Edit /etc/sysconfig/network to set the node's own hostname;
edit /etc/hosts to map the other nodes' hostnames to their IPs;
reboot, then verify with the hostname command. An example /etc/hosts is sketched below.
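A sketch of the /etc/hosts entries for the environment above (the hostnames match the ones used later in this document; adjust to your setup):
192.168.9.250  ceph_mds
192.168.9.237  ceph_osd0
192.168.9.238  ceph_osd1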
(2) Allow the nodes to ssh to each other without a password
This is the usual public/private key mechanism: to log in to another host, hand it your public key first, and it will authenticate you against that key.
Example, on node A run:
#ssh-keygen -t dsa
This generates a few files under ~/.ssh; the one we need is id_dsa.pub, node A's public key. Append its contents to the authorized_keys file
under ~/.ssh/ on node B (create the file if it does not exist); node A can then ssh to node B without a password. A sketch of the whole exchange:


***************************************************************************************************
[root@ceph ~]# ceph -s
Traceback (most recent call last):
  File "/usr/local/bin/ceph", line 94, in
    import rados
ImportError: No module named rados




Fix: copy the Python bindings (rados.py and friends) somewhere on the ceph CLI's module search path:
[root@ceph pybind]# cp /root/ceph-9.1.0/src/pybind/* /usr/local/bin/


[root@ceph bin]# ceph -s
Traceback (most recent call last):
  File "/usr/local/bin/ceph", line 919, in
    retval = main()
  File "/usr/local/bin/ceph", line 665, in main
    conffile=conffile)
  File "/usr/local/bin/rados.py", line 259, in validate_func
    return f(*args, **kwargs)
  File "/usr/local/bin/rados.py", line 284, in __init__
    self.librados = CDLL(library_path if library_path is not None else 'librados.so.2')
  File "/usr/lib64/python2.7/ctypes/__init__.py", line 360, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: librados.so.2: cannot open shared object file: No such file or directory


Missing shared library (.so): "cannot open shared object file: No such file or directory"
There are three common fixes:
1. Use ln to link the needed .so files into one of the default directories, /usr/lib or /lib:
ln -s /where/you/install/lib/*.so /usr/lib
sudo ldconfig
2. Extend LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=/where/you/install/lib:$LD_LIBRARY_PATH
sudo ldconfig
3. Edit /etc/ld.so.conf, then refresh:
vim /etc/ld.so.conf
add /where/you/install/lib
sudo ldconfig


[root@ceph bin]# whereis librados.so.2
librados.so: /usr/local/lib/librados.so.2 /usr/local/lib/librados.so
[root@ceph bin]# ln -s /usr/local/lib/librados.so.2 /usr/lib


***************************************************************************************************
5. Create the file system and start the cluster. Run the following on the monitor node!
#mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
This ran into the following problems:
(1) scp: /etc/ceph/ceph.conf: No such file or directory
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
[/usr/local/etc/ceph/fetch_config /tmp/fetched.ceph.conf.2693]
The authenticity of host 'ceph_mds (127.0.0.1)' can't be established.
RSA key fingerprint is a7:c8:b8:2e:86:ea:89:ff:11:93:e9:29:68:b5:7c:11.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph_mds' (RSA) to the list of known hosts.
ceph.conf 100% 4436 4.3KB/s 00:00 
temp dir is /tmp/mkcephfs.tIHQnX8vkw
preparing monmap in /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.tIHQnX8vkw/monmap
/usr/local/bin/monmaptool: generated fsid f998ee83-9eba-4de2-94e3-14f235ef840c
epoch 0
fsid f998ee83-9eba-4de2-94e3-14f235ef840c
last_changed 2013-05-31 08:22:52.972189
created 2013-05-31 08:22:52.972189
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.tIHQnX8vkw/monmap (1 monitors)
=== osd.0 === 
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.0b3c65941572123eb704d9d614411fc1
scp: /etc/ceph/ceph.conf: No such file or directory




***************************************************************************************************
In ceph 9.1.0, nodes are no longer initialized this way; see the official documentation for the details.
The installation procedure differs before and after version 0.80.5.


http://docs.ceph.com/docs/v0.80.5/install/manual-deployment/#monitor-bootstrapping
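For reference, the documented monitor bootstrap boils down to roughly the following (a heavily abbreviated sketch of the linked page, not a substitute for it; the hostname, IP and generated fsid are examples):
#ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
#monmaptool --create --add ceph_mds 192.168.9.250 --fsid $(uuidgen) /tmp/monmap
#ceph-mon --mkfs -i ceph_mds --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring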


***************************************************************************************************
Fix: write a script that syncs the config file to both /etc/ceph and /usr/local/etc/ceph on every node (create the /etc/ceph directory by hand first):
[root@ceph_mds ceph]# cat cp_ceph_conf.sh
cp /usr/local/etc/ceph/ceph.conf /etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd0:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd0:/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd1:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd1:/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd2:/usr/local/etc/ceph/ceph.conf
scp /usr/local/etc/ceph/ceph.conf root@ceph_osd2:/etc/ceph/ceph.conf
(2)
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
temp dir is /tmp/mkcephfs.hz1EcPJjtu
preparing monmap in /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.hz1EcPJjtu/monmap
/usr/local/bin/monmaptool: generated fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
epoch 0
fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
last_changed 2013-05-31 08:39:48.198656
created 2013-05-31 08:39:48.198656
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.hz1EcPJjtu/monmap (1 monitors)
=== osd.0 ===
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.2e991ed41f1cdca1149725615a96d0be
umount: /data/osd0: not mounted
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 12:39:04.073438 7f02cd9ac760 -1 filestore(/data/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 12:39:04.362010 7f02cd9ac760 -1 created object store /data/osd0 journal /data/osd.0/journal for osd.0 fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
2013-05-31 12:39:04.362074 7f02cd9ac760 -1 auth: error reading file: /data/osd0/keyring: can't open /data/osd0/keyring: (2) No such file or directory
2013-05-31 12:39:04.362280 7f02cd9ac760 -1 created new key in keyring /data/osd0/keyring
collecting osd.0 key
=== osd.1 === 
pushing conf and monmap to ceph_osd1:/tmp/mkfs.ceph.9a9f67ff6e7516b415d30f0a89bfe0dd
umount: /data/osd1: not mounted
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 08:39:13.237718 7ff0a2fe4760 -1 filestore(/data/osd1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 08:39:13.524175 7ff0a2fe4760 -1 created object store /data/osd1 journal /data/osd.1/journal for osd.1 fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
2013-05-31 08:39:13.524241 7ff0a2fe4760 -1 auth: error reading file: /data/osd1/keyring: can't open /data/osd1/keyring: (2) No such file or directory
2013-05-31 08:39:13.524430 7ff0a2fe4760 -1 created new key in keyring /data/osd1/keyring
collecting osd.1 key
=== osd.2 ===
pushing conf and monmap to ceph_osd2:/tmp/mkfs.ceph.51a8af4b24b311fcc2d47eed2cd714ca
umount: /data/osd2: not mounted
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 09:01:49.371853 7ff422eb1760 -1 filestore(/data/osd2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 09:01:49.583061 7ff422eb1760 -1 created object store /data/osd2 journal /data/osd.2/journal for osd.2 fsid 62fdb8b1-8d98-42f2-9cef-b95e2ad7bd43
2013-05-31 09:01:49.583123 7ff422eb1760 -1 auth: error reading file: /data/osd2/keyring: can't open /data/osd2/keyring: (2) No such file or directory
2013-05-31 09:01:49.583312 7ff422eb1760 -1 created new key in keyring /data/osd2/keyring
collecting osd.2 key
=== mds.alpha === 
creating private key for mds.alpha keyring /data/keyring.mds.alpha
creating /data/keyring.mds.alpha
bufferlist::write_file(/data/keyring.mds.alpha): failed to open file: (2) No such file or directory
could not write /data/keyring.mds.alpha
can't open /data/keyring.mds.alpha: can't open /data/keyring.mds.alpha: (2) No such file or directory
failed: '/usr/local/sbin/mkcephfs -d /tmp/mkcephfs.hz1EcPJjtu --init-daemon mds.alpha'
Fix: create the file by hand:
#mkdir /data
#touch /data/keyring.mds.alpha
[Creation succeeds]
[root@ceph_mds ceph]# mkcephfs -a -c /usr/local/etc/ceph/ceph.conf --mkbtrfs
temp dir is /tmp/mkcephfs.v9vb0zOmJ5
preparing monmap in /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool --create --clobber --add 0 222.31.76.178:6789 --print /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool: monmap file /tmp/mkcephfs.v9vb0zOmJ5/monmap
/usr/local/bin/monmaptool: generated fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
epoch 0
fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
last_changed 2013-05-31 08:50:21.797571
created 2013-05-31 08:50:21.797571
0: 222.31.76.178:6789/0 mon.0
/usr/local/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.v9vb0zOmJ5/monmap (1 monitors)
=== osd.0 ===
pushing conf and monmap to ceph_osd0:/tmp/mkfs.ceph.8912ed2e34cfd2477c2549354c03faa3
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 12:49:36.548329 7f67d293e760 -1 journal check: ondisk fsid 919417f1-0a79-4463-903c-3fc9df8ca0f8 doesn't match expected 3b3d2772-4981-46fd-bbcd-b11957c77d47, invalid (someone else's?) journal
2013-05-31 12:49:36.953666 7f67d293e760 -1 filestore(/data/osd0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 12:49:37.244334 7f67d293e760 -1 created object store /data/osd0 journal /data/osd.0/journal for osd.0 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
2013-05-31 12:49:37.244397 7f67d293e760 -1 auth: error reading file: /data/osd0/keyring: can't open /data/osd0/keyring: (2) No such file or directory
2013-05-31 12:49:37.244580 7f67d293e760 -1 created new key in keyring /data/osd0/keyring
collecting osd.0 key
=== osd.1 === 
pushing conf and monmap to ceph_osd1:/tmp/mkfs.ceph.69d388555243635efea3c5976d001b64
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 08:49:45.012858 7f82a3d52760 -1 journal check: ondisk fsid 28f23b77-6f77-47b3-b946-7eda652d4488 doesn't match expected 65a75a4f-b639-4eab-91d6-00c985118862, invalid (someone else's?) journal
2013-05-31 08:49:45.407962 7f82a3d52760 -1 filestore(/data/osd1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 08:49:45.696990 7f82a3d52760 -1 created object store /data/osd1 journal /data/osd.1/journal for osd.1 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
2013-05-31 08:49:45.697052 7f82a3d52760 -1 auth: error reading file: /data/osd1/keyring: can't open /data/osd1/keyring: (2) No such file or directory
2013-05-31 08:49:45.697238 7f82a3d52760 -1 created new key in keyring /data/osd1/keyring
collecting osd.1 key
=== osd.2 ===
pushing conf and monmap to ceph_osd2:/tmp/mkfs.ceph.686b9d63c840a05a6eed5b5781f10b27
umount: /dev/mapper/VolGroup-lv_ceph: not mounted
WARNING! - Btrfs Btrfs v0.20-rc1 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using
fs created label (null) on /dev/mapper/VolGroup-lv_ceph
nodesize 4096 leafsize 4096 sectorsize 4096 size 100.00GB
Btrfs Btrfs v0.20-rc1
2013-05-31 09:12:20.708733 7fa54ae8f760 -1 journal check: ondisk fsid dc21285e-3bde-4f53-9424-d059540ab920 doesn't match expected cae83f10-d633-48d1-b324-a64849eca974, invalid (someone else's?) journal
2013-05-31 09:12:21.057154 7fa54ae8f760 -1 filestore(/data/osd2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2013-05-31 09:12:21.253689 7fa54ae8f760 -1 created object store /data/osd2 journal /data/osd.2/journal for osd.2 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
2013-05-31 09:12:21.253749 7fa54ae8f760 -1 auth: error reading file: /data/osd2/keyring: can't open /data/osd2/keyring: (2) No such file or directory
2013-05-31 09:12:21.253931 7fa54ae8f760 -1 created new key in keyring /data/osd2/keyring
collecting osd.2 key
=== mds.alpha === 
creating private key for mds.alpha keyring /data/keyring.mds.alpha
creating /data/keyring.mds.alpha
Building generic osdmap from /tmp/mkcephfs.v9vb0zOmJ5/conf
/usr/local/bin/osdmaptool: osdmap file '/tmp/mkcephfs.v9vb0zOmJ5/osdmap'
/usr/local/bin/osdmaptool: writing epoch 1 to /tmp/mkcephfs.v9vb0zOmJ5/osdmap
Generating admin key at /tmp/mkcephfs.v9vb0zOmJ5/keyring.admin
creating /tmp/mkcephfs.v9vb0zOmJ5/keyring.admin
Building initial monitor keyring
added entity mds.alpha auth auth(auid = 18446744073709551615 key=AQCXnKhRiL/QHhAA091/MQGD25V54smKBz959w== with 0 caps)
added entity osd.0 auth auth(auid = 18446744073709551615 key=AQDhK6hROEuRDhAA9uCsjB++Szh8sJy3CUgeoA== with 0 caps)
added entity osd.1 auth auth(auid = 18446744073709551615 key=AQBpnKhR0EKMKRAAzNWvZgkDWrPSuZaSttBdsw== with 0 caps)
added entity osd.2 auth auth(auid = 18446744073709551615 key=AQC1oahReP4fDxAAR0R0HTNfVbs6VMybLIU9qg== with 0 caps)
=== mon.0 === 
/usr/local/bin/ceph-mon: created monfs at /data/mon0 for mon.0
placing client.admin keyring in /etc/ceph/keyring






***************************************************************************************************
[Start]
#/etc/init.d/ceph -a start    // if necessary, stop the firewall first (#service iptables stop)
[root@ceph_mds ceph]# /etc/init.d/ceph -a start
=== mon.0 ===
Starting Ceph mon.0 on ceph_mds...
starting mon.0 rank 0 at 222.31.76.178:6789/0 mon_data /data/mon0 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba
=== mds.alpha ===
Starting Ceph mds.alpha on ceph_mds...
starting mds.alpha at :/0
=== osd.0 ===
Mounting Btrfs on ceph_osd0:/data/osd0
Scanning for Btrfs filesystems
Starting Ceph osd.0 on ceph_osd0...
starting osd.0 at :/0 osd_data /data/osd0 /data/osd.0/journal
=== osd.1 ===
Mounting Btrfs on ceph_osd1:/data/osd1
Scanning for Btrfs filesystems
Starting Ceph osd.1 on ceph_osd1...
starting osd.1 at :/0 osd_data /data/osd1 /data/osd.1/journal
=== osd.2 ===
Mounting Btrfs on ceph_osd2:/data/osd2
Scanning for Btrfs filesystems
Starting Ceph osd.2 on ceph_osd2...
starting osd.2 at :/0 osd_data /data/osd2 /data/osd.2/journal


***************************************************************************************************
[Check the Ceph cluster status]
[root@ceph_mds ceph]# ceph -s
health HEALTH_OK
monmap e1: 1 mons at {0=222.31.76.178:6789/0}, election epoch 2, quorum 0 0
osdmap e7: 3 osds: 3 up, 3 in
pgmap v432: 768 pgs: 768 active+clean; 9518 bytes data, 16876 KB used, 293 GB / 300 GB avail
mdsmap e4: 1/1/1 up {0=alpha=up:active}
[root@ceph_mds ceph]# ceph df
GLOBAL:
    SIZE    AVAIL    RAW USED    %RAW USED
    300M    293M     16876       0


POOLS:
    NAME        ID    USED    %USED    OBJECTS
    data        0     0       0        0
    metadata    1     9518    0        21
    rbd         2     0       0        0
Question: the space accounting looks off: "ceph -s" reports 300 GB, while "ceph df" reports 300M.


***************************************************************************************************
6. Mount on the client
#mkdir /mnt/ceph
#mount -t ceph ceph_mds:/ /mnt/ceph
This failed with the following errors:
(1)
[root@localhost ~]# mount -t ceph ceph_mds:/ /mnt/ceph/
mount: wrong fs type, bad option, bad superblock on ceph_mds:/,
missing codepage or helper program, or other error
(for several filesystems (e.g. nfs, cifs) you might
need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try
dmesg | tail or so
Check #dmesg:
ceph: Unknown symbol ceph_con_keepalive (err 0)
ceph: Unknown symbol ceph_create_client (err 0)
ceph: Unknown symbol ceph_calc_pg_primary (err 0)
ceph: Unknown symbol ceph_osdc_release_request (err 0)
ceph: Unknown symbol ceph_con_open (err 0)
ceph: Unknown symbol ceph_flags_to_mode (err 0)
ceph: Unknown symbol ceph_msg_last_put (err 0)
ceph: Unknown symbol ceph_caps_for_mode (err 0)
ceph: Unknown symbol ceph_copy_page_vector_to_user (err 0)
ceph: Unknown symbol ceph_msg_new (err 0)
ceph: Unknown symbol ceph_msg_type_name (err 0)
ceph: Unknown symbol ceph_pagelist_truncate (err 0)
ceph: Unknown symbol ceph_release_page_vector (err 0)
ceph: Unknown symbol ceph_check_fsid (err 0)
ceph: Unknown symbol ceph_pagelist_reserve (err 0)
ceph: Unknown symbol ceph_pagelist_append (err 0)
ceph: Unknown symbol ceph_calc_object_layout (err 0)
ceph: Unknown symbol ceph_get_direct_page_vector (err 0)
ceph: Unknown symbol ceph_osdc_wait_request (err 0)
ceph: Unknown symbol ceph_osdc_new_request (err 0)
ceph: Unknown symbol ceph_pagelist_set_cursor (err 0)
ceph: Unknown symbol ceph_calc_file_object_mapping (err 0)
ceph: Unknown symbol ceph_monc_got_mdsmap (err 0)
ceph: Unknown symbol ceph_osdc_readpages (err 0)
ceph: Unknown symbol ceph_con_send (err 0)
ceph: Unknown symbol ceph_zero_page_vector_range (err 0)
ceph: Unknown symbol ceph_osdc_start_request (err 0)
ceph: Unknown symbol ceph_compare_options (err 0)
ceph: Unknown symbol ceph_msg_dump (err 0)
ceph: Unknown symbol ceph_buffer_new (err 0)
ceph: Unknown symbol ceph_put_page_vector (err 0)
ceph: Unknown symbol ceph_pagelist_release (err 0)
ceph: Unknown symbol ceph_osdc_sync (err 0)
ceph: Unknown symbol ceph_destroy_client (err 0)
ceph: Unknown symbol ceph_copy_user_to_page_vector (err 0)
ceph: Unknown symbol __ceph_open_session (err 0)
ceph: Unknown symbol ceph_alloc_page_vector (err 0)
ceph: Unknown symbol ceph_monc_do_statfs (err 0)
ceph: Unknown symbol ceph_monc_validate_auth (err 0)
ceph: Unknown symbol ceph_osdc_writepages (err 0)
ceph: Unknown symbol ceph_parse_options (err 0)
ceph: Unknown symbol ceph_str_hash (err 0)
ceph: Unknown symbol ceph_pr_addr (err 0)
ceph: Unknown symbol ceph_buffer_release (err 0)
ceph: Unknown symbol ceph_con_init (err 0)
ceph: Unknown symbol ceph_destroy_options (err 0)
ceph: Unknown symbol ceph_con_close (err 0)
ceph: Unknown symbol ceph_msgr_flush (err 0)
Key type ceph registered
libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6)
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
I noticed the client's mount command had no ceph type at all (no "mount.ceph"), while every node we configured does have mount.ceph, so I built the latest ceph-0.60 on the client as well. A quick check is sketched below.
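(a sketch of checking what the client actually has; mount -t ceph needs the kernel client, and the mount.ceph helper resolves names and options)
#modprobe ceph && lsmod | grep ceph    // is the kernel client available?
#which mount.ceph                      // is the userspace mount helper installed?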
(2) After building and installing ceph-0.60, mount still failed the same way; dmesg shows:
#dmesg | tail
Key type ceph unregistered
Key type ceph registered
libceph: loaded (mon/osd proto 15/24, osdmap 5/6 5/6)
ceph: loaded (mds proto 32)
libceph: parse_ips bad ip 'ceph_mds'
libceph: no secret set (for auth_x protocol)
libceph: error -22 on auth protocol 2 init
libceph: client4102 fsid 652b09fb-bbbf-424c-bd49-8218d75465ba


The root cause finally turned out to be that mount also needs a user name and secret key; the exact command is:
#mount.ceph ceph_mds:/ /mnt/ceph -v -o name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
[root@localhost ~]# mount.ceph ceph_mds:/ /mnt/ceph -v -o name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
parsing options: name=admin,secret=AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
The name and secret values in the command above come from the monitor's /etc/ceph/keyring file:
[root@ceph_mds ceph]# cat /etc/ceph/keyring
[client.admin]
key = AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==
Notes:
1. To mount the Ceph file system you may use the mount command if you know the monitor host IP address(es), or use the mount.ceph utility to resolve the monitor host name(s) into IP address(es) for you.
2. mount options:
    -v, --verbose: verbose mode.
    -o, --options opts: options are specified with a -o flag followed by a comma-separated string of options.
3. mount.ceph reference: http://ceph.com/docs/master/man/8/mount.ceph/
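As an aside, mount.ceph also accepts a secretfile= option, which keeps the key off the command line (a sketch; the file path is arbitrary):
#echo 'AQCXnKhRgMltJRAAi0WMqr+atKFPaIV4Aja4hQ==' > /etc/ceph/admin.secret
#mount -t ceph ceph_mds:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.secret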


***************************************************************************************************
Check the mount on the client:
[root@localhost ~]# df -h
Filesystem                    Size  Used  Avail  Use%  Mounted on
/dev/mapper/VolGroup-lv_root   50G   13G    35G   27%  /
tmpfs                         2.0G     0   2.0G    0%  /dev/shm
/dev/sda1                     477M   48M   405M   11%  /boot
/dev/mapper/VolGroup-lv_home  405G   71M   385G    1%  /home
222.31.76.178:/               300G  6.1G   294G    3%  /mnt/ceph


P.S. Posts online claim that, to avoid typing the secret on every mount, you can add the following to ceph.conf (and remember to sync it to the other nodes); in my tests it had no effect, so for now I mount as shown above. If you can see what I got wrong, please let me know.
[mount /]
     allow = %everyone
[Solution]
Per the official documentation (http://ceph.com/docs/master/rados/operations/authentication/), truly disabling authentication at mount time requires adding the following under [global] in the config file:
For version 0.51 and later:
auth cluster required = none
auth service required = none
auth client required = none
For version 0.50 and earlier:
auth supported = none
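Putting it together, the relevant ceph.conf fragment on 0.51+ would look like this (sync the file to every node afterwards):
[global]
        auth cluster required = none
        auth service required = none
        auth client required = none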
Official note: If your cluster environment is relatively safe, you can offset the computation expense of running authentication. We do not recommend it. However, it may be easier during setup and/or troubleshooting to temporarily disable authentication.

***************************************************************************************************

This completes the Ceph installation and configuration; the Ceph distributed file system can now be used under /mnt/ceph on the client.
I will be running functional verification tests on Ceph shortly; a test report will follow.

Note: building 9.1.0 on an el6 system runs into many toolchain and library problems; avoid el6 if you can.






Source: ITPUB blog, http://blog.itpub.net/29500582/viewspace-1831377/
