Linux Learning: The ZFS File System and the zpool Command in Detail

This article walks through the zpool command of the ZFS file system on Linux, explaining how to create, manage, and inspect ZFS storage pools. It covers the basic operations of creating, expanding, destroying, and checking a zpool, to help readers master advanced ZFS file-system management.

ZPOOL(8)                                                                   System Manager's Manual                                                                   ZPOOL(8)

NAME
     zpool — configure ZFS storage pools

SYNOPSIS
     zpool -?
     zpool add [-fgLnP] [-o property=value] pool vdev...
     zpool attach [-f] [-o property=value] pool device new_device
     zpool clear pool [device]
     zpool create [-dfn] [-m mountpoint] [-o property=value]... [-o feature@feature=value] [-O file-system-property=value]... [-R root] pool vdev...
     zpool destroy [-f] pool
     zpool detach pool device
     zpool events [-vHfc] [pool]
     zpool export [-a] [-f] pool...
     zpool get [-Hp] [-o field[,field]...] all|property[,property]... pool...
     zpool history [-il] [pool]...
     zpool import [-D] [-c cachefile|-d dir]
     zpool import -a [-DfmN] [-F [-n] [-T] [-X]] [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root]
     zpool import [-Dfm] [-F [-n] [-T] [-X]] [-c cachefile|-d dir] [-o mntopts] [-o property=value]... [-R root] [-s] pool|id [newpool [-t]]
     zpool iostat [[[-c SCRIPT] [-lq]]|-rw] [-T u|d] [-ghHLpPvy] [[pool...]|[pool vdev...]|[vdev...]] [interval [count]]
     zpool labelclear [-f] device
     zpool list [-HgLpPv] [-o property[,property]...] [-T u|d] [pool]... [interval [count]]
     zpool offline [-f] [-t] pool device...
     zpool online [-e] pool device...
     zpool reguid pool
     zpool reopen pool
     zpool remove pool device...
     zpool replace [-f] [-o property=value] pool device [new_device]
     zpool scrub [-s | -p] pool...
     zpool set property=value pool
     zpool split [-gLnP] [-o property=value]... [-R root] pool newpool [device]...
     zpool status [-c SCRIPT] [-gLPvxD] [-T u|d] [pool]... [interval [count]]
     zpool sync [pool]...
     zpool upgrade
     zpool upgrade -v
     zpool upgrade [-V version] -a|pool...

DESCRIPTION
     The zpool command configures ZFS storage pools.  A storage pool is a collection of devices that provides physical storage and data replication for ZFS datasets.  All
     datasets within a storage pool share the same space.  See zfs(8) for information on managing datasets.

   Virtual Devices (vdevs)
     A "virtual device" describes a single device or a collection of devices organized according to certain performance and fault characteristics.  The following virtual
     devices are supported:

     disk    A block device, typically located under /dev.  ZFS can use individual slices or partitions, though the recommended mode of operation is to use whole disks.  A
             disk can be specified by a full path, or it can be a shorthand name (the relative portion of the path under /dev).  A whole disk can be specified by omitting
             the slice or partition designation.  For example, sda is equivalent to /dev/sda.  When given a whole disk, ZFS automatically labels the disk, if necessary.

     file    A regular file.  The use of files as a backing store is strongly discouraged.  It is designed primarily for experimental purposes, as the fault tolerance of a
             file is only as good as the file system of which it is a part.  A file must be specified by a full path.

     mirror  A mirror of two or more devices.  Data is replicated in an identical fashion across all components of a mirror.  A mirror with N disks of size X can hold X
             bytes and can withstand (N-1) devices failing before data integrity is compromised.

     raidz, raidz1, raidz2, raidz3
             A variation on RAID-5 that allows for better distribution of parity and eliminates the RAID-5 "write hole" (in which data and parity become inconsistent after a
              power loss).  Data and parity are striped across all disks within a raidz group.

             A raidz group can have single-, double-, or triple-parity, meaning that the raidz group can sustain one, two, or three failures, respectively, without losing
             any data.  The raidz1 vdev type specifies a single-parity raidz group; the raidz2 vdev type specifies a double-parity raidz group; and the raidz3 vdev type
             specifies a triple-parity raidz group.  The raidz vdev type is an alias for raidz1.

             A raidz group with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P device(s) failing before data integrity is
             compromised.  The minimum number of devices in a raidz group is one more than the number of parity disks.  The recommended number is between 3 and 9 to help
             increase performance.

     spare   A special pseudo-vdev which keeps track of available hot spares for a pool.  For more information, see the Hot Spares section.

     log     A separate intent log device.  If more than one log device is specified, then writes are load-balanced between devices.  Log devices can be mirrored.  However,
             raidz vdev types are not supported for the intent log.  For more information, see the Intent Log section.

     cache   A device used to cache storage pool data.  A cache device cannot be configured as a mirror or raidz group.  For more information, see the Cache Devices section.

     Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks.  Mirrors of mirrors (or other combinations) are not allowed.

     A pool can have any number of virtual devices at the top of the configuration (known as "root vdevs").  Data is dynamically distributed across all top-level devices to
     balance data among devices.  As new virtual devices are added, ZFS automatically places data on the newly available devices.

     Virtual devices are specified one at a time on the command line, separated by whitespace.  The keywords mirror and raidz are used to distinguish where a group ends and
     another begins.  For example, the following creates two root vdevs, each a mirror of two disks:

     # zpool create mypool mirror sda sdb mirror sdc sdd
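
      As a further sketch (the pool and device names are assumed), the following instead creates a single double-parity raidz group of four disks; per the capacity formula
      above, it holds approximately (4-2)*X bytes for disks of size X:

      # zpool create mypool raidz2 sda sdb sdc sdd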

   Device Failure and Recovery
     ZFS supports a rich set of mechanisms for handling device failure and data corruption.  All metadata and data is checksummed, and ZFS automatically repairs bad data
     from a good copy when corruption is detected.

     In order to take advantage of these features, a pool must make use of some form of redundancy, using either mirrored or raidz groups.  While ZFS supports running in a
     non-redundant configuration, where each root vdev is simply a disk or file, this is strongly discouraged.  A single case of bit corruption can render some or all of
     your data unavailable.

     A pool's health status is described by one of three states: online, degraded, or faulted.  An online pool has all devices operating normally.  A degraded pool is one in
     which one or more devices have failed, but the data is still available due to a redundant configuration.  A faulted pool has corrupted metadata, or one or more faulted
     devices, and insufficient replicas to continue functioning.
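
      A pool's health can be inspected with the zpool status command; for example, zpool status -x reports only pools that are exhibiting errors or are otherwise unavailable:

      # zpool status -x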

      The health of a top-level vdev, such as a mirror or raidz device, is potentially impacted by the state of its associated vdevs, or component devices.  A top-level vdev
     or component device is in one of the following states:

     DEGRADED  One or more top-level vdevs is in the degraded state because one or more component devices are offline.  Sufficient replicas exist to continue functioning.

               One or more component devices is in the degraded or faulted state, but sufficient replicas exist to continue functioning.  The underlying conditions are as
               follows:

               ·   The number of checksum errors exceeds acceptable levels and the device is degraded as an indication that something may be wrong.  ZFS continues to use the
                   device as necessary.

                ·   The number of I/O errors exceeds acceptable levels.  The device could not be marked as faulted because there are insufficient replicas to continue
                    functioning.

     FAULTED   One or more top-level vdevs is in the faulted state because one or more component devices are offline.  Insufficient replicas exist to continue functioning.

               One or more component devices is in the faulted state, and insufficient replicas exist to continue functioning.  The underlying conditions are as follows:

               ·   The device could be opened, but the contents did not match expected values.

               ·   The number of I/O errors exceeds acceptable levels and the device is faulted to prevent further use of the device.

     OFFLINE   The device was explicitly taken offline by the zpool offline command.

     ONLINE    The device is online and functioning.

     REMOVED   The device was physically removed while the system was running.  Device removal detection is hardware-dependent and may not be supported on all platforms.

     UNAVAIL   The device could not be opened.  If a pool is imported when a device was unavailable, then the device will be identified by a unique identifier instead of its
               path since the path was never correct in the first place.

     If a device is removed and later re-attached to the system, ZFS attempts to put the device online automatically.  Device attach detection is hardware-dependent and
     might not be supported on all platforms.
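
      For example, a device can be explicitly taken offline and later brought back online (the pool and device names below are assumed):

      # zpool offline mypool sdb
      # zpool online mypool sdb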

   Hot Spares
     ZFS allows devices to be associated with pools as "hot spares".  These devices are not actively used in the pool, but when an active device fails, it is automatically
     replaced by a hot spare.  To create a pool with hot spares, specify a spare vdev with any number of devices.  For example,

     # zpool create pool mirror sda sdb spare sdc sdd

      Spares can be shared across multiple pools, and can be added with the zpool add command and removed with the zpool remove command.  Once a spare replacement is
      initiated, a new spare vdev is created within the configuration that will remain there until the original device is replaced.  At this point, the hot spare becomes
      available again if another device fails.
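
      For example, a spare could be added to, and later removed from, an existing pool (the device name sde is assumed):

      # zpool add pool spare sde
      # zpool remove pool sde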

      If a pool has a shared spare that is currently being used, the pool cannot be exported, since other pools may use this shared spare, which may lead to potential data
      corruption.

     An in-progress spare replacement can be canceled by detaching the hot spare.  If the original faulted device is detached, then the hot spare assumes its place in the
     configuration, and is removed from the spare list of all active pools.
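
      For example, an in-progress spare replacement is canceled by detaching the hot spare itself (here the spare sdc from the pool created above):

      # zpool detach pool sdc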

     Spares cannot replace log devices.

   Intent Log
     The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous transactions.  For instance, databases often require their transactions to be on stable storage
     devices when returning from a system call.  NFS and other applications can also use fsync(2) to ensure data stability.  By default, the intent log is allocated from
     blocks within the main pool.  However, it might be possible to get better performance using separate intent log devices such as NVRAM or a dedicated disk.  For example:

     # zpool create pool sda sdb log sdc

     Multiple log devices can also be specified, and they can be mirrored.  See the EXAMPLES section for an example of mirroring multiple log devices.
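
      For instance, a minimal sketch of a pool whose intent log is a mirror of two devices (device names assumed):

      # zpool create pool sda sdb log mirror sdc sdd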

     Log devices can be added, replaced, attached, detached, and imported and exported as part of the larger pool.  Mirrored log devices can be removed by specifying the
     top-level mirror for the log.

   Cache Devices
      Devices can be added to a storage pool as "cache devices".  These devices provide an additional layer of caching between main memory and disk.  For read-heavy
      workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of this working set to be served from
      low latency media.  Using cache devices provides the greatest performance improvement for random read workloads of mostly static content.

     To create a pool with cache devices, specify a cache vdev with any number of devices.  For example:

     # zpool create pool sda sdb cache sdc sdd

     Cache devices cannot be mirrored or part of a raidz configuration.  If a read error is encountered on a cache device, that read I/O is reissued to the original storage
     pool device, which might be part of a mirrored or raidz configuration.

     The content of the cache devices is considered volatile, as is the case with other system caches.
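
      Cache devices can also be added to an existing pool with zpool add; a minimal sketch (the device name sde is assumed):

      # zpool add pool cache sde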

   Properties
     Each pool has several properties associated with it.  Some properties are read-only statistics while others are configurable and change the behavior of the pool.

     The following are read-only properties:

     available
             Amount of storage available within the pool.  This property can also be referred to by its shortened column name, avail.

     capacity
             Percentage of pool space used.  This property can also be referred to by its shortened column name, cap.

     expandsize
              Amount of uninitialized space within the pool or device that can be used to increase the total capacity of the pool.  Uninitialized space consists of any space
              on an EFI labeled vdev which has not been brought online (e.g., using zpool online -e).  This space occurs when a LUN is dynamically expanded.

     fragmentation
             The amount of fragmentation in the pool.

     free    The amount of free space available in the pool.

     freeing
             After a file system or snapshot is destroyed, the space it was using is returned to the pool asynchronously.  freeing is the amount of space remaining to be
             reclaimed.  Over time freeing will decrease while free increases.

     health  The current health of the pool.  Health can be one of ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, UNAVAIL.

     guid    A unique identifier for the pool.

     size    Total size of the storage pool.

     unsupported@feature_guid
              Information about unsupported features that are enabled on the pool.  See zpool-features(5) for details.
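
      Read-only properties can be queried with zpool get, or shown as columns with zpool list; for example (the pool name mypool is assumed):

      # zpool get size,free,capacity,health mypool
      # zpool list -o name,size,cap,health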