A Quick and Dirty Guide to iSCSI Implementation
- Rajeev Karamchedu

Introduction
How does it work?
Implementation Environment
Target Setup on the Filer
Initiator Setup
Automounting iSCSI
How Fast Is It?
Implementing iSCSI Security
iSNS, Internet Storage Name Service
References



Introduction
iSCSI (Internet Small Computer System Interface) is an IP-based storage networking standard for connecting data storage entities, developed by the Internet Engineering Task Force (IETF). Essentially a protocol carrying SCSI commands over IP networks, iSCSI can be a cheap and effective alternative for storage administrators who are otherwise struggling to strike a balance between the high cost and complexity of FC-SAN (Fibre Channel Storage Area Network) implementations and the performance limitations and overhead of NAS (Network Attached Storage).

For the sake of completeness, it should be noted that there are two other protocols similar to iSCSI in that they use IP networks to move data packets: iFCP and FCIP. Neither protocol is implemented on a server; rather, both are used to connect remote SANs together. iFCP translates a Fibre Channel frame to IP and then translates it back once it reaches the destination. FCIP is a tunneling protocol that sends FC frames intact over IP.

How does it work?
iSCSI is a client-server protocol, except that the client process requesting data is called an “initiator” and the server process serving the data is called a “target”. In an iSCSI implementation, the storage offered by the target appears as a local disk to the initiator, and the client can perform block-based operations on it. Because this involves partitioning the disk and creating a file system on it, no more than one initiator can have read/write access to a specific iSCSI device at a time. However, an iSCSI device can be mounted read-only on multiple initiators.

When the initiator (client) receives a request for a piece of data that resides on the target, it translates that request into pure SCSI commands and assembles them into IP packets (additionally performing encapsulation and/or encryption). Those packets can then be sent over IP networks to the target (server), without the distance limitations that traditional SCSI suffers. On the target, the iSCSI protocol extracts the SCSI commands (performing decapsulation and/or decryption in the process) and sends them to the SCSI controller. The protocol is also bi-directional, so the data can be sent back as a reply to the request.
The downside of iSCSI is that all this processing can be a burden on the client’s CPU. That can be solved by using iSCSI HBAs, which are much like Fibre Channel HBAs but for iSCSI. iSCSI HBAs offload the processing overhead from the primary CPU(s) to the dedicated HBA, and at the time of this writing they are much cheaper than FC HBAs. iSCSI HBAs are referred to as “hardware initiators”. “Software initiators” are simply drivers loaded into the OS that enable iSCSI communications. Currently, the following software initiators are available for free: the iSCSI Microsoft Windows Initiator software, the iSCSI Linux Initiator software and the iSCSI NetWare Initiator software. Those who have accounts with Cisco can also download the Cisco iSCSI driver for Linux, HPUX, Windows and Solaris.
Implementation Environment
This document reflects the following environment-specific implementation details and command sets. Consult the References section for links to other implementations.

 

Target: Network Appliance Filer running Data ONTAP 6.5.x
Initiator: Intel P4 Desktop running Suse 9.2 (2.6.8-24.14-smp)

 

Fibre Channel implementations use WWPNs (World Wide Port Names) and WWNNs (World Wide Node Names) to identify devices; iSCSI uses iSCSI addresses. Once all the targets and initiators are assigned and configured with iSCSI addresses, these “nodes” need to learn about each other, akin to DNS resolution. Once they are aware of each other, they can communicate and the initiator can access the storage on the target. Details of how this is accomplished are discussed later in this document.

iSCSI addresses come in two formats: iSCSI Qualified Name (iqn) or IEEE EUI-64 (eui) format. A (very) brief discussion of the two formats is below.

 

  • iqn Format: iqn.yyyy-mm.backward_naming_authority:unique_device_name
    For example, on a Linux box, the iSCSI initiator address may look like this:
    iqn.1987-05.com.cisco:01.12a14c2dcab9

     

     

  • eui Format: eui.nnnnnnnnnnnnnnnn
    The eui format is used when a manufacturer is already registered with the IEEE Registration Authority and uses EUI-64 formatted worldwide unique names for its products.

 

The iSCSI driver/initiator software found in today’s Linux distributions is an open-source version of the one Cisco provides and uses the iqn format. The Network Appliance file servers also use the iqn format for target addressing.
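
On the Linux initiator, the node name that the software initiator presents is recorded in /etc/initiatorname.iscsi (the same file referenced later during igroup creation). A quick way to check it, assuming the standard InitiatorName= layout used by this driver:

suse92:# cat /etc/initiatorname.iscsi
InitiatorName=iqn.1987-05.com.cisco:01.12a14c2dcab9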

 


Target Setup on the Filer

The iSCSI protocol on the filer is implemented via a software driver, much like the Linux software driver. Data ONTAP includes a virtual adapter for iSCSI communications (iswt). This adapter provides two logical iSCSI adapters, iswta and iswtb, for active and failover purposes.

 

  1. Make sure you have a valid iSCSI license. Depending upon your filer support status, make and model, you may be able to get an iSCSI license for free via the NetApp iSCSI enabling program. Once the license is installed, you can stop and start the iscsi service on the filer’s console using these commands:

     

    filer:> iscsi {start|stop}
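
    Optionally, confirm the service state. Assuming this Data ONTAP release also supports the status subcommand, the check looks like:

    filer:> iscsi status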

     

     

  2. The filer gives the iscsi interface a default target nodename, generated from the filer’s serial number. You can keep it or change it, following the naming schemes described above:

    filer:> iscsi nodename
    iSCSI target nodename: iqn.1992-08.com.netapp:sn.11111111
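
    If you prefer a more descriptive nodename, the same command should accept a replacement iqn-format name as an argument; this is a sketch, and the name below is purely illustrative:

    filer:> iscsi nodename iqn.1992-08.com.netapp:filer01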

     

  3. Set the filer iSCSI security to none for now; we will discuss how to add security later.


    filer:> iscsi security default -s none
    default sec is none

     

     

  4. Setup LUNs on the Target (Filer):

    Under Data ONTAP, disk drives are logically carved into volumes. Volumes can hold files, LUNs and qtrees. A qtree is a sub-directory of the root of a volume; qtrees can hold LUNs, files and other sub-directories. It is also possible to create two qtrees, one containing LUNs for iSCSI purposes and another containing files/directories for NFS purposes. For the purposes of testing, we chose to have two qtrees in a volume, one for iSCSI LUNs and one for NFS sharing. Performance data is collected individually on these areas for comparison.

    When setting up volumes on the filer for iSCSI purposes, note the following NetApp recommendations:

    1. Do not create LUNs in the filer’s root volume (/vol/vol0)
    2. Use qtrees to separate iSCSI LUNs and non-iSCSI files/directories
    3. Use qtrees to hold all LUNs for a particular host
    4. Set snapshot reserve percentage to 0% (or disable snapshots)
    5. Turn off automatic snapshot scheduling
    6. Ensure that the volume option create_ucode is turned ON.

    We used a spare volume called “scratch” that we already had on the filer. We created a qtree called “iscsi” and in it we created a LUN called iscsi0.

    filer:> qtree create /vol/scratch/iscsi

     

    We also disabled snapshots on this volume completely.

    filer:> vol options scratch
    nosnap=on, nosnapdir=on, minra=off, no_atime_update=off,
    raidtype=raid4, raidsize=8, nvfail=off, snapmirrored=off,
    resyncsnaptime=60, create_ucode=on, convert_ucode=off,
    maxdirsize=10240, fs_size_fixed=off, create_reserved=off,
    fractional_reserve=100
    filer>
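
    For reference, the snapshot and Unicode recommendations above map to commands along these lines; this is a sketch, and the exact syntax may vary slightly between Data ONTAP releases:

    filer:> snap reserve scratch 0
    filer:> snap sched scratch 0 0 0
    filer:> vol options scratch nosnap on
    filer:> vol options scratch nosnapdir on
    filer:> vol options scratch create_ucode on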


    Use the Data ONTAP “lun setup” command to create and set up LUNs for iSCSI. lun setup is a command-line wizard that walks you through the process of creating LUNs and the associated igroups, and of mapping the two together. Think of an igroup as a Unix group: it contains a list of initiators as members. When a LUN is mapped to an igroup, all the initiators listed in the igroup have access to that LUN, provided the underlying iSCSI authentication is successful (more on that later).

    filer:> lun setup

    This setup will take you through the steps needed to create LUNs
    and to make them accessible by initiators. You can type ^C (Control-C)
    at any time to abort the setup and no unconfirmed changes will be made
    to the system.

    Do you want to create a LUN? [y]: y
    OS type of LUN (image/solaris/windows/hpux/aix/linux) [image]:

    A LUN path must be absolute. A LUN can only reside in a volume
    or qtree root. For example, to create a LUN with the name “lun0”
    in the qtree root /vol/vol1/q0, specify the path as “/vol/vol1/q0/lun0”.

    Enter LUN path: /vol/scratch/iscsi/iscsi0

    A LUN can be created with or without space reservations being enabled.
    Space reservation guarantees that data writes to that LUN will never fail.

    Do you want the LUN to be space reserved? [y]: y

    Size for a LUN is specified in bytes. You can use single-character
    multiplier suffixes: b(sectors), k(KB), m(MB), g(GB) or t(TB).

    Enter LUN size: 4g

    You can add a comment string to describe the contents of the LUN.
    Please type a string (without quotes), or hit ENTER if you don’t
    want to supply a comment.

    Enter comment string: iSCSI Testing

    The LUN will be accessible to an initiator group. You can use an
    existing group name, or supply a new name to create a new initiator
    group. Enter `?’ to see existing initiator group names.

    Name of initiator group[]: iSCSI_Test

    Type of initiator group iSCSI_Test (FCP/iSCSI)[FCP]: iscsi

    An iSCSI initiator group is a collection of initiator node names. Each
    node name can begin with either `eui.’ or `iqn.’ and should be in the
    following formats: eui.{EUI-64 address} or iqn.yyyy-mm.{reserved domain
    name}:{any string}.
    Eg. iqn.2001-04.com.acme:storage.tape.sys1.xyz or eui.02004567A25678D

    You can separate node names by commas. Enter `?’ to display a list of
    connected initiators. Hit ENTER when you are done adding port names to this
    group.

    [You can obtain the initiator node name from /etc/initiatorname.iscsi on the initiator system.]

    Enter comma separated nodenames: iqn.1987-05.com.cisco:01.12a14c2dcab9

    The initiator group has an associated OS type. The following are
    currently supported: solaris, windows, hpux, aix, linux, or default

    OS type of initiator group “iSCSI_Test” [windows]: default

    The LUN will be accessible to all the initiators in the
    initiator group. Enter `?’ to display LUNs already in use
    by one or more initiators in group “iSCSI_Test”.

    LUN ID at which initiator group “iSCSI_Test” sees “/vol/scratch/iscsi/iscsi0” [0]:

    [ If you press Enter to accept the default, Data ONTAP issues the lowest
    valid unallocated LUN ID to map it to the initiator, starting with
    zero. Alternatively, you can enter any valid number. For information
    about valid LUN IDs for your host initiator, see the documentation
    provided with your iSCSI host Initiator Support Kit or with your SAN
    Host Attach Kit for iSCSI Protocol on your host. ]

    After pressing “Enter” here, the program displays the summary of the selections for confirmation.

    LUN Path : /vol/scratch/iscsi/iscsi0
    OS Type : image
    Size : 4g
    Comment : iSCSI Testing
    Initiator Group : iSCSI_Test
    Initiator Group Type : iscsi
    Initiator Group Members : iqn.1987-05.com.cisco:01.12a14c2dcab9
    Mapped to LUN-ID : 0

    Do you want to accept this configuration? [y]
    Do you want to create another LUN? [n]
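
    For repeatable setups, the same result can also be achieved without the wizard using the individual lun and igroup commands. The following is only a sketch; the linux ostype arguments are illustrative, so substitute the types appropriate for your host:

    filer:> lun create -s 4g -t linux /vol/scratch/iscsi/iscsi0
    filer:> igroup create -i -t linux iSCSI_Test iqn.1987-05.com.cisco:01.12a14c2dcab9
    filer:> lun map /vol/scratch/iscsi/iscsi0 iSCSI_Test 0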

At this point, the target is pretty much set up and ready to receive iSCSI requests.

Initiator Setup

  1. In SUSE 9.2, the open-source version of the Cisco iSCSI driver comes pre-installed. The module has to be loaded using the command “modprobe iscsi”. Ensure that it is loaded:

     

    suse92:# lsmod | grep iscsi
    iscsi 208428 2
    scsi_mod 121412 6 iscsi,sg,st,sr_mod,libata,sd_mod
    suse92: #

     

  2. The configuration file for iscsi is /etc/iscsi.conf. Make sure the file has the following permissions; at least one vendor has shipped this file in the past with world-readable permissions. This file is used to store the authentication information (dealt with later), so ensure that it is secure:

    suse92:# ls -l /etc/iscsi.conf
    -rw------- 1 root root 17264 Jun 11 16:36 /etc/iscsi.conf
    suse92:# cp /etc/iscsi.conf /etc/iscsi.conf.orig
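
    If your distribution shipped the file with looser permissions, restricting it to root is a one-liner (chmod 600 yields the -rw------- mode shown above):

    suse92:# chmod 600 /etc/iscsi.conf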

  3. Edit the file /etc/iscsi.conf and set the following values:
    DiscoveryAddress=[Enter the Netapp Filer IP Address]
    Continuous=no
    HeaderDigest=never
    DataDigest=never
    ImmediateData=yes

     

  4. Start the iscsi service
    suse92: # /etc/init.d/iscsi start

     

  5. Verify that you are able to see the iSCSI devices from the target using the iscsi-ls command:
    suse92: # /sbin/iscsi-ls
    *******************************************************************************
    Cisco iSCSI Driver Version … 4.0.197 ( 21-May-2004 )
    *******************************************************************************
    TARGET NAME : iqn.1992-08.com.netapp:sn.11111111
    TARGET ALIAS :
    HOST NO : 2
    BUS NO : 0
    TARGET ID : 0
    TARGET ADDRESS : 172.xx.xx.xx:3260
    SESSION STATUS : ESTABLISHED AT Sat Jun 11 16:36:35 2005
    NO. OF PORTALS : 1
    PORTAL ADDRESS 1 : 172.xx.xx.xx:3260,2
    SESSION ID : ISID 00023d000001 TSID 585
    *******************************************************************************
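
    Depending on the driver version, iscsi-ls may also accept a -l flag that additionally lists the LUNs and local device information for each session; treat this as an optional, version-dependent check:

    suse92: # /sbin/iscsi-ls -l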

     

  6. You can also check the /var/lib/iscsi/bindings file to find out which iSCSI targets have been discovered by the iSCSI daemon, along with their bus and target IDs. This file is where the Linux iSCSI driver stores and retrieves its persistent binding information.
    suse92:/var/lib/iscsi # cat /var/lib/iscsi/bindings

    # iSCSI bindings, file format version 1.0.
    # NOTE: this file is automatically maintained by the iSCSI daemon.
    # You should not need to edit this file under most circumstances.
    # If iSCSI targets in this file have been permanently deleted, you
    # may wish to delete the bindings for the deleted targets.
    #
    # Format:
    # bus target iSCSI
    # id id TargetName
    #
    0 0 iqn.1992-08.com.netapp:sn.11111111

     

  7. Look in /var/log/messages to see the activity:

    Jun 11 16:18:24 suse92 kernel: iSCSI: bus 0 target 0 = iqn.1992-08.com.netapp:sn.11111111
    Jun 11 16:18:24 suse92 kernel: iSCSI: bus 0 target 0 portal 0 = address 172.xx.xx.xx port 3260 group 2
    Jun 11 16:18:24 suse92 kernel: iSCSI: starting timer thread at 418674703
    Jun 11 16:18:24 suse92 kernel: iSCSI: bus 0 target 0 trying to establish session to portal 0, address 172.xx.xx.xx port 3260 group 2
    Jun 11 16:18:24 suse92 kernel: iSCSI: bus 0 target 0 established session #1, portal 0, address 172.xx.xx.xx port 3260 group 2
    Jun 11 16:18:24 suse92 kernel: Vendor: NETAPP Model: LUN Rev: 0.2
    Jun 11 16:18:24 suse92 kernel: Type: Direct-Access ANSI SCSI revision: 04
    Jun 11 16:18:24 suse92 kernel: iSCSI: session iscsi_bus 0 target id 0 recv_cmd cdb 0x0, status 0x2, response 0x0, senselen 18, key 06, ASC/ASCQ 29/00, itt 4 to (4 0 0 0), iqn.1992-08.com.netapp:sn.11111111
    Jun 11 16:18:24 suse92 kernel: iSCSI: Sense f0000600 0000000a 00000000 29000000 0000
    Jun 11 16:18:24 suse92 kernel: SCSI device sda: 8388608 512-byte hdwr sectors (4295 MB)
    Jun 11 16:18:24 suse92 kernel: SCSI device sda: drive cache: write through
    Jun 11 16:18:24 suse92 kernel: SCSI device sda: 8388608 512-byte hdwr sectors (4295 MB)
    Jun 11 16:18:24 suse92 kernel: SCSI device sda: drive cache: write through
    Jun 11 16:18:24 suse92 kernel: sda: sda1
    Jun 11 16:18:24 suse92 kernel: Attached scsi disk sda at scsi4, channel 0, id 0, lun 0
    Jun 11 16:18:24 suse92 kernel: Attached scsi generic sg0 at scsi4, channel 0, id 0, lun 0, type 0

     

  8. We see that the 4 GB LUN we created on the target is available to the initiator as SCSI device sda. Because SCSI device nodes are mapped to LUNs dynamically upon detection, there is no guarantee that the device node mappings will be consistent across reboots. To provide a consistent device mapping, the Linux iSCSI driver provides a device tree under /dev/iscsi containing device paths to the iSCSI LUNs. For example, the device path /dev/iscsi/bus0/target0/lun0/disk refers to the entire LUN with ID 0, on target 0, bus 0. The last part, “disk”, refers to the entire disk (in Solaris terms, it would be slice 2, s2).

    In practice, however, we noticed that this is not the case on the SUSE system. Using udev, SUSE prefers to create the persistent device names under /dev/disk. In our setup, after loading the module, we found the device entries created under /dev/disk/by-id, named after iqn.1992-08.com.netapp:sn.11111111; they are simply symlinks to the /dev/sdX devices.

    To the client system (initiator), this is just another SCSI disk. You can now use OS-specific tools such as fdisk and mkfs to partition the disk and create the filesystem of your choice. As you create partitions on the disk, new device paths become available in the /dev/disk/by-id directory for mounting. In our setup, we created one primary partition with id 1, gave it all the available space, and built an ext3 filesystem on it.
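
    The commands were roughly as follows. This is a sketch: fdisk is interactive, so its steps are summarized in a comment, and the device names assume the sda mapping seen in the log above.

    suse92:# fdisk /dev/sda        # n, p, 1, accept the defaults for the full size, then w
    suse92:# mkfs.ext3 /dev/sda1   # build the ext3 filesystem on the new partition

    After doing that, these are the entries you see in the /dev/disk/by-id directory as a result: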

    suse92:/dev/disk/by-id # ls -l
    lrwxrwxrwx 1 root root  9 Jun 12 18:46 iscsi-iqn.1992-08.com.netapp:sn.11111111-0 -> ../../sda
    lrwxrwxrwx 1 root root 10 Jun 12 18:46 iscsi-iqn.1992-08.com.netapp:sn.11111111-0p1 -> ../../sda1

    To mount the filesystem, we should use the appropriate device name:

    suse92:# mount /dev/disk/by-id/iscsi-iqn.1992-08.com.netapp:sn.11111111-0p1 /mnt
    suse92:# cd /mnt
    suse92:# df -h .
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda1             4.0G   33M  3.8G   1% /mnt

     

     

     

    You will also notice that the usable space is not quite 4 GB. That is because we have built a file system on it, which takes up some space.

At this point, we have a working iSCSI implementation. The following sections describe some of the important aspects of the implementation.

Automounting iSCSI
Note that iSCSI uses the IP protocol for discovering and accessing LUNs. During normal system boot, however, the network is not yet configured at the point when the system mounts its devices, so the iSCSI LUNs cannot yet be seen. In order to mount iSCSI LUNs at boot time, the Linux driver uses the file /etc/fstab.iscsi, whose format is identical to the OS /etc/fstab. After the system boots up, the iSCSI driver runs the command iscsi-mountall, which consults /etc/fstab.iscsi and mounts the file systems listed within.

 

suse92: cat /etc/fstab.iscsi
# /etc/fstab.iscsi file for filesystems built on iscsi devices.
#
# A typical entry here would look like:
# /dev/disk/by-id/iscsi-iqn.2001-04.example.com:storage:disk2.sys1 /mnt ext2 defaults 0 0
#
/dev/disk/by-id/iscsi-iqn.1992-08.com.netapp:sn.11111111-0p1 /mnt ext3 defaults 0 0
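
For the iscsi-mountall step to run at boot, the iscsi init script must be enabled in the appropriate runlevels. On SUSE this is typically done with chkconfig (shown here as a sketch; insserv works equivalently):

suse92:# chkconfig iscsi on
suse92:# chkconfig --list iscsi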

 

How Fast Is It?
In order to see how fast iSCSI really is in comparison with NFS, we used Bonnie, a tool for benchmarking filesystem operations. As discussed before, we have one volume on the NetApp filer called scratch that holds both the iSCSI LUN and the NFS export. We mounted the NFS share from /vol/scratch/nfs at /local/scratch_nfs, and the iSCSI LUN is mounted at /mnt. The Dell desktop we were using has about 1 GB of memory, so we asked Bonnie to create two volumes of 1 GB each to minimize caching effects (flags to the command: -s 1024 -v 2).
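
The benchmark runs were along these lines; this is a sketch in which the -d (working directory) flag is assumed, while the -s and -v values are the ones stated above:

suse92:# bonnie -d /mnt -s 1024 -v 2
suse92:# bonnie -d /local/scratch_nfs -s 1024 -v 2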

Results Summary
Protocol  File Size  Sequential Output (nosync)             Sequential Input           Random Seeks
          (MB)       per char    per block    rewrite       per char    per block      per sec
                     (KB/s)      (KB/s)       (KB/s)        (KB/s)      (KB/s)
iSCSI     2048       13408       13896        5180          7875        9766           277.2
NFS       2048       10968       11134        4474          11260       11181          338.6

Implementing iSCSI Security
In this section, we will outline how iSCSI security is implemented and managed, specific to the environment described above. In an iSCSI implementation, two complementary security mechanisms are employed.

  1. In-Band Authentication: Authentication carried out between targets and initiators, using CHAP
  2. Packet Protection: Achieved by implementing IPSec on the packets

On the NetApp target filer, the iscsi service can be set up to perform authentication. When the filer receives a login request from an initiator to begin an iSCSI session, the filer can be configured to:

  • Perform (inbound and/or outbound) CHAP-based authentication
  • Deny service (service is denied if the initiator is not in the list)
  • None - the filer is not set up to do authentication and all iSCSI sessions are allowed by default

To bolster our current iSCSI setup, we will add inbound CHAP authentication on the filer as the default method. We will specify an inbound username and password on the filer and activate CHAP as the default authentication method. We will then configure the initiator to send that same username and password to the target at connection time. This way, only those initiators that have been provided with the target’s authentication information will be able to access the LUNs.

  1. On the filer, generate a random CHAP password
    filer:> iscsi security generate
    Generated Random Secret: 0xbb86df2afa74063955e6e95607031116

     

     

  2. Define a CHAP username and password as a default authentication method
    filer:> iscsi security default -s CHAP -n iSCSI_ntap -p 0xbb86df2afa74063955e6e95607031116

     

  3. Verify the setup on the target
    filer:> iscsi security show
    default sec is CHAP Inbound password: **** Inbound username: iSCSI_ntap Outbound password: **** Outbound username:

     

  4. Log in to the initiator system and specify the authentication credentials.

    suse92:# vi /etc/iscsi.conf

    Below the “DiscoveryAddress” line, add these two lines and make sure there is a tab before each variable. The reason for the tabs is that you can specify multiple usernames and passwords for different targets; the iscsi driver parses the file using this indentation.

    DiscoveryAddress=[IP Address/HostName of the Target]
    [INSERT TAB]OutgoingUsername=iSCSI_ntap
    [INSERT TAB]OutgoingPassword=0xbb86df2afa74063955e6e95607031116

    Note that on the target the username and password are specified as the incoming username and password, while on the initiator they are configured as “OutgoingUsername” and “OutgoingPassword”. If bi-directional authentication is desired, a different username and password combination can be specified on the target as the outgoing username and password, and on the initiator as “IncomingUsername” and “IncomingPassword”, as sketched below.
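
    For bi-directional CHAP, the initiator side of /etc/iscsi.conf would then look roughly like this; the incoming username and password shown here are purely illustrative placeholders:

    DiscoveryAddress=[IP Address/HostName of the Target]
    [INSERT TAB]OutgoingUsername=iSCSI_ntap
    [INSERT TAB]OutgoingPassword=0xbb86df2afa74063955e6e95607031116
    [INSERT TAB]IncomingUsername=iSCSI_initiator
    [INSERT TAB]IncomingPassword=[secret that the target will present back]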

  5. Ensure the secure permissions on the /etc/iscsi.conf file and reload the iscsi driver:

    suse92:/tmp # ls -l /etc/iscsi.conf
    -rw------- 1 root root 17264 Jun 11 16:36 /etc/iscsi.conf
    suse92:/tmp # /etc/init.d/iscsi reload
    Reload service iSCSI done
    suse92:/tmp # tail /var/log/messages

    Jun 11 16:40:40 suse92 iscsid[5739]: authenticated by target

iSNS, Internet Storage Name Service
In the sections above, we worked with iSCSI target and initiator addresses: we edited the /etc/iscsi.conf file on the initiator and specified the target’s address. This is strikingly similar to the /etc/hosts approach to hostname resolution. For hostname resolution at scale, we use the DNS protocol and repositories that automate and manage the resolution information. A similar protocol, dubbed iSNS, is being considered by the IETF to facilitate automated discovery, management and configuration of FC and iSCSI devices on TCP/IP networks. More information about iSNS can be found in the References section.

References
