Solaris: UFS to ZFS, LiveUpgrade and Patching

This article gives a detailed overview of how we migrate our servers from UFS boot to ZFS boot on a 2-way mirror, how they are upgraded to Solaris™ 10u6 aka 10/08 with /var on a separate ZFS, and finally how "day-to-day" patching is accomplished. The main stages are divided into: moving from UFS to ZFS boot, cleaning up the old UFS boot environment, Live Upgrade to S10u6, a second ZFS clean-up stage, and day-to-day patching on ZFS.

Make sure that at each stage all zones are running or at least bootable, and that the environment variables shown below are properly set. Also give your brain a chance and think before you blindly copy-and-paste any commands mentioned in this article! There is no guarantee that the commands shown here exactly match your system; they may damage it or cause data loss if you do not adjust them to your needs!

Until now, the procedures shown here have been successfully tested on a Sun Fire V240 server (2 GB RAM, 2x 1.5 GHz UltraSPARC-IIIi, 2x 36 GB HDD, 2x 72 GB HDD) with only one running sparse zone. However, our plan is to migrate ~93% of our servers (Sun Fire 280R, 420R, V440, V490, T1000, X4500, X4600) to S10u6, and thus any further problems we encounter will be reflected in this article - i.e. watch out for changes!

setenv CD /net/install/pool1/install/sparc/Solaris_10_u6-ga1
setenv JUMPDIR /net/install/pool1/install/jumpstart
mount /local/misc
set path = ( /usr/bin /usr/sbin /local/misc/sbin )

Moving from UFS to ZFS boot

  1. update to S10u6 aka 10/08 via recommended/feature patching

    on pre-U4 systems the SUNWlucfg package is probably missing:

    pkgadd -d $CD/Solaris_10/Product SUNWlucfg

    make sure that the following patches are installed:
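
    Whether the LU packages and a given patch revision are already installed can be checked as follows (the patch ID is just a placeholder - use the IDs required for your release):

    pkginfo SUNWlucfg SUNWlur SUNWluu
    showrev -p | grep <patch-id>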

  2. determine the HDD for the new root pool aka rpool

    echo | format

    In this example we use: c0t1d0

  3. format the disk so that the whole disk can be used by ZFS

    # on x86
    fdisk -B /dev/rdsk/c0t1d0p0

    # on sparc delete all slices and assign all blocks to s0
    format -d c0t1d0

    If you want to use mirroring, make sure that s0 on HDD0 and HDD1 end up with exactly the same size (specify the size as a number of blocks; see the prtvtoc check below):

    Part      Tag    Flag     Cylinders         Size            Blocks
      0       root    wm       0 - 29772       34.41GB    (29773/0/0) 72169752
      1 unassigned    wm       0                0         (0/0/0)            0
      2     backup    wu       0 - 29772       34.41GB    (29773/0/0) 72169752
      3 unassigned    wm       0                0         (0/0/0)            0
      4 unassigned    wm       0                0         (0/0/0)            0
      5 unassigned    wm       0                0         (0/0/0)            0
      6 unassigned    wm       0                0         (0/0/0)            0
      7 unassigned    wm       0                0         (0/0/0)            0
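
    To verify that slice 0 really ends up with the same number of blocks on both disks (assuming HDD0 is c0t0d0, as on our V240), compare the partition maps:

    prtvtoc /dev/rdsk/c0t0d0s2
    prtvtoc /dev/rdsk/c0t1d0s2
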
  4. move all UFS zones to ZFS mirror 'pool1' on HDD2 and HDD3

    This allows LU to use zone snapshots instead of copying everything - and is thus orders of magnitude faster. In our example, the UFS zones live in /export/scratch/zones/ ; the pool1 mountpoint is /pool1 .

    One may use the following ksh snippet (requires GNU sed!):

    ksh
    zfs create pool1/zones
    # adjust this and the following variable
    UFSZONES="zone1 zone2 ..."
    UFSZPATH="/export/scratch/zones"
    # shut down all UFS zones
    for ZNAME in $UFSZONES ; do
        zlogin $ZNAME 'init 5'
    done
    # zonecfg command file used to re-verify and commit each zone config
    echo 'verify
    commit
    ' >/tmp/zone.cmd
    for ZNAME in $UFSZONES ; do
        # wait 'til $ZNAME is no longer listed as running
        while zoneadm list | /usr/xpg4/bin/grep -q "^$ZNAME"'$' ; do
            sleep 1
        done
        # move the zone root onto its own ZFS within pool1
        zfs create pool1/zones/$ZNAME
        mv $UFSZPATH/$ZNAME/* /pool1/zones/$ZNAME/
        chmod 700 /pool1/zones/$ZNAME
        # adjust the zonepath in the zone's configuration
        gsed -i \
            -e "/zonepath=/ s,$UFSZPATH/$ZNAME,/pool1/zones/$ZNAME," \
            /etc/zones/$ZNAME.xml
        # re-verify and commit the changed configuration, then boot the zone
        zonecfg -z $ZNAME -f /tmp/zone.cmd
        zoneadm -z $ZNAME boot
    done
    exit
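
    When the loop has finished, a quick check that all zones are running again and that their zone roots now live on pool1 does not hurt:

    zoneadm list -cv
    zfs list -r pool1/zones
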
  5. create the rpool

    zpool create -f -o failmode=continue rpool c0t1d0s0
    # due to some apparent bugs we have to set the following properties manually
    zfs set mountpoint=/rpool rpool
    zfs create -o mountpoint=legacy rpool/ROOT
    zfs create -o canmount=noauto rpool/ROOT/zfs1008BE
    zfs create rpool/ROOT/zfs1008BE/var
    zpool set bootfs=rpool/ROOT/zfs1008BE rpool
    zfs set mountpoint=/ rpool/ROOT/zfs1008BE
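
    A quick look at the freshly created pool and datasets (bootfs must point to the new root dataset):

    zpool status rpool
    zfs list -r rpool
    zpool get bootfs rpool
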
  6. create the ZFS based Boot Environment (BE)

    lucreate -c ufs1008BE -n zfs1008BE -p rpool

    ~25min on V240

    At this point one probably asks why we do not use pool1 for boot, then form a mirror of HDD0 and HDD1 and put another BE on that mirror. The answer is pretty simple: because some machines like the thumper aka X4500 can boot from two particular disks only (c5t0d0 and c5t4d0).
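
    Before moving on, it does not hurt to double-check what lucreate produced:

    lustatus
    lufslist zfs1008BE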

  7. move BE's /var to a separate ZFS within the BE

    zfs set mountpoint=/mnt rpool/ROOT/zfs1008BE
    zfs mount rpool/ROOT/zfs1008BE
    zfs create rpool/ROOT/zfs1008BE/mnt
    cd /mnt/var
    find . -depth -print | cpio -puvmdP@ /mnt/mnt/
    rm -rf /mnt/mnt/lost+found
    cd /mnt; rm -rf /mnt/var
    zfs rename rpool/ROOT/zfs1008BE/mnt rpool/ROOT/zfs1008BE/var
    zfs umount rpool/ROOT/zfs1008BE
    zfs set mountpoint=/ rpool/ROOT/zfs1008BE
    zfs set canmount=noauto rpool/ROOT/zfs1008BE/var

    ~7 min on V240
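
    To confirm that /var now lives in its own dataset within the BE:

    zfs list -r rpool/ROOT/zfs1008BE
    zfs get canmount,mountpoint rpool/ROOT/zfs1008BE/var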

  8. activate the new ZFS based BE

    luactivate zfs1008BE

    copy the output of the command (it contains the fall-back instructions) to a safe place, e.g. a USB stick
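
    Instead of copying it by hand, the output can also be captured by piping the command through tee and copying the resulting file off the machine (the file name below is arbitrary):

    luactivate zfs1008BE | tee /tmp/luactivate-zfs1008BE.txt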

  9. restart the machine

    init 6
  10. after reboot, check that everything is ok

    E.g.:

    df -h
    dmesg
    # the PATH column should show /pool1/zones/$zname-zfs1008BE for non-global zones
    zoneadm list -iv
    lustatus

Cleaning up

  1. destroy old UFS BE

    ludelete ufs1008BE

    One will get warnings about not being able to delete the ZFS filesystems of the old boot environment like /.alt.tmp.b-LN.mnt/pool1/zones/$zname - that's ok. Furthermore, one can promote their clones like /pool1/zones/$zname-zfs1008BE and then remove the old datasets including their snapshots if desired (see the sketch below).
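
    A minimal sketch of this promote-and-remove step for a single zone (zone1 is an example name - check with zfs list what lucreate actually created before destroying anything):

    # make the dataset used by the new BE independent of its origin
    zfs promote pool1/zones/zone1-zfs1008BE
    # the old zone dataset has now become the clone and can be removed
    zfs destroy pool1/zones/zone1
    # remaining snapshots can be listed and destroyed if desired
    zfs list -t snapshot -r pool1/zones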

  2. make sure everything is still ok

    init 6
  3. move all remaining filesystems from HDD0 to the root pool

    Depending on the mount hierarchy, the following recipe needs to
