This article gives a detailed overview of how we migrate our servers from UFS to ZFS boot 2-way mirrors, how they are upgraded to Solaris™ 10u6 aka 10/08 with /var on a separate ZFS, and finally how to accomplish "day-to-day" patching. The main stages are divided into:
- Moving from UFS to ZFS boot
- Cleaning up the old UFS Boot Environment
- Upgrade to S10u6
- Cleanup - Phase 2 (ZFS)
- day-to-day patching (ZFS)
Make sure that on each stage all zones are running or at least bootable, and that the environment variables shown below are properly set. Also give your brain a chance and think before you blindly copy-and-paste any commands mentioned in this article! There is no guarantee that the commands shown here exactly match your system - they may damage it or cause data loss if you do not adjust them to your needs!
'til now, the procedures shown have been successfully tested on a Sun Fire V240 server (2 GB RAM, 2x 1.5 GHz UltraSPARC-IIIi, 2x 36 GB HDD, 2x 72 GB HDD) with one running sparse zone only. However, our plan is to migrate ~93% of our servers (Sun Fire 280R, 420R, V440, V490, T1000, X4500, X4600) to S10u6, and thus any further problems we encounter will be reflected in this article - i.e. watch out for changes!
setenv CD /net/install/pool1/install/sparc/Solaris_10_u6-ga1
setenv JUMPDIR /net/install/pool1/install/jumpstart
mount /local/misc
set path = ( /usr/bin /usr/sbin /local/misc/sbin )
Moving from UFS to ZFS boot
update to S10u6 aka 10/08 via recommended/feature patching
on pre-U4 systems SUNWlucfg is probably missing:
pkgadd -d $CD/Solaris_10/Product SUNWlucfg
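Whether the Live Upgrade related packages are installed at all can be checked quickly with pkginfo (just a sanity check, not strictly required):
pkginfo SUNWlucfg SUNWlur SUNWluu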
make sure that the following patches are installed:
137137-09 - the U6 kernel patch and its LU/ZFS boot dependencies:
119252-26, 119254-59, 119313-23, 121430-29, 121428-11, 124628-08, 124630-19
see also: Solaris™ Live Upgrade Software: Minimum Patch Requirements
see also: checkpatches.sh
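If one does not want to use checkpatches.sh, a quick-and-dirty ksh snippet like the following checks at least whether the patch IDs are present at all (it does not compare revisions - showrev -p is assumed to print the patch ID in the 2nd field):
REQUIRED="137137-09 119252-26 119254-59 119313-23 121430-29 121428-11 124628-08 124630-19"
showrev -p | nawk '{ print $2 }' >/tmp/patches.installed
for P in $REQUIRED ; do
	# print the installed revision(s), or complain if the patch ID is missing
	grep "^${P%%-*}-" /tmp/patches.installed || echo "missing: $P"
done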
determine the HDD for the new root pool aka rpool
echo | format
In this example we use: c0t1d0
format the disk so that the whole disk can be used by ZFS
# on x86
fdisk -B /dev/rdsk/c0t1d0p0
# on sparc delete all slices and assign all blocks to s0
format -d c0t1d0
If you want to use mirroring, make sure that s0 of HDD0 and HDD1 end up with the same size (use the number of blocks when specifying the size) - a quick check is shown right after the partition table below.
Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 - 29772      34.41GB    (29773/0/0) 72169752
  1 unassigned    wm       0               0         (0/0/0)            0
  2     backup    wu       0 - 29772      34.41GB    (29773/0/0) 72169752
  3 unassigned    wm       0               0         (0/0/0)            0
  4 unassigned    wm       0               0         (0/0/0)            0
  5 unassigned    wm       0               0         (0/0/0)            0
  6 unassigned    wm       0               0         (0/0/0)            0
  7 unassigned    wm       0               0         (0/0/0)            0
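To compare the s0 sizes of both disks, one can look at the sector counts reported by prtvtoc - a minimal sketch, assuming HDD0 is c0t0d0 and HDD1 is c0t1d0 (adjust to your system):
# column 5 of the partition map is the sector count of the slice
prtvtoc /dev/rdsk/c0t0d0s2 | nawk '$1 == "0" { print "s0 blocks:", $5 }'
prtvtoc /dev/rdsk/c0t1d0s2 | nawk '$1 == "0" { print "s0 blocks:", $5 }'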
move all UFS zones to ZFS mirror 'pool1' on HDD2 and HDD3
This allows LU to use zone snapshots instead of copying all the data - and is thus orders of magnitude faster. In our example, the UFS zones live in /export/scratch/zones/ ; pool1's mountpoint is /pool1 .
One may use the following ksh snippet (requires GNU sed!):
ksh
zfs create pool1/zones
# adjust this and the following variable
UFSZONES="zone1 zone2 ..."
UFSZPATH="/export/scratch/zones"
# shut down all UFS zones
for ZNAME in $UFSZONES ; do
	zlogin $ZNAME 'init 5'
done
echo 'verify
commit
' >/tmp/zone.cmd
for ZNAME in $UFSZONES ; do
	# and wait, 'til $ZNAME is down
	while true; do
		zoneadm list | /usr/xpg4/bin/grep -q "^$ZNAME"'$'
		[ $? -ne 0 ] && break
		sleep 2
	done
	zfs create pool1/zones/$ZNAME
	mv $UFSZPATH/$ZNAME/* /pool1/zones/$ZNAME/
	chmod 700 /pool1/zones/$ZNAME
	# fix the zonepath in the zone's XML config
	gsed -i \
		-e "/zonepath=/ s,$UFSZPATH/$ZNAME,/pool1/zones/$ZNAME," \
		/etc/zones/$ZNAME.xml
	zonecfg -z $ZNAME -f /tmp/zone.cmd
	zoneadm -z $ZNAME boot
done
exit
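Before creating the root pool it is worth verifying that all zones are running again from their new zonepath and that the new datasets are in place - e.g.:
zoneadm list -iv
zfs list -r pool1/zones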
create the rpool
zpool create -f -o failmode=continue rpool c0t1d0s0
# some bugs? require us to do the following manually
zfs set mountpoint=/rpool rpool
zfs create -o mountpoint=legacy rpool/ROOT
zfs create -o canmount=noauto rpool/ROOT/zfs1008BE
zfs create rpool/ROOT/zfs1008BE/var
zpool set bootfs=rpool/ROOT/zfs1008BE rpool
zfs set mountpoint=/ rpool/ROOT/zfs1008BE
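A quick sanity check of the resulting layout does not hurt - the pool should now contain ROOT, the BE dataset and its var child, with bootfs pointing to the BE:
zfs list -r rpool
zpool get bootfs rpool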
create the ZFS based Boot Environment (BE)
lucreate -c ufs1008BE -n zfs1008BE -p rpool
~25min on V240
At this point one probably wonders why we do not use pool1 for booting, form a mirror of HDD0 and HDD1 and put another BE on that mirror. The answer is pretty simple: because some machines like the thumper aka X4500 can boot from 2 special disks only (c5t0d0 and c5t4d0).
move BE's /var to a separate ZFS within the BE
zfs set mountpoint=/mnt rpool/ROOT/zfs1008BE
zfs mount rpool/ROOT/zfs1008BE
zfs create rpool/ROOT/zfs1008BE/mnt
cd /mnt/var
find . -depth -print | cpio -puvmdP@ /mnt/mnt/
rm -rf /mnt/mnt/lost+found
cd /mnt; rm -rf /mnt/var
zfs rename rpool/ROOT/zfs1008BE/mnt rpool/ROOT/zfs1008BE/var
zfs umount rpool/ROOT/zfs1008BE
zfs set mountpoint=/ rpool/ROOT/zfs1008BE
zfs set canmount=noauto rpool/ROOT/zfs1008BE/var
~7 min on V240
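Before activating the new BE, one can double-check that /var is now a separate dataset with the expected properties - e.g.:
zfs list -r rpool/ROOT/zfs1008BE
zfs get -r mountpoint,canmount rpool/ROOT/zfs1008BE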
activate the new ZFS based BE
luactivate zfs1008BE
copy the output of the command to a safe place, e.g. USB stick
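One way to do that is to capture the output right away, i.e. instead of the plain call above, and copy the file to another box or USB stick afterwards (the file name is just an example):
luactivate zfs1008BE | tee /var/tmp/luactivate-zfs1008BE.txt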
restart the machine
init 6
after reboot, check that everything is ok
E.g.:
df -h
dmesg
# the PATH shown by zoneadm should be /pool1/zones/$zname-zfs1008BE for non-global zones
zoneadm list -iv
lustatus
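Beyond that, a quick look at the pool itself and its datasets does not hurt:
zpool status rpool
zfs list -r rpool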
Cleaning up
destroy old UFS BE
ludelete ufs1008BE
One will get warnings about not being able to delete ZFS filesystems of the old boot environment like /.alt.tmp.b-LN.mnt/pool1/zones/$zname - that's ok. Furthermore one can promote their clones like /pool1/zones/$zname-zfs1008BE and then remove the old ones, including their snapshots, if desired.
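For a single zone this could look like the following sketch ('zone1' is just a placeholder - double-check with zfs list before destroying anything):
zfs promote pool1/zones/zone1-zfs1008BE
# the old dataset is now just a clone of the promoted one and can go away
zfs destroy pool1/zones/zone1
# list and, if desired, destroy the snapshot(s) left on the promoted dataset
zfs list -t snapshot -r pool1/zones/zone1-zfs1008BE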
make sure everything is still ok
init 6
move all remaining filesystems from HDD0 to the root pool
Depending on the mount hierarchy, the following recipe needs to