This procedure assumes that both nodes of the cluster are configured identically. It completely resets all logical disk pathing, including the entries for the system boot disks. Systems with differing hardware configurations may still not match after this procedure.
- Create a checkpoint of the current system to fall back to if necessary.
- Determine the syspool dataset associated with the current checkpoint that is used for the root filesystem (e.g. syspool/rootfs-nmu-000). It is listed as the "Current" and "Default" entry in the output of the "show appliance checkpoint" command.
- Shut down and reboot into GRUB, or boot from the CD-ROM.
- Select "Safe Mode" from the regular GRUB menu, or "Recovery Console" if booted from the CD-ROM.
- If prompted for a login, log in as "root" with no password.
- Force-import syspool. "zpool import -f syspool"
- Create a mountpoint in /tmp and mount the current checkpoint to it. For example, "mkdir /tmp/a ; mount -F zfs syspool/rootfs-nmu-000 /tmp/a"
- Remove all entries from /tmp/a/dev/[r]dsk "rm /tmp/a/dev/dsk/* ; rm /tmp/a/dev/rdsk/*"
- Remove all entries from /tmp/a/dev/cfg "rm -rf /tmp/a/dev/cfg/*"
- Remove the device ID cache. "rm /tmp/a/etc/devices/devid_cache"
- Remove the scsi_vhci (multipathing) cache. "rm /tmp/a/etc/devices/mdi_scsi_vhci_cache"
- Create a new device tree with devfsadm, directed at the current checkpoint. "devfsadm -r /tmp/a"
- Update the boot archive of the current checkpoint. "bootadm update-archive -R /tmp/a"
- Unmount the current checkpoint. "umount /tmp/a"
- Reboot
- Repeat the same steps on the other node.
- If either node fails to boot, fall back to the checkpoint created at the beginning of this procedure.
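For reference, the console portion of the procedure above (from the forced import through the reboot) can be sketched as a single command sequence. This is a hedged sketch, not a script to run blindly: it assumes the current checkpoint dataset is syspool/rootfs-nmu-000, so substitute the name reported by "show appliance checkpoint" on your system, and run it only from the Safe Mode / Recovery Console shell.

```shell
# Sketch of the recovery-console steps. ASSUMPTION: the current checkpoint
# dataset is syspool/rootfs-nmu-000 -- replace with the "Current"/"Default"
# entry shown by "show appliance checkpoint" on your appliance.
CKPT=syspool/rootfs-nmu-000

# Force-import the system pool and mount the checkpoint on a scratch mountpoint.
zpool import -f syspool
mkdir -p /tmp/a
mount -F zfs "$CKPT" /tmp/a

# Wipe the stale logical disk paths and device caches inside the checkpoint.
rm -f /tmp/a/dev/dsk/* /tmp/a/dev/rdsk/*
rm -rf /tmp/a/dev/cfg/*
rm -f /tmp/a/etc/devices/devid_cache
rm -f /tmp/a/etc/devices/mdi_scsi_vhci_cache

# Rebuild the device tree and the boot archive inside the checkpoint,
# then unmount and reboot.
devfsadm -r /tmp/a
bootadm update-archive -R /tmp/a
umount /tmp/a
reboot
```

Note that devfsadm and bootadm are pointed at the mounted checkpoint with their alternate-root options (-r and -R respectively) rather than at the running recovery environment.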