How to Migrate /var to a Dedicated ZFS Dataset on a ZFS Root Pool (rpool) with Boot Environments in Solaris 10

By default, Solaris 10 installs the operating system to a ZFS root pool (rpool) with a flat filesystem layout, in which /var lives inside the root dataset. It is sometimes desirable to have a dedicated /var ZFS dataset to which a quota can be applied. A separate /var can be defined at OS installation time; the goal of this post is to demonstrate how to move the /var directory to its own ZFS dataset on an already-installed system, without re-installing the Solaris OS.

1. Make sure you have a FULL backup of the operating system and everything within the rpool before proceeding.
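
One way to take such a backup of a ZFS root pool is a recursive snapshot streamed off-host with zfs send. The snapshot name, host name, and destination path below are illustrative; note that the -R stream also carries the swap and dump volumes, which add bulk but do no harm:

# zfs snapshot -r rpool@pre-var-migration
# zfs send -R rpool@pre-var-migration | ssh backuphost "cat > /backups/rpool.pre-var-migration.zfs"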

2. Reboot the host to single-user mode. This ensures the /var directory is quiesced and less likely to be modified while the procedure is being executed.

# reboot -- -s
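
Once the host comes back up in single-user mode, who -r can confirm the run level before continuing (output illustrative):

# who -r
   .       run-level S  Jun 10 09:15     S      0  3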

3. Verify the current list and status of any Boot Environments that exist on the host:

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
150400-50                  yes      yes    yes       no     -

4. Verify what ZFS datasets already exist on the host:

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 13.2G   260G   106K  /rpool
rpool/ROOT            6.05G   260G    31K  legacy
rpool/ROOT/150400-50  6.05G   260G  6.05G  /
rpool/dump            3.01G   260G  3.00G  -
rpool/export            63K   260G    32K  /export
rpool/export/home       31K   260G    31K  /export/home
rpool/swap            4.13G   261G  4.00G  -

5. Verify how much space is currently consumed by the /var directory:

# cd /var
# du -hs .
1.6G .
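
Also confirm that the pool can comfortably hold a second copy of this data for the duration of the migration. The AVAIL column in the zfs list output above already shows plenty of room; zpool list gives the pool-wide view (output illustrative):

# zpool list rpool
NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
rpool   278G  13.2G   265G   4%  ONLINE  -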

6. Create the new var dataset under the currently booted root dataset with a temporary mountpoint of /tmpvar, and mount it:

# zfs create -o mountpoint=/tmpvar -o canmount=noauto rpool/ROOT/150400-50/var
# zfs mount rpool/ROOT/150400-50/var
# zfs get mounted rpool/ROOT/150400-50/var
NAME                      PROPERTY  VALUE    SOURCE
rpool/ROOT/150400-50/var  mounted   yes      -
# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     13.2G   260G   106K  /rpool
rpool/ROOT                6.05G   260G    31K  legacy
rpool/ROOT/150400-50      6.05G   260G  6.05G  /
rpool/ROOT/150400-50/var    31K   260G    31K  /tmpvar  <-- Verify the new dataset is mounted
rpool/dump                3.01G   260G  3.00G  -
rpool/export                63K   260G    32K  /export
rpool/export/home           31K   260G    31K  /export/home
rpool/swap                4.13G   261G  4.00G  -

7. Copy the data from the current /var directory to the temporary /tmpvar ZFS dataset (cpio's -p runs in pass mode, -P preserves ACLs, -d creates directories as needed, and -m retains modification times), then verify that all the data exists in the new location:

# cd /var
# find . -print -depth | cpio -pPdm /tmpvar/
# cd /tmpvar
# du -hs .
 1.6G   .
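
A stricter check than comparing du totals is comparing file counts between the two trees (the counts below are illustrative; they only need to match each other):

# find /var | wc -l
   24210
# find /tmpvar | wc -l
   24210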

8. Rename the current /var directory out of the way, then change the mountpoint of the new var dataset to /var. This switches the system over to the new dedicated /var dataset.

# cd /
# mv /var /var.orig
# zfs set mountpoint=/var rpool/ROOT/150400-50/var
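
Before moving on, verify that the dataset actually remounted at its new /var mountpoint (output illustrative):

# zfs get mountpoint,mounted rpool/ROOT/150400-50/var
NAME                      PROPERTY    VALUE  SOURCE
rpool/ROOT/150400-50/var  mountpoint  /var   local
rpool/ROOT/150400-50/var  mounted     yes    -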

9. Create a new Boot Environment so that Live Upgrade generates ICF files that account for the new var dataset. Watch the output from lucreate and verify that it clones the var dataset into the new Boot Environment:

# lucreate -n 150400-50_newvar
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment [150400-50_newvar].
Source boot environment is [150400-50].
Creating file systems on boot environment [150400-50_newvar].
Populating file systems on boot environment [150400-50_newvar].
Analyzing zones.
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for [rpool/ROOT/150400-50] on [rpool/ROOT/150400-50@150400-50_newvar].
Creating clone for [rpool/ROOT/150400-50@150400-50_newvar] on [rpool/ROOT/150400-50_newvar].
Creating snapshot for [rpool/ROOT/150400-50/var] on [rpool/ROOT/150400-50/var@150400-50_newvar].
Creating clone for [rpool/ROOT/150400-50/var@150400-50_newvar] on [rpool/ROOT/150400-50_newvar/var].  <-- This is what to look out for
Mounting ABE [150400-50_newvar].
Generating file list.
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE [150400-50_newvar].
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE [150400-50].
Making boot environment [150400-50_newvar] bootable.
Population of boot environment [150400-50_newvar] successful.
Creation of boot environment [150400-50_newvar] successful.

As an optional verification step, find and cat the new Boot Environment's ICF file to verify that the /var dataset is listed.
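
If several ICF files exist under /etc/lu, grep can identify which one belongs to the new Boot Environment (on this host it turns out to be ICF.1):

# grep -l 150400-50_newvar /etc/lu/ICF.*
/etc/lu/ICF.1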

# cd /etc/lu
# cat ICF.1
150400-50_newvar:-:/dev/zvol/dsk/rpool/swap:swap:8388608
150400-50_newvar:/:rpool/ROOT/150400-50_newvar:zfs:19256618
150400-50_newvar:/export:rpool/export:zfs:126
150400-50_newvar:/export/home:rpool/export/home:zfs:62
150400-50_newvar:/rpool:rpool:zfs:34218125
150400-50_newvar:/var:rpool/ROOT/150400-50_newvar/var:zfs:3281527  <-- The new /var dataset is listed

10. Activate the new Boot Environment:

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
150400-50                  yes      yes    yes       no     -
150400-50_newvar           yes      no     no        yes    -
# luactivate 150400-50_newvar
A Live Upgrade Sync operation will be performed on startup of boot environment [150400-50_newvar].


**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

     At the PROM monitor (ok prompt):
     For boot to Solaris CD:  boot cdrom -s
     For boot to network:     boot net -s

3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/150400-50
     zfs set mountpoint=[mountpointName] rpool/ROOT/150400-50
     zfs mount rpool/ROOT/150400-50

4. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     [mountpointName]/sbin/luactivate

5. luactivate, activates the previous working boot environment and
indicates the result.
6. umount /mnt
7. zfs set mountpoint=/ rpool/ROOT/150400-50
8. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Activation of boot environment [150400-50_newvar] successful.

11. Reboot the host after activation. Per the luactivate output above, use init or shutdown rather than reboot, halt, or uadmin:

# shutdown -y -g0 -i6
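
After the reboot, a quick df confirms that /var is now served by the new Boot Environment's dataset (output illustrative):

# df -h /var
Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/150400-50_newvar/var
                       259G   1.6G   257G     1%    /var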

12. After the system reboots successfully to multi-user mode, clean up the old Boot Environment and the old /var directory to free space:

# zfs list
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
rpool                                             16.4G   257G   106K  /rpool
rpool/ROOT                                        9.28G   257G    31K  legacy
rpool/ROOT/150400-50                              16.3M   257G  7.62G  /
rpool/ROOT/150400-50/var                           420K   257G  1.56G  /var
rpool/ROOT/150400-50_newvar                       9.27G   257G  7.62G  /
rpool/ROOT/150400-50_newvar@150400-50_newvar      77.0M      -  7.62G  -
rpool/ROOT/150400-50_newvar/var                   1.57G   257G  1.56G  /var
rpool/ROOT/150400-50_newvar/var@150400-50_newvar  6.63M      -  1.56G  -
rpool/dump                                        3.01G   257G  3.00G  -
rpool/export                                        63K   257G    32K  /export
rpool/export/home                                   31K   257G    31K  /export/home
rpool/swap                                        4.13G   257G  4.00G  -
# rm -rf /var.orig/
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
150400-50                  yes      no     no        yes    -
150400-50_newvar           yes      yes    yes       no     -
# ludelete 150400-50
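
ludelete removes the old Boot Environment together with its now-unused datasets, including rpool/ROOT/150400-50/var. With /var on its own dataset, the quota mentioned at the start of this post can now be applied; the 8G value below is purely illustrative:

# zfs set quota=8g rpool/ROOT/150400-50_newvar/var
# zfs get quota rpool/ROOT/150400-50_newvar/var
NAME                             PROPERTY  VALUE  SOURCE
rpool/ROOT/150400-50_newvar/var  quota     8G     local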