How To Move Proxmox VM Template With Linked Clones?
Case: When a VM template and its linked clone are moved to a different storage pool or server node through the Proxmox GUI, the result is a full clone: the moved clone's size equals the size of the linked clone plus the template base image.
Solution: Use the ZFS send and receive commands from the Proxmox shell to move the template and its linked clones.
Credited Post: https://forum.proxmox.com/threads/how-to-move-linked-clone-with-base-to-another-node.34455/
Using ZFS send and receive to move Proxmox templates and linked clones
For this tutorial, the template disk is hdd:base-103-disk-0 and the clone disk is hdd:base-103-disk-0/vm-101-disk-0; both are in raw format on the ZFS pool named hdd. Both the source hdd and the target ssd storages are ZFS pools; here is more detail on ZFS from Proxmox.
Image size for both template and linked clone:
zfs get volsize,used hdd/base-103-disk-0
zfs get volsize,used,refer hdd/vm-101-disk-0

The command output shows the configured volume size and the used size for both the template and the linked clone; for the linked clone it also shows the referenced size, which equals the template's used size.
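If you prefer a single command, the same properties can also be listed for every volume in the pool at once, for example:
zfs get -r volsize,used,refer hdd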
Before using the zfs send and receive commands, make sure the linked clone VM is shut down.
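For example, assuming the linked clone is VM 101 as in this tutorial, it can be shut down from the Proxmox shell with:
qm shutdown 101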
As discussed earlier, we will be sending our images from the hdd pool to the ssd pool. The template image is sent first in the form of a snapshot; in ZFS terms this is a full stream of data, while the later snapshots (the linked clones) are sent as incremental streams.
Some ZFS snapshot concepts
By default, when a template is created, a related snapshot is also created. For example, if the base disk image is base-103-disk-0, the related snapshot is base-103-disk-0@__base__; the @__base__ suffix marks it as a snapshot. Every snapshot name is the disk name followed by @, as in diskname@any-snapshot-name. Snapshots can be viewed with the following zfs commands.
zfs list -t snapshot: shows snapshots for all pools.
zfs list -r -t snapshot hdd: shows snapshots for hdd pool.
zfs list -r -t snapshot ssd: shows snapshots for ssd pool.
zfs list -t all: shows all disks and snapshots in every zfs storage pool.
The -r and -t parameters stand for recursive and type respectively; for more info, run man zfs in the Proxmox shell.
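Another useful property in this context is origin, which shows the snapshot a linked clone was created from. As a quick check (using the disk names from this tutorial):
zfs get origin hdd/vm-101-disk-0
For the linked clone this should report hdd/base-103-disk-0@__base__.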
Here is the output for zfs list -r -t all hdd:

As we can see, there is a template base image base-103-disk-0 with a related snapshot base-103-disk-0@__base__, and there is a disk image for the linked clone, vm-101-disk-0, which has no snapshot of its own. Since zfs send and receive take a snapshot name as an argument, we need to create a snapshot for the linked clone as well.
Use the following zfs command to create the snapshot:
zfs snapshot hdd/vm-101-disk-0@snap1: here @snap1 is manually appended to the disk name, which fulfils the naming convention for a snapshot. The command creates the snapshot hdd/vm-101-disk-0@snap1.
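To confirm the snapshot now exists, list the hdd pool's snapshots again:
zfs list -r -t snapshot hdd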
The zfs send and receive commands have the following general syntax:
- zfs send source-storage-pool/snapshot-name | zfs receive target-storage-pool/diskname: used when the target pool is on the same server node; receive can also be shortened to recv.
- zfs send source-storage-pool/snapshot-name | ssh login@servername zfs receive target-storage-pool/diskname: used when migrating to another server node (a concrete example is shown further below).
For example, in our case:
To send the template base disk snapshot to the target pool: zfs send -Rv hdd/base-103-disk-0@__base__ | zfs receive ssd/base-103-disk-0
Here -R stands for replication: the stream includes all properties, snapshots, descendent file systems, and related clone information, while -v stands for verbose and prints real-time progress output.
Once the base image transfer is complete, we can move on to the linked clone, using the snapshot hdd/vm-101-disk-0@snap1 created earlier.
To send the linked clone disk snapshot to the target pool, we also need to reference the base disk snapshot:
zfs send -Rv -i hdd/base-103-disk-0@__base__ hdd/vm-101-disk-0@snap1 | zfs receive ssd/vm-101-disk-0
Here -i indicates that vm-101-disk-0@snap1 is sent as an incremental stream relative to the base disk snapshot, so only the blocks that differ from the base image are transferred.
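If you want a rough idea of how much data the incremental stream will carry before actually sending it, zfs send supports a dry run, for example:
zfs send -nv -i hdd/base-103-disk-0@__base__ hdd/vm-101-disk-0@snap1
The -n flag performs a dry run and, combined with -v, prints an estimated stream size without transferring anything.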
You can repeat the process if you have multiple clones.
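If the target pool is on a different server node instead, the same stream can be piped over SSH, following the second syntax above. A hypothetical example, assuming the remote node is reachable as root@target-node and also has a ZFS pool named ssd:
zfs send -Rv hdd/base-103-disk-0@__base__ | ssh root@target-node zfs receive ssd/base-103-disk-0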
Here is the zfs list output on the hdd storage pool:

Here is the output for the ssd pool after zfs send and receive:

Final Verification
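Before touching the VM configuration files, you can check that the clone relationship survived the transfer by looking at the origin property on the target pool:
zfs get origin ssd/vm-101-disk-0
This should report ssd/base-103-disk-0@__base__ as the origin of the moved clone.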
Change the storage from hdd to ssd for both the template and the linked clone in /etc/pve/qemu-server/103.conf and /etc/pve/qemu-server/101.conf, then turn the linked clone back on. If it works, we have successfully moved the Proxmox template and linked clone without destroying their interrelationship.
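As an illustration, in /etc/pve/qemu-server/103.conf a disk line such as the following (the scsi0 bus and the size value are hypothetical and will differ on your setup)
scsi0: hdd:base-103-disk-0,size=32G
becomes
scsi0: ssd:base-103-disk-0,size=32G
and in /etc/pve/qemu-server/101.conf a line such as
scsi0: hdd:base-103-disk-0/vm-101-disk-0,size=32G
becomes
scsi0: ssd:base-103-disk-0/vm-101-disk-0,size=32G
The clone can then be started again with qm start 101. Once everything boots correctly from the ssd pool, destroy the now-unused template and linked clone in the hdd storage pool with the following command: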
zfs destroy -R hdd/base-103-disk-0
Be careful with this command: it deletes the template base disk and all related clones together. Only run it on the source server once you have finished migrating all of the clones to the target storage or target node.
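If you want to double-check what would be removed before committing, zfs destroy supports a dry run, for example:
zfs destroy -nvR hdd/base-103-disk-0
The -n flag makes it a dry run and -v lists the datasets that would be destroyed, so nothing is actually deleted.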