NFS or Block for VMware workloads

There are many different views on what architecture to select for VMware workloads. Some people prefer NAS architectures because datastores can generally be much bigger, which suits large-scale consolidation, and NAS features such as space-optimized snapshots can help protect virtual machines without the need for additional backup software. This is our view, based on experience using both block and NFS as storage platforms for VMware. Since vSphere 5, VMware has offered the NAS VAAI primitives for NFS storage vendors to implement, so many of the offloading features previously unavailable to NFS have been addressed – or have they?

Let’s begin by comparing the Full File Clone (NAS) primitive with the XCOPY (block) primitive. Both are used when copying data, but there is a significant difference. Storage vMotion operations cannot leverage the Full File Clone primitive on NAS arrays – the primitive can only be used for a cold migration (powered-off VM). The XCOPY primitive, however, can be leveraged by Storage vMotion for powered-on VMs. This is a major advantage for block storage, especially for features like Storage DRS, which rely heavily on Storage vMotion for load balancing.
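Whether the XCOPY offload path is even attempted is governed by a host advanced setting. As a minimal sketch using pyVmomi (VMware's Python SDK) – the vCenter hostname, credentials and inventory names below are placeholders, not from a real environment – you can read DataMover.HardwareAcceleratedMove on each host; a value of 1 means the host will attempt XCOPY offload:

```python
# Sketch: report the XCOPY offload setting (DataMover.HardwareAcceleratedMove)
# on every ESXi host. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; skips certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    # 1 = host will attempt XCOPY offload, 0 = offload disabled
    for opt in host.configManager.advancedOption.QueryOptions(
            "DataMover.HardwareAcceleratedMove"):
        print(host.name, opt.key, "=", opt.value)
view.Destroy()
Disconnect(si)
```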

What does this mean in the real world?

1) When you deploy a new machine from a template, the Full File Clone or XCOPY primitive offloads the copy to the array, which speeds up deployment significantly. Because most NAS arrays support fast file clones, the deployment can complete in seconds – but usually only within the same NFS file system. This can make deployments on NFS unpredictable: machines cloned to the same file system deploy very quickly, while clones to a different file system can take a very long time.

2) When migrating live machines between two datastores, the XCOPY primitive on block storage speeds up the migration, as long as the target datastore lives on the same storage controller. Migrations between NFS datastores still need the ESXi host to do the heavy lifting, since the NAS primitive does not support live machines, resulting in far more overhead and traffic during the Storage vMotion (see the sketch below).
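To make the Storage vMotion case concrete, here is a hedged pyVmomi sketch of a storage-only migration; all names and credentials are placeholders. The API call is identical for block and NFS – the difference is purely in which data path the host can use underneath:

```python
# Sketch: Storage vMotion a (possibly powered-on) VM to another datastore.
# On a VAAI block array the copy can be offloaded via XCOPY (same array only);
# with NFS endpoints the host-side data mover carries all the traffic.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

vm = find_by_name(content, vim.VirtualMachine, "large-vm-01")        # placeholder
target = find_by_name(content, vim.Datastore, "datastore-block-02")  # placeholder

# A RelocateSpec with only a datastore set performs a storage-only migration.
WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=target)))
Disconnect(si)
```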

If your environment consists of many templates distributed across datastores and you need to run frequent provisioning jobs across datastores, block is probably the way to go – but make sure your storage array supports VAAI and is powerful enough to handle the workload.
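For the provisioning case, a deployment from a template looks like this in pyVmomi (again, all inventory names are placeholders). The offload decision – XCOPY on block, Full File Clone on NAS, or no offload at all – happens transparently underneath the same clone call:

```python
# Sketch: deploy a new VM from a template onto a chosen datastore.
# Whether the copy is offloaded to the array is decided by the host,
# not by anything in this code.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

template = find_by_name(content, vim.VirtualMachine, "rhel8-template")  # placeholder
target = find_by_name(content, vim.Datastore, "datastore-block-01")     # placeholder

# The RelocateSpec places the clone on the target datastore.
spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(datastore=target),
                        powerOn=False, template=False)
WaitForTask(template.CloneVM_Task(folder=template.parent,
                                  name="new-vm-01", spec=spec))
Disconnect(si)
```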

Another thing to consider is the use of Large VMs. First, let me qualify what I mean by a Large VM: in this case, a VM with 1–8 TB of disk.

So what’s the problem? The problem appears when you create templates for these Large VMs so you can provision them rapidly. These templates are thin provisioned so you don’t waste unnecessary space, and the systems you deploy from them will also be thin provisioned. The trouble starts when you want to migrate these VMs to a different datastore and you are using NFS. It all comes down to the Data Mover. A Data Mover is the component within VMware vSphere that moves data from place to place during operations such as Storage vMotion and other types of storage migrations (clones). In vSphere today there are three different Data Movers.

The three Data Movers are FSDM, the simplest, most compatible and slowest mover (it also sits highest in the I/O stack); FS3DM, which is more advanced and faster; and FS3DM hardware accelerated, which offloads the copy to the array.

Going from NAS (that is, NFS) to either another NAS (NFS file system) or to a VMFS5 datastore on an iSCSI array is where you hit a snag. Migrations between arrays, regardless of their VAAI capability, won’t use the FS3DM hardware accelerated Data Mover – that is only used within the same array. But going between a NAS and VMFS5, or from one NAS to another NAS, won’t even use FS3DM. It will only use FSDM, the most compatible, but also the slowest and simplest of the Data Mover species. Why is this so important?
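As an illustration only – this is a simplified model of the rules just described, not VMware’s actual implementation – the selection logic boils down to something like this:

```python
# Simplified model of vSphere Data Mover selection, based on the behavior
# described above. Illustrative only -- not VMware's actual code.

def select_data_mover(src_type, dst_type, same_array, vaai_capable):
    """src_type/dst_type: 'VMFS' or 'NFS'."""
    if "NFS" in (src_type, dst_type):
        # Any NFS endpoint (NFS-to-NFS or NFS-to-VMFS) falls back to FSDM,
        # the slowest, most compatible mover.
        return "FSDM"
    if same_array and vaai_capable:
        # VMFS-to-VMFS on the same VAAI-capable array: offload to the array.
        return "FS3DM hardware accelerated"
    # VMFS-to-VMFS across arrays (or without VAAI): host-side FS3DM.
    return "FS3DM"

print(select_data_mover("NFS", "VMFS", same_array=False, vaai_capable=True))
# -> FSDM
print(select_data_mover("VMFS", "VMFS", same_array=True, vaai_capable=True))
# -> FS3DM hardware accelerated
```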

FSDM will read every block of a VM, even if it has never been written to. You read that correctly: even if the block has never existed, it will be read and transferred. This is very bad news if you have a Large VM or Large VM template using thin provisioning, and it affects every vendor using NFS with VMware vSphere. One reason it probably isn’t well known or noticeable is that most VMs have very small storage footprints, relatively speaking, especially when they are initially provisioned.
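A quick back-of-the-envelope calculation shows the scale of the problem. Take a hypothetical 8 TB thin provisioned Large VM with only 200 GB actually written:

```python
# Back-of-the-envelope: data moved by FSDM vs. an allocation-aware mover
# for a thin provisioned VM. Sizes are hypothetical.

TB = 1024**4
GB = 1024**3

provisioned = 8 * TB      # logical disk size of the "Large VM"
written = 200 * GB        # blocks actually written

fsdm_bytes = provisioned  # FSDM reads and transfers every logical block
smart_bytes = written     # an allocation-aware mover would copy only live data

print(f"FSDM moves:  {fsdm_bytes / TB:.1f} TB")
print(f"Ideal moves: {smart_bytes / TB:.2f} TB")
print(f"Wasted:      {(fsdm_bytes - smart_bytes) / fsdm_bytes:.1%} of the traffic")
```

In this example, roughly 97% of the bytes moved would be zeros that were never written.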

The real solution is for VMware to change the process it uses for migrating large VMs between arrays where a NAS or NFS is involved, especially when those VMs are thin provisioned. The FSDM Data Mover doesn’t cut it, not for Monster VMs. Copying TBs worth of zeros that have never been written is of absolutely no value; it can be done much smarter. Perhaps this will come in the form of a new VAAI for NAS primitive in the future.
