Configuring PernixData FVP 3.0

In this post we will complete the blog series on FVP 3.0 by configuring the software for use.

Blog series

PernixData FVP 3.0 installation – part 1
PernixData FVP 3.0 configuration – part 2

Configuring FVP 3.0

Log onto the PernixData Management Console using the HTTP address we configured earlier. Remember, if you changed ports during the install then you need to change them here too:


Log in using the service account you created


Once connected, select the “Licensing” tab, then select your vCenter Server and click “Enter FVP License Key” to add your license key. Otherwise, click “Start Trial” to start a 30-day evaluation period.


FVP will then go online and authorise your license, and that’s all that is needed on the licensing side. This is a big improvement on the old method, which involved license servers and downloading response files.

Creating FVP Clusters

Next, select FVP from the PernixData Hub drop down and you can see an overview of your FVP clusters


Click “Create” under FVP Clusters to begin the creation of your first FVP cluster


Name your new FVP cluster and select the vSphere cluster where you want to enable FVP


Click on the new cluster to open the cluster details



Configuring Acceleration Resources

Select “Configuration” to begin configuring acceleration resources. Click “Add…” to select the flash (or RAM) you wish to use. If you wish to use RAM to accelerate your storage then you have to allocate a minimum of 4GB and you can grow up to 1TB of RAM per host (in 4GB intervals). You cannot use RAM and flash together from the same host.
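The RAM limits above are easy to get wrong when sizing hosts, so here is a small sketch of the stated constraints (minimum 4GB, maximum 1TB, in 4GB increments). The helper function is hypothetical and not part of FVP; it just encodes the rules from this section:

```shell
# Hypothetical helper reflecting FVP's per-host RAM-acceleration limits:
# minimum 4 GB, maximum 1 TB (1024 GB), allocated in 4 GB increments.
valid_fvp_ram_gb() {
  [ "$1" -ge 4 ] && [ "$1" -le 1024 ] && [ $(( $1 % 4 )) -eq 0 ]
}

valid_fvp_ram_gb 64   && echo "64 GB is a valid allocation"
valid_fvp_ram_gb 6    || echo "6 GB is not (not a 4 GB multiple)"
valid_fvp_ram_gb 2048 || echo "2048 GB is not (above the 1 TB cap)"
```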



Add Datastores to be accelerated

Select the Datastores/VMs tab and click “Add Datastores”. Select the datastores you wish to accelerate. Adding the datastore automatically adds all VMs associated with that datastore to the FVP cluster, and any new VMs added to the datastore will receive the datastore’s default policy.

Next select the Write policy – there are two options

  1. Write Through
  2. Write Back

If you need more info about the differences between the two types, there is a great post by Frank Denneman here.


To add only specific VMs, or to change the Write Policy for individual VMs, you can add VMs individually to the cluster using the “Add VMs…” button.

Other configuration

Fault Domains – configuring a fault domain allows you to replicate write data intelligently. It ensures a copy of each write is replicated to a different rack or blade chassis, protecting against the failure of that rack or chassis. By default, the hosts configured in the FVP cluster are located under the Default Fault Domain.

Blacklist – if you want to ensure a VM is never accelerated, you can add it to the blacklist

VADP – Specify VADP VMs in your environment so that FVP works seamlessly with relevant backup processes. VADP VMs won’t be accelerated by FVP.

Network Configuration – by default FVP will choose the vMotion network for all of its replication traffic, but you can specify a network dedicated just to FVP. Be sure you have a VMkernel interface on each ESXi host on the network you wish to use, and ensure they can all communicate with each other.
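A quick way to confirm that the VMkernel interfaces can reach each other is from the ESXi shell, using the standard esxcli and vmkping commands. The interface name (vmk2) and IP address below are placeholders; substitute the ones from your own environment:

```shell
# List the VMkernel interfaces and their IPv4 addresses on this host
esxcli network ip interface ipv4 get

# Ping another host's FVP VMkernel IP, forcing the traffic out of the
# dedicated interface (vmk2 and 192.168.50.12 are example values)
vmkping -I vmk2 192.168.50.12
```

Repeat the vmkping test from each host to every other host in the FVP cluster before enabling acceleration.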


