


I just started looking at migrating from NFS to iSCSI on a 40GbE network with jumbo frames. I ran a very simple benchmark and, unexpectedly, NFSv4 was faster than NFSv3, which was in turn faster than iSCSI (see below). So I'm reevaluating whether to pursue iSCSI.

The file server is an Ubuntu 18.04 DL380 Gen8 providing storage to ESXi 6.7 DL360 Gen9 hosts. The datastore is a ZFS pool of 24 vdevs consisting of mirrored pairs (48 10K RPM 6Gb/s SAS drives) in two D2700 enclosures, each with a single connection to an HP H241 controller. The network is IPv4 with jumbo frames, using 40GbE ConnectX-3 Pro cards connected to an Arista DCS-7050QX.

To measure performance, I ran an ATTO disk benchmark in a Windows 10 VM. The same VM was mounted via NFSv3, NFSv4, and iSCSI to obtain the results below, and the same datastore was used for all three. For the iSCSI test, the VM's disk was migrated to a zvol in the same ZFS pool and mounted via iSCSI. Small reads end up at around 50 MB/s, while local access is around 550-600 MB/s for 4k reads. Send and receive buffers are tuned for 40GbE, but other than that the tuning is pretty much default.
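For reference, a pool laid out like the one described could be built roughly as follows. This is only a sketch, not the actual configuration: the pool name `tank`, the device names, and the interface name are all placeholders.

```shell
# 24 two-way mirror vdevs across 48 SAS drives
# (device names are placeholders; a real build would use /dev/disk/by-id paths)
zpool create -o ashift=12 tank \
  mirror sdb sdc \
  mirror sdd sde \
  mirror sdf sdg
# ...repeat "mirror <diskA> <diskB>" for the remaining 21 pairs

# Jumbo frames on the 40GbE interface (interface name is a placeholder)
ip link set dev enp3s0 mtu 9000

# Larger socket buffers for 40GbE, in the spirit of the buffer tuning mentioned
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
```

The same MTU would need to be set end to end: on the ESXi vmkernel ports, the Arista switch ports, and the file server interface.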

Going in, I thought that switching from NFS to iSCSI would provide increased performance for datastores on ESXi.
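For the iSCSI side of a test like this, a zvol-backed target can be exported roughly as follows on the Linux file server. This sketch assumes the LIO target via `targetcli-fb` (available on Ubuntu 18.04); the pool name, zvol name, size, and IQN are placeholders, not the original setup.

```shell
# Create a sparse zvol in the existing pool (name and size are placeholders)
zfs create -s -V 100G tank/win10-vm

# Export it as an iSCSI LUN via the Linux LIO target (targetcli-fb)
targetcli /backstores/block create name=win10-vm dev=/dev/zvol/tank/win10-vm
targetcli /iscsi create iqn.2018-01.local.fileserver:win10-vm
targetcli /iscsi/iqn.2018-01.local.fileserver:win10-vm/tpg1/luns create /backstores/block/win10-vm
targetcli saveconfig
```

The ESXi software iSCSI adapter would then discover the target via the file server's portal address, after which the LUN can be formatted as a VMFS datastore and the VM's disk migrated onto it.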
