I'm curious how many people are running VMFS6 datastores on 3.3.1 with the v3 dedupe volumes. I have been doing some testing on all-flash 7440s. So far I have created a new CPG and new datastores using v3 dedupe, then cloned a couple of VMs to the new datastores and let things settle out. BTW, I noticed response times go up on the array when moving data with VAAI, but I believe that has always been the case.
I deleted a VM, and VMware seems to issue the unmap commands in small intervals as expected. About every 20 seconds I see write response times spike and then come back down. This does seem to affect all VVs, but I need to run more tests to verify that. Since we don't delete a ton of VMs all the time, this looks like it may be workable in production. Also, you can turn it off from the VMware side if you get halfway into your migration and see that performance is suffering; an example is below.
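For anyone who wants to check or turn that off from the host side, the automatic VMFS6 reclaim is controlled per datastore with esxcli on ESXi 6.5. Something like this should do it (the datastore label is just a placeholder):

    # Check the current automatic UNMAP settings for a VMFS6 datastore
    esxcli storage vmfs reclaim config get --volume-label=Datastore01

    # Disable automatic space reclamation on that datastore
    esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=none

    # Re-enable it at the default priority once the migration is done
    esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=low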
Anyone else using VMFS6 with unmap yet?
VMFS6 on 3.3.1
Re: VMFS6 on 3.3.1
We will use VMFS6 for our next installation too, but I do not expect any big impact on the whole system, because unmap is like sending zeroes to 3PAR, which should be handled directly by the ASIC and not written down to disk.
Or is this a wrong assumption? Does 3PAR have to write zeroes to delete dirty chunklets after "unmapping" them?
Is there anyone else with more knowledge about this here?
Re: VMFS6 on 3.3.1
3PAR doesn't have to write zeroes to the back end inline; these are typically caught by zero detect on ingest and then released asynchronously on the back end. But it will depend on the method VMware have implemented to signal the reclaim. Keep in mind VMware have to support this feature across a whole bunch of arrays, not just 3PAR, which can mean implementing a more generic solution in order to support systems that handle the process differently. It could well be that latency is increased by the extra work required at the host, e.g. additional bandwidth being consumed for large reclaims, or simply that the system is already being stressed. The fact the latency seems to show on writes is interesting, since those should be hitting cache anyway and so should in theory remain largely unaffected from the array side.
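As an aside, if anyone wants to confirm zero detect is actually on for their volumes, it's a per-VV policy on the 3PAR CLI. A quick check would look something like this (the VV name is just an example):

    # Show the policies set on a virtual volume (look for zero_detect)
    showvv -pol vmfs6_vv01

    # Enable zero detect on the VV if it isn't already set
    setvv -pol zero_detect vmfs6_vv01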
Re: VMFS6 on 3.3.1
768kb wrote: We will use VMFS6 for our next installation too, but I do not expect any big impact on the whole system, because unmap is like sending zeroes to 3PAR, which should be handled directly by the ASIC and not written down to disk.
Or is this a wrong assumption? Does 3PAR have to write zeroes to delete dirty chunklets after "unmapping" them?
Is there anyone else with more knowledge about this here?
The result should be the same, but the process is a little different between sending zeroes and sending SCSI UNMAP commands.
The UNMAP command is a SCSI opcode (42h), so it does not need to be processed by the ASIC, and sending UNMAP commands to the system should be a lot faster than sending zeroes, since you are offloading the zeroing/block unmapping to the 3PAR system in the same way that other VAAI commands work.
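If you want to verify the host is actually offloading it, you can check whether the Delete (UNMAP) primitive is supported on the 3PAR LUN from the ESXi side with something along these lines (the naa ID is just a placeholder):

    # Check VAAI primitive support for a given device;
    # "Delete Status: supported" means UNMAP is offloaded to the array
    esxcli storage core device vaai status get -d naa.60002ac0000000000000000000000001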
If you want a deep dive into the UNMAP command, you can read the SCSI Commands Reference Manual: https://www.seagate.com/files/staticfil ... 93068h.pdf
Re: VMFS6 on 3.3.1
I think the main thing to know is that when you run the unmap command on a VMFS5 volume, it reclaims the entire free space of the datastore in one pass, which can take several hours. During the unmap, performance will noticeably degrade; your response times will be poor whether an ASIC is involved or not. This is why VMware stopped doing it automatically (it had a bad impact on other arrays as well). So far I have only done my testing on 7440s; the performance impact may be less on the 8k and 20k lines, but I haven't gotten to that point in testing yet. VMFS6 will now do it in small, short commands spread out over several hours, but ultimately you won't have to do it manually anymore, and it won't take the entire array down...or shouldn't...
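For comparison, this is the manual VMFS5 reclaim that has to be kicked off by hand and walks the whole datastore in iterations of VMFS blocks (the label and iteration size below are just examples):

    # Manually reclaim free space on a VMFS5 datastore;
    # --reclaim-unit is the number of VMFS blocks unmapped per iteration (default 200)
    esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200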