HPE Storage Users Group

A Storage Administrator Community




Post new topic Reply to topic  [ 5 posts ] 
 Post subject: VMFS6 on 3.3.1
PostPosted: Tue Jan 02, 2018 11:33 am 

Joined: Thu Jan 22, 2015 3:37 pm
Posts: 41
I was curious how many people are running VMFS6 datastores on 3.3.1 with the v3 dedupe volumes. I have been doing some testing on some all-flash 7440s. So far I have created a new CPG and created new datastores using v3 dedupe. I cloned a couple of VMs to the new datastores and let things settle out. BTW, I noticed the response times go up on the array when moving data with VAAI, but I believe that has always been the case.

I deleted a VM, and VMware runs the unmap command as expected in small intervals: about every 20 seconds I will see write response times spike and then come back down. This does seem to affect all VVs, but I need to run more tests to verify that. Since we don't delete a ton of VMs all the time, this seems like it may work in production. Also, you can turn it off from the VMware side if you get halfway into your migration and see that performance is suffering.

Anyone else using VMFS6 with unmap yet?
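For anyone who wants to try the "turn it off from the VMware side" option mentioned above, the per-datastore automatic reclaim setting can be inspected and changed with esxcli on ESXi 6.5+. A sketch; the datastore label `DS01` is a placeholder, and this needs to be run on (or against) each host mounting the datastore:

```shell
# Check the current automatic UNMAP (space reclamation) setting for a VMFS6 datastore
esxcli storage vmfs reclaim config get --volume-label=DS01

# Disable automatic UNMAP on that datastore, e.g. mid-migration if latency suffers
esxcli storage vmfs reclaim config set --volume-label=DS01 --reclaim-priority=none

# Re-enable it afterwards ("low" is the VMFS6 default)
esxcli storage vmfs reclaim config set --volume-label=DS01 --reclaim-priority=low
```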


 Post subject: Re: VMFS6 on 3.3.1
PostPosted: Wed Jan 03, 2018 11:07 am 

Joined: Mon Apr 24, 2017 2:46 pm
Posts: 5
We will use VMFS6 for our next installation too, but I do not expect any big impact on the whole system, because unmap is like sending zeroes to the 3PAR, which should be handled directly by the ASIC and not written down to disk.

Or is this a wrong assumption? Does 3PAR have to write zeroes to delete dirty chunklets after "unmapping" them?

Is there anyone else with more knowledge about this here?


 Post subject: Re: VMFS6 on 3.3.1
PostPosted: Wed Jan 03, 2018 12:20 pm 

Joined: Wed Nov 19, 2014 5:14 am
Posts: 505
3PAR doesn't have to write zeroes to the back end inline; zeroed regions are typically caught by zero detect on ingest and the space is then released asynchronously on the back end. But it will depend on the method VMware have implemented to signal the reclaim.

Keep in mind VMware have to support this feature across a whole bunch of arrays, not just 3PAR, which can mean implementing a more generic solution in order to support systems that handle the process differently. It could well be that latency is increased by the extra work required at the host, e.g. additional bandwidth being consumed for large reclaims, or simply that the system is already being stressed. The fact the latency seems to show on writes is interesting, since those should be hitting cache anyway and so in theory should remain largely unaffected from the array side.


 Post subject: Re: VMFS6 on 3.3.1
PostPosted: Thu Jan 04, 2018 7:09 am 

Joined: Wed Dec 04, 2013 6:14 am
Posts: 15
768kb wrote:
We will use VMFS6 for our next installation too, but I do not expect any big impact on the whole system, because unmap is like sending zeroes to the 3PAR, which should be handled directly by the ASIC and not written down to disk.

Or is this a wrong assumption? Does 3PAR have to write zeroes to delete dirty chunklets after "unmapping" them?

Is there anyone else with more knowledge about this here?


The result should be the same, but the process is a little different between sending zeroes and sending SCSI UNMAP commands.

The UNMAP command is a SCSI opcode (42h), so it does not need to be processed by the ASIC, and sending the UNMAP command to the system should be a lot faster than sending zeroes, because you are offloading the zeroing/block unmapping to the 3PAR system in the same way the other VAAI primitives work.

If you want a deep dive into the UNMAP command, you can read the SCSI Commands Reference Manual: https://www.seagate.com/files/staticfil ... 93068h.pdf


 Post subject: Re: VMFS6 on 3.3.1
PostPosted: Thu Jan 04, 2018 9:11 am 

Joined: Thu Jan 22, 2015 3:37 pm
Posts: 41
I think the main thing to know is that when you run the unmap command on a VMFS5 volume, it unmaps the entire datastore in one command, which can take several hours. During the unmap, performance will noticeably degrade; your response times will be poor whether an ASIC is involved or not. This is why VMware stopped doing it automatically (it had a bad impact on other arrays as well). So far I have only done my testing on 7440s; the performance impact may be less on the 8k and 20k lines, but I haven't gotten to that point in testing yet. VMFS6 will now do it in short, small commands spread out over several hours, but ultimately you won't have to do it manually anymore and it won't take the entire array down... or shouldn't...
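For comparison, the manual VMFS5 reclaim described above is the esxcli unmap command, which walks the datastore's free space in reclaim-unit-sized chunks. A sketch; the label `DS01` is a placeholder:

```shell
# Manually reclaim free space on a VMFS5 datastore.
# --reclaim-unit is the number of VMFS blocks reclaimed per iteration;
# a smaller value reduces the per-pass array impact but lengthens the run.
esxcli storage vmfs unmap --volume-label=DS01 --reclaim-unit=200
```

This is the operation that can run for hours and hit response times array-wide, which is exactly what the drip-fed automatic reclaim in VMFS6 is meant to avoid.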

