HPE Storage Users Group

A Storage Administrator Community




 Post subject: esxi 5.1 or 5.5 queue depth settings
PostPosted: Mon Jun 23, 2014 12:23 pm 

Joined: Thu Mar 21, 2013 11:38 am
Posts: 166
I have a 3PAR V400 array running with ESXi 5.1 hosts, soon to be upgraded to 5.5. I am curious what queue depth settings you run and why. I currently have the AQLEN for the adapters at 2176 and the DQLEN for the devices set at 64.
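(For anyone wanting to compare their own numbers, both values are visible live in esxtop on the host:

esxtop    # then press d for the disk adapter view (AQLEN column) or u for the disk device view (DQLEN column)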


 Post subject: Re: esxi 5.1 or 5.5 queue depth settings
PostPosted: Tue Jun 24, 2014 5:28 am 

Joined: Sun Jul 29, 2012 9:30 am
Posts: 576
We have experimented with a few different settings and found basically no difference for normal workloads. We recently had SQL performance issues, and to prove the HBA was not the bottleneck we raised the DQLEN to 256; performance did not move. Prior to 5.5 the VMware Disk.SchedNumReqOutstanding setting has an impact when more than one VM shares a datastore/volume; in 5.5 that setting moves from a host-wide value to a per-device one. Prior to 5.5 I believe the default was 32, meaning that if more than one VM is on a datastore, each VM is throttled to a queue depth of 32 to keep a single VM from overrunning the HBA. In our environment we have not seen any impact from changing any of these settings, since none of our systems drives enough IO to overwhelm the HBA or the array.
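For reference, these are the knobs in CLI form. The device ID and module name below are placeholders/examples (QLogic shown; Emulex uses the lpfc module and lpfc_lun_queue_depth), so adjust for your own HBAs:

# ESXi 5.1 and earlier: host-wide outstanding-IO limit
esxcli system settings advanced set -o /Disk/SchedNumReqOutstanding -i 32

# ESXi 5.5: the same limit, now set per device
esxcli storage core device set -d naa.<device id> --sched-num-req-outstanding 32

# HBA LUN queue depth via the driver module parameter (takes effect after a reboot)
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=64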

There are numerous VMware blogs and articles that talk about tweaking HBAs, but I have not had much time to read the latest info on 5.5, since it changes things yet again.

Some optimizations we do across the board to help all VMs:

- we use the vmxnet3 NIC driver on all guests that support it without issues, usually Linux and Windows 2008 R2+. Note that vmxnet3 alone provides very little improvement if you do not also go into the guest and turn on TCP offload and the like; with that done we have seen huge gains on network-heavy apps (offload commands are sketched after this list).

- we use paravirtual SCSI adapters on ALL guests, Windows 2003+ and Linux. As an additional optimization on SQL boxes we mount individual vmdks for DATA, LOG and TEMPDB, each on its own logical SCSI adapter (see the vmx sketch after this list). To me this is more important than tweaking HBA settings, as you are letting the guest OS better manage its IO.

- we moved away from RDMs, since with the paravirtual SCSI adapter and tuning advances in vSphere the performance gains of RDMs are negligible in most cases today.

- we disable all power management settings on the ESXi hosts and also in the Windows guests (commands after this list).

- each ESXi host has either local storage or a dedicated LUN for guest swap files (we do not store them with the VMs; see the note at the end). This does increase vMotion times a bit, but it is well worth it from a management standpoint.
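On the vmxnet3 point, the guest-side offloads are what actually buy the gains. Interface names here are placeholders and the available offload list varies by OS version:

# Linux guest: list current offload state, then enable the big ones
ethtool -k eth0
ethtool -K eth0 tso on gro on lro on

# Windows 2008 R2+ guest: enable receive-side scaling (and chimney offload where supported)
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=enabled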
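On the SQL layout, the per-adapter split ends up looking roughly like this in the .vmx (filenames are placeholders, and normally you would add the adapters through the vSphere client rather than hand-editing the file):

scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "sql-data.vmdk"
scsi2.present = "TRUE"
scsi2.virtualDev = "pvscsi"
scsi2:0.present = "TRUE"
scsi2:0.fileName = "sql-log.vmdk"
scsi3.present = "TRUE"
scsi3.virtualDev = "pvscsi"
scsi3:0.present = "TRUE"
scsi3:0.fileName = "sql-tempdb.vmdk"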
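On the power settings, the host side is the server BIOS plus the ESXi power policy, and the guest side is just the standard Windows high-performance plan; roughly:

# ESXi host: force the high-performance CPU power policy
esxcli system settings advanced set -o /Power/CpuPolicy -s "High Performance"

# inside the Windows guest: switch to the High performance plan
powercfg /setactive SCHEME_MIN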
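And on the swap placement, the host-wide location is set in the client (host Configuration tab > Virtual Machine Swapfile Location); an individual VM can also be pointed at a specific datastore with a .vmx entry like this, where the datastore name is a placeholder:

sched.swap.dir = "/vmfs/volumes/local-swap/"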

