MammaGutt wrote:
So, based on your attached spreadsheet, the volumes with the hottest blocks aren't big... and again the total IOPS shouldn't be that much.
Of course it will make a difference, but if you are hitting the high watermark for the FC 10k drives today it will not be enough. Not sure which report it is from, but the volumes with the highest avg-tot are small volumes. I'm guessing the numbers are acc/(GiB*min), so they show active blocks, but in total it just doesn't add up. Not the best example, but hopefully it shows my point.
VDIDatastore3 is ~550GB with an average of ~82 acc/(GiB*min). Multiplied out, that is 45,100 acc/min.
SQL-PRD-TEMP is ~10GB with an average of ~370 acc/(GiB*min). Multiplied out, that is 3,700 acc/min.
So even though SQL-PRD-TEMP has hotter blocks (roughly 5x hotter), it produces less than 10% of the IOPS of the big VDIDatastore3 volume.
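To make the math explicit, here is the same calculation as a tiny Python sketch (assuming the avg-tot column really is acc/(GiB*min); the names and numbers are just the two examples above):
Code:
# Total accesses/min for a VV = exported size (GiB) * I/O density (acc/(GiB*min)).
# Volume names and figures are the two examples from this thread.
def total_acc_per_min(size_gib: float, density_acc_per_gib_min: float) -> float:
    return size_gib * density_acc_per_gib_min

volumes = {
    "VDIDatastore3": (550, 82),   # ~550 GiB at ~82 acc/(GiB*min)
    "SQL-PRD-TEMP": (10, 370),    # ~10 GiB at ~370 acc/(GiB*min)
}

for name, (size, density) in volumes.items():
    print(f"{name}: {total_acc_per_min(size, density):,.0f} acc/min")
# VDIDatastore3: 45,100 acc/min
# SQL-PRD-TEMP: 3,700 acc/min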
Taking this a bit further... the minimum disk count per tier is 6 for SSD. BP states that the smallest disk shouldn't be less than 50% of the biggest disk (I tend to ignore this as long as I have at least one full stripe for each size and, in a best-case scenario, an equal number of small and large SSDs). So if you follow BP and create a new CPG, you are 2 SSDs short of a supported configuration with 4x 920GB. Also, you will be allowed to configure R5 3+1, but if you intend to use this for anything other than AO, you are at risk of hitting this:
https://support.hpe.com/hpsc/doc/public ... cale=en_US
So then you're down to R5 2+1 or R1... Either way you're not going to get a lot of capacity out of those 4x 920GB SSDs. I'm sorry, but if you want good advice, you need to go back to whoever ordered those 4x 920GB and say that you need 4 more for this to provide any real change. With only 4 drives you will have a difficult time even creating a supported configuration, and you will lose such a big percentage of the added capacity that it's terrible "bang for the buck". 8x 400/480GB would have been a HUGE difference at not much of a price difference.....
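For a rough feel of what those 4x 920GB drives would actually yield under each layout, here is a back-of-the-envelope sketch (raw parity/mirror ratios only; it ignores 3PAR chunklet, spare and admin overhead, so treat these as upper bounds):
Code:
# Rough raw-capacity yield for 4x 920GB SSDs under each RAID choice above.
# Only the parity/mirror ratio is applied; real usable numbers will be lower.
raw_gb = 4 * 920

layouts = {
    "R1 (mirror)": 1 / 2,   # half the raw space
    "R5 2+1":      2 / 3,   # one parity per two data
    "R5 3+1":      3 / 4,   # one parity per three data
}

for name, ratio in layouts.items():
    print(f"{name}: ~{raw_gb * ratio:,.0f} GB usable (of {raw_gb} GB raw)")
# R1 (mirror): ~1,840 GB
# R5 2+1:      ~2,453 GB
# R5 3+1:      ~2,760 GB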
As for those that installed your 3PAR... Did you check "AdmissionTime" in showpd -i? I'm just taking a stab here, but it could be that the system has been expanded over time and only had one cage when the SSDs were installed.....
Starting to make more sense to me now.. Thanks so much for breaking this all down..
So what is the best way to figure out whether it's a problem VM, a problem VV, etc.? From what you're saying, I should be multiplying the VV space used by the average total acc/(GiB*min), which will give me the total acc/min for each VV..
I'm assuming acc/min is equivalent to I/O per second * 60?
I've attached another spreadsheet.. It has 3 tabs where I ran
srrgiodensity -btsecs -1d -cpg * -rw -withvv -pct each morning.. I added two columns at the end (SpaceGB and Total Acc/Min)..
So am I looking at this right, in that the total acc/min (space used * average total acc/(GiB*min)) is the number to be concerned about?
One other question.. NinjaStars shows that the 28 SAS drives in R5 can do around 3,000 IOPS.. So is it as simple as saying the 3PAR can do 3,000 * 60 = 180,000 acc/min? And if I add up the entire last column of my spreadsheet, which is the acc/min for each VV, would that give me the total I/O to compare against what the SAN is actually capable of?
I guess that can't be right though.. Because that number is 315,000 ACC/MIN..
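For reference, this is the calculation I'm doing, as a small Python sketch (pandas/openpyxl; "SpaceGB" and "Total Acc/Min" are the columns I added, while the density column name is just a placeholder for whatever the srrgiodensity export calls its average-total figure):
Code:
# Sketch of the spreadsheet math above. Column names other than the two I
# added are placeholders; adjust to match the actual srrgiodensity export.
import pandas as pd

df = pd.read_excel("srrgiodensity-3par.xlsx", sheet_name=0)

# Per-VV load: space used (GiB) * average density in acc/(GiB*min)
df["Total Acc/Min"] = df["SpaceGB"] * df["AvgTotAccPerGiBMin"]

total_acc_min = df["Total Acc/Min"].sum()
implied_iops = total_acc_min / 60        # convert acc/min back to IOPS

backend_iops = 3000                      # NinjaStars estimate for 28x SAS in R5
backend_acc_min = backend_iops * 60

print(f"Sum across VVs: {total_acc_min:,.0f} acc/min (~{implied_iops:,.0f} IOPS)")
print(f"Estimated backend: {backend_acc_min:,} acc/min ({backend_iops:,} IOPS)")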
Appreciate your time mate.. Learning a ton.
EDIT TO ADD: Update.. The order hasn't been placed yet.. I just got a quote for (8) x 480GB SSDs for $13.7K compared to the $11.8K..
Attachment:
srrgiodensity-3par.xlsx [32.46 KiB]