HPE Storage Users Group

A Storage Administrator Community




 Post subject: 4Gbps ceiling for single file operation on 3PAR range?
PostPosted: Thu Mar 15, 2018 11:57 am 

Joined: Thu Sep 27, 2012 12:15 pm
Posts: 6
First off, apologies if I'm not immediately able to relate all necessary information: storage is not my specialism, so it may take some to-ing and fro-ing to provide it :)

I'm interested in maximising the performance for a single large (~1TB) file read operation.

I have tried various CentOS 7, Windows 2012 R2 and Windows 10 clients, connected over a single 8Gb FC path to a dedicated 10TB NL LUN via an HP StorageWorks 8Gb SAN switch.

I've tried reading the file from a P10K with 8Gb FC ports, but the maximum read speed I hit is around 3.2Gbps.
If I start off another read from the same disk, I also get a 3.2Gbps read operation (as you might expect).
If I start off a third, all three drop to around 2.5Gbps, and so on. So it's behaving as though the 8Gbps bandwidth is available in this case, but each file operation has a ceiling of 4Gbps.
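
For reference, this is roughly how I'm measuring it (a quick Python sketch; the mount point and file names are just placeholders for my setup, and note the OS page cache can flatter the numbers if a file has been read recently):

Code:
import threading
import time

BLOCK = 8 * 1024 * 1024  # 8 MiB per read() call

def read_stream(path, results, idx):
    # Sequentially read the whole file and record throughput in Gbps.
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    results[idx] = (total * 8) / elapsed / 1e9  # bytes -> bits -> Gbps

# Placeholder paths on the NL LUN; one entry per concurrent stream.
paths = ["/mnt/nl_lun/bigfile1", "/mnt/nl_lun/bigfile2"]
results = [0.0] * len(paths)
threads = [threading.Thread(target=read_stream, args=(p, results, i))
           for i, p in enumerate(paths)]
for t in threads:
    t.start()
for t in threads:
    t.join()
for i, gbps in enumerate(results):
    print("stream %d: %.2f Gbps" % (i, gbps))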

The P10K has a 4Gbps shelf speed, I believe, but the array has multiple shelves, with the VV group made up of disks across multiple shelves (as you'd expect). I think it's running Inform 3.1.2.

I've tried the same configuration and test with an 8200, with the same result.
I don't see that it can be client hardware/firmware/software as I get the same result on everything I try.
I plan to fibre a host directly into one of the array ports, to try to eliminate the SAN switch (as that's a common factor).
I don't see why the 3PAR would be limited to 4G per operation, but maybe it is? I thought it worth comparing the 8200 just because of the newer technology.

If I use an internal NVMe as the read source on the same machines, I can get up to 20Gbps.

Thanks for any advice
Darren


 Post subject: Re: 4Gbps ceiling for single file operation on 3PAR range?
PostPosted: Thu Mar 15, 2018 2:54 pm 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
Not sure I see the question here, but...

From my understanding, the 3PAR was never designed for (nor does it perform exceptionally well on) single sequential operations. This is the only scenario where I've found an EVA to outperform a 3PAR, and it's also why I don't think the 3PAR is often used as a backup target. The 3PAR performs best in an environment where multiple hosts/streams are doing multiple random operations. Also remember that NL drives are not capable of providing a lot of IOPS, and due to their large capacity you will usually not have many of them in the system, which again doesn't help with performance... NL drives are also usually configured with RAID 6, and you are running a very old Inform OS: 3.1.3 and 3.2.1 added new RAID 6 code which improved performance for some workloads.

The 10k/V class has 2x 4Gbps connections to each cage, if my memory serves me right, and backend speed on the drives is also 4Gbps. But if you have a 10k/V class with only one cage, there has been a very skilled salesperson in the picture: a node pair on the 10k/V class could have up to 12 cages at 2x 4Gbps per cage, and with 4 node pairs you could have rather high backend capacity (even by today's standards). This is based on the "old FC design" where you would add disks to increase performance rather than capacity, while today, in the world of SSDs, we do the opposite: mainly adding capacity and not a lot of extra performance.
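
Back-of-envelope on the backend bandwidth, if it helps (these figures are my recollection of the cage wiring, not a datasheet):

Code:
# Rough backend bandwidth maths for a fully built 10k/V class,
# based on the 2x 4Gbps loops per cage mentioned above.
per_cage_gbps = 2 * 4                      # 8 Gbps per cage
per_node_pair_gbps = 12 * per_cage_gbps    # up to 12 cages per node pair = 96 Gbps
full_system_gbps = 4 * per_node_pair_gbps  # 4 node pairs = 384 Gbps raw backend
print(per_node_pair_gbps, full_system_gbps)  # 96 384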

Going forward with NVMe (or basically anything flash-based and local), you will almost never be able to get the same performance from an external array. NVMe drives and IO accelerators are connected directly to the PCIe bus in your server, and trying to beat that is almost impossible.

Edit: GBps vs Gbps :)

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


Last edited by MammaGutt on Thu Mar 15, 2018 5:57 pm, edited 1 time in total.

 Post subject: Re: 4Gbps ceiling for single file operation on 3PAR range?
PostPosted: Thu Mar 15, 2018 3:58 pm 

Joined: Thu Sep 27, 2012 12:15 pm
Posts: 6
Thanks so much for the quick reply.

You're right in that there wasn't actually much of a question, more a request for confirmation that the single sequential performance wasn't unexpected. While I couldn't see a physical reason for it (although there probably is one), I'd explained it away to myself by the fact that the design is more likely optimised for high concurrency in the enterprise, with bulk small-file IO, than for maximising single operations: more of a coach than a motorbike.

It's very interesting to hear the EVA was more adept at this use-profile than the 10k; I didn't realise that. I have an NC/FC-equipped 20K on pallets in storage at the moment. It'll be interesting to see how that compares, but I suspect there won't be a significant difference for this kind of operation.

Inform version was a typo: I'd meant to type 3.2.1 :)


 Post subject: Re: 4Gbps ceiling for single file operation on 3PAR range?
PostPosted: Thu Mar 15, 2018 5:55 pm 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
To be honest, if your starting point is 50 NL drives, you shouldn't be surprised if the MSA is faster than the 3PAR on a single stream from a single host. However, if you do 20 streams from 20 hosts with a mixed workload, I would be surprised if the MSA were able to keep up. As you say, the 3PAR is a mid-range to enterprise storage system; one single stream isn't the first thing that pops into my head with that description.

I hope you got some SSDs with that 20k :). Those systems can be beasts, but they could easily be left idling if there isn't enough performance in the back-end. I don't recall the numbers for the 20k, but the system can support some millions of IOPS... if you consider that a 10k drive is "rated" at 150 IOPS, your maths skills don't need to be all that great to see that you're going to need a lot of those "150"s to reach 1,000,000, while SSDs might do something like 50,000 per SSD.
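
To put rough numbers on that (rule-of-thumb figures from the paragraph above, not vendor specs):

Code:
# Rule-of-thumb spindle maths, not vendor specs.
target_iops = 1_000_000
hdd_10k_rpm_iops = 150   # rough figure for one 10k RPM drive
ssd_iops = 50_000        # rough figure for one SSD

print(target_iops // hdd_10k_rpm_iops)  # ~6666 spinning drives
print(target_iops // ssd_iops)          # 20 SSDs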

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.

