HPE Storage Users Group

A Storage Administrator Community




 Post subject: service time/latency difference
PostPosted: Wed Feb 18, 2015 4:06 pm 

Joined: Thu May 08, 2014 4:43 pm
Posts: 62
4-node 7400 with 8 host ports, 8 disk ports, and 8 cages; mostly FC (160 drives) with some SSD for a handful of AO LUNs.

statpd -d 5 -p -devtype FC
During benchmarks for some new software I noticed disk latency was 8ms - 10ms (Cur and Avg, total). Now it's 11ms - 13ms.

statvlun -vvsum -ni -rw -d 5
Looking at all LUNs (well, all active LUNs), the total is only 1.65ms to 1.25ms.

statport -rw -ni -disk -d 5
Also, looking at the disk ports I see 9.01ms - 6.16ms (Cur - Avg for total).

Finally, looking at just the subset of LUNs running the actual benchmark, they are 3.39ms to 1.92ms, floating up to 6ms and even 10ms or a little higher at times.

All that to ask: when looking at all PDs, all VVs, and all ports, why aren't their service times closer together?

And is there any cause for concern when PDs (or VVs or ports) get over a certain service time?


 Post subject: Re: service time/latency difference
PostPosted: Thu Feb 19, 2015 2:11 am 

Joined: Wed May 07, 2014 1:51 am
Posts: 267
Hi,

as far as I know:

PD service times show the bare-metal service times of the disks themselves, which means any I/O served from cache is not included here.

VV service time is a local, storage-side view (I'm not sure exactly what is or isn't included here).

VLUN service time is closest to what the server sees, including time spent on mirroring, acknowledging I/O back to the server, and possibly retransmits caused by SAN problems, etc.

Port service times: not sure where this fits...
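To illustrate the first point: because cache hits show up in the front-end (VLUN) numbers but never touch the disks, the front-end average can sit far below the bare-metal PD average. A back-of-the-envelope sketch with made-up numbers (the latencies and hit rate below are assumptions, not measurements from this array):

```python
# Hypothetical numbers to show why front-end (statvlun) service time
# can be much lower than back-end (statpd) service time on the same box:
# cache hits are counted on the front end but never reach the disks.

cache_hit_latency_ms = 0.2   # assumed service time for a cache hit
disk_latency_ms = 10.0       # assumed bare-metal disk time (what statpd reports)
hit_rate = 0.90              # assumed cache hit ratio

# Front-end average is weighted across hits and misses
frontend_ms = hit_rate * cache_hit_latency_ms + (1 - hit_rate) * disk_latency_ms

print(f"front-end avg: {frontend_ms:.2f} ms")   # 1.18 ms
print(f"back-end avg:  {disk_latency_ms:.2f} ms")
```

With a 90% hit rate the front end averages about 1.2ms even though the disks themselves are at 10ms, which is roughly the gap described in the original post.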

_________________
When all else fails, read the instructions.


 Post subject: Re: service time/latency difference
PostPosted: Thu Feb 19, 2015 7:03 am 

Joined: Wed Nov 19, 2014 5:14 am
Posts: 505
As above

statvlun should provide a view very similar to what the host sees, and statport -host gives a similar, albeit aggregate, view. Treat those as the front end, which benefits from cache; everything else is back end, so you'll typically see higher latencies there.

statpd shows how heavily loaded the back-end disks are, taking into account the write amplification required for RAID. In your case, statport -disk is showing all I/O traversing the back-end disk ports, including those RAID overheads.
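As a rough illustration of that write amplification: the classic textbook penalties are 2 back-end I/Os per host write for RAID 1 and 4 for RAID 5 (read old data, read old parity, write new data, write new parity). Real 3PAR behavior (full-stripe writes, cache coalescing) can reduce these, so treat this as a sketch, not array-accurate math:

```python
# Textbook small-write penalties: back-end I/Os generated per host write.
# Actual array behavior may be better thanks to caching and full-stripe writes.
write_penalty = {"raid1": 2, "raid5": 4, "raid6": 6}

host_write_iops = 5000  # hypothetical host-side write rate

for level, penalty in write_penalty.items():
    backend_iops = host_write_iops * penalty
    print(f"{level}: {host_write_iops} host writes -> {backend_iops} back-end I/Os")
```

So the back-end disks and disk ports legitimately carry several times the host-visible write load, which is another reason the back-end numbers run hotter than statvlun suggests.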



