HPE Storage Users Group

A Storage Administrator Community




 Post subject: Queue Stats?
PostPosted: Fri Feb 27, 2015 2:02 pm 

Joined: Fri Feb 27, 2015 2:00 pm
Posts: 5
What does WrtSched mean? Trying to narrow down an elusive performance problem. Curious if this looks normal or not...

Queue Statistics
Node   Free   Clean  Write1  WriteN  WrtSched  Writing  DcowPend  DcowProc
   0  31304  383102     125     222      1528       51         0         0
   1  29283  383570     228      64      1067        0         0         0
   2  31074  382624     397      74      1132       16         0         0
   3  29687  380824    1781     136      1174       88         0        64


 Post subject: Re: Queue Stats?
PostPosted: Fri Feb 27, 2015 2:55 pm 

Joined: Thu Dec 06, 2012 1:25 pm
Posts: 138
Those are cache pages that are scheduled to be written to disk.

Care to share what your problem is? (And what does the block of data below that one look like?)
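
In case it helps anyone reading along: those queue counters come from the cache-memory page statistics, and (if I remember the CLI right) you can watch them over a few intervals rather than a single snapshot with something like:

Code:
# sample the cache-memory page / queue statistics every 5 seconds, 12 times
statcmp -d 5 -iter 12

Roughly speaking, if WrtSched keeps climbing interval after interval while Free shrinks, the cache destage is falling behind; steady numbers like the ones posted look less alarming.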

_________________
The goal is to achieve the best results by following the client's wishes. If they want to have a house built upside down, standing on its chimney, it's up to you to figure out how to do it, while still making it usable.


 Post subject: Re: Queue Stats?
PostPosted: Fri Feb 27, 2015 2:59 pm 

Joined: Fri Feb 27, 2015 2:00 pm
Posts: 5
Thanks! and sure:

Basically: VMware snapshots are taking hours to commit (they normally took about 15 minutes). There doesn't appear to be any contention on the VMware side of the house, and the link utilization is relatively low (FCoE).

Queue Statistics
Node   Free   Clean  Write1  WriteN  WrtSched  Writing  DcowPend  DcowProc
   0  29777  381275     419     184      1194       37         0         0
   1  29262  381185     441      90       907        0         0         0
   2  30311  378544     357      91      1062       20         0         0
   3  29438  379499    1974      10      1338        0         0        64

Temporary and Page Credits
Node  Node0  Node1  Node2  Node3  Node4  Node5  Node6  Node7
   0      0  18105  17016  17155    ---    ---    ---    ---
   1  18613      0  17308  17495    ---    ---    ---    ---
   2  18630  17356      1  18889    ---    ---    ---    ---
   3  16377  17733  18864      4    ---    ---    ---    ---

Page Statistics
      ---------CfcDirty---------  ----------CfcMax----------  ----------DelAck----------
Node  FC_10KRPM  FC_15KRPM  NL  SSD  FC_10KRPM  FC_15KRPM  NL  SSD  FC_10KRPM  FC_15KRPM  NL  SSD
   0       1834          0   0    0     115200          0   0    0    3296689          0   0    0
   1       1438          0   0    0     115200          0   0    0     350292          0   0    0
   2       1530          0   0    0     115200          0   0    0     201338          0   0    0
   3       2367          0   0    0     115200          0   0    0       2332          0   0    0


 Post subject: Re: Queue Stats?
PostPosted: Fri Feb 27, 2015 3:29 pm 

Joined: Thu Dec 06, 2012 1:25 pm
Posts: 138
Hmm, quite an uneven spread of delacks over the nodes. The delack counter of node 0 seems to be on the large side as well (although I don't know the uptime of the system).
The data going to the FC tier on each node is not that bad, though.

I wouldn't expect massive write latency if I see these numbers.

What does statvlun -hostsum -rw -ni look like? Are IOs to certain hosts exceptionally slow?
And to get a feel for how much the system is doing: how many IOs is the system handling during the snapshot clean-up, at what bandwidth, and with what response time?
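
Something like this gives a rolling picture instead of a single sample (flags from memory, so double-check them on 3.1.2):

Code:
# per-host read/write summary, 5-second intervals, 12 iterations, idle entries suppressed
statvlun -hostsum -rw -ni -d 5 -iter 12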

I see the system is a 7400, still running 3.1.2. How many FC drives are installed?

_________________
The goal is to achieve the best results by following the client's wishes. If they want to have a house built upside down, standing on its chimney, it's up to you to figure out how to do it, while still making it usable.


 Post subject: Re: Queue Stats?
PostPosted: Fri Feb 27, 2015 3:45 pm 

Joined: Fri Feb 27, 2015 2:00 pm
Posts: 5
Yeah, that's why I just can't tell where the bottleneck is. Wish I could use a fixed-width font!

192 10K drives

5:41:35 02/27/2015 r/w I/O per second KBytes per sec Svt ms IOSz KB
Hostname Cur Avg Max Cur Avg Max Cur Avg Cur Avg Qlen
8_FC r 215 215 215 1624 1624 1624 0.37 0.37 7.6 7.6 -
8_FC w 547 547 547 12289 12289 12289 2.95 2.95 22.5 22.5 -
8_FC t 762 762 762 13913 13913 13913 2.22 2.22 18.3 18.3 1
4_FC r 485 485 485 9015 9015 9015 2.38 2.38 18.6 18.6 -
4_FC w 200 200 200 6498 6498 6498 1.71 1.71 32.4 32.4 -
4_FC t 685 685 685 15513 15513 15513 2.18 2.18 22.6 22.6 1
2_FC r 287 287 287 3493 3493 3493 0.79 0.79 12.2 12.2 -
2_FC w 353 353 353 1516 1516 1516 1.40 1.40 4.3 4.3 -
2_FC t 640 640 640 5009 5009 5009 1.13 1.13 7.8 7.8 0
5_FC r 413 413 413 6166 6166 6166 6.65 6.65 14.9 14.9 -
5_FC w 210 210 210 1077 1077 1077 2.28 2.28 5.1 5.1 -
5_FC t 623 623 623 7243 7243 7243 5.18 5.18 11.6 11.6 4
6_FC r 735 735 735 16404 16404 16404 7.65 7.65 22.3 22.3 -
6_FC w 373 373 373 6873 6873 6873 1.43 1.43 18.4 18.4 -
6_FC t 1109 1109 1109 23278 23278 23278 5.55 5.55 21.0 21.0 3
7_FC r 12 12 12 47 47 47 1.36 1.36 3.8 3.8 -
7_FC w 74 74 74 532 532 532 1.16 1.16 7.2 7.2 -
7_FC t 87 87 87 579 579 579 1.18 1.18 6.7 6.7 1
3_FC r 749 749 749 4050 4050 4050 0.91 0.91 5.4 5.4 -
3_FC w 235 235 235 1482 1482 1482 1.56 1.56 6.3 6.3 -
3_FC t 985 985 985 5532 5532 5532 1.07 1.07 5.6 5.6 1
1_FC r 177 177 177 13094 13094 13094 5.68 5.68 73.9 73.9 -
1_FC w 224 224 224 1194 1194 1194 1.47 1.47 5.3 5.3 -
1_FC t 401 401 401 14289 14289 14289 3.33 3.33 35.6 35.6 1
--------------------------------------------------------------------------------
9 r 3076 3076 53900 53900 3.75 3.75 17.5 17.5 -
9 w 2218 2218 31466 31466 1.91 1.91 14.2 14.2 -
9 t 5293 5293 85366 85366 2.98 2.98 16.1 16.1 12


 Post subject: Re: Queue Stats?
PostPosted: Fri Feb 27, 2015 3:48 pm 

Joined: Fri Feb 27, 2015 2:00 pm
Posts: 5
I wouldn't say they are exceptionally slow - 2-8 ms latency. Lots of really small IOs, so throughput is actually under 100 Mb/s (yeah, megabit). But the host running the snapshot clean-up is pushing ~2,000 IOPS.
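
(Quick sanity check, assuming the ~6 KB average IO size shown in the stats above: ~2,000 IOPS × ~6 KB is roughly 12 MB/s, i.e. about 96 Mbit/s, so the sub-100-megabit throughput is consistent with lots of small IOs rather than a starved link.)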


 Post subject: Re: Queue Stats?
PostPosted: Fri Feb 27, 2015 4:30 pm 

Joined: Thu Dec 06, 2012 1:25 pm
Posts: 138
Code:
5:41:35 02/27/2015 r/w I/O per second    KBytes per sec    Svt ms   IOSz KB
          Hostname      Cur  Avg  Max   Cur   Avg   Max  Cur  Avg  Cur  Avg Qlen
  8_FC   r  215  215  215  1624  1624  1624 0.37 0.37  7.6  7.6    -
  8_FC   w  547  547  547 12289 12289 12289 2.95 2.95 22.5 22.5    -
  8_FC   t  762  762  762 13913 13913 13913 2.22 2.22 18.3 18.3    1
  4_FC   r  485  485  485  9015  9015  9015 2.38 2.38 18.6 18.6    -
  4_FC   w  200  200  200  6498  6498  6498 1.71 1.71 32.4 32.4    -
  4_FC   t  685  685  685 15513 15513 15513 2.18 2.18 22.6 22.6    1
  2_FC   r  287  287  287  3493  3493  3493 0.79 0.79 12.2 12.2    -
  2_FC   w  353  353  353  1516  1516  1516 1.40 1.40  4.3  4.3    -
  2_FC   t  640  640  640  5009  5009  5009 1.13 1.13  7.8  7.8    0
  5_FC   r  413  413  413  6166  6166  6166 6.65 6.65 14.9 14.9    -
  5_FC   w  210  210  210  1077  1077  1077 2.28 2.28  5.1  5.1    -
  5_FC   t  623  623  623  7243  7243  7243 5.18 5.18 11.6 11.6    4
  6_FC   r  735  735  735 16404 16404 16404 7.65 7.65 22.3 22.3    -
  6_FC   w  373  373  373  6873  6873  6873 1.43 1.43 18.4 18.4    -
  6_FC   t 1109 1109 1109 23278 23278 23278 5.55 5.55 21.0 21.0    3
  7_FC   r   12   12   12    47    47    47 1.36 1.36  3.8  3.8    -
  7_FC   w   74   74   74   532   532   532 1.16 1.16  7.2  7.2    -
  7_FC   t   87   87   87   579   579   579 1.18 1.18  6.7  6.7    1
  3_FC   r  749  749  749  4050  4050  4050 0.91 0.91  5.4  5.4    -
  3_FC   w  235  235  235  1482  1482  1482 1.56 1.56  6.3  6.3    -
  3_FC   t  985  985  985  5532  5532  5532 1.07 1.07  5.6  5.6    1
  1_FC   r  177  177  177 13094 13094 13094 5.68 5.68 73.9 73.9    -
  1_FC   w  224  224  224  1194  1194  1194 1.47 1.47  5.3  5.3    -
  1_FC   t  401  401  401 14289 14289 14289 3.33 3.33 35.6 35.6    1
--------------------------------------------------------------------------------
                 9   r 3076 3076      53900 53900       3.75 3.75 17.5 17.5    -
                 9   w 2218 2218      31466 31466       1.91 1.91 14.2 14.2    -
                 9   t 5293 5293      85366 85366       2.98 2.98 16.1 16.1   12

_________________
The goal is to achieve the best results by following the client's wishes. If they want to have a house built upside down, standing on its chimney, it's up to you to figure out how to do it, while still making it usable.


 Post subject: Re: Queue Stats?
PostPosted: Fri Feb 27, 2015 4:31 pm 

Joined: Thu Dec 06, 2012 1:25 pm
Posts: 138
The queue seems awfully low.
Response times are great for a system with no SSD.
What did you configure as the max queue depth in VMware?
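
For reference, and strictly from memory (so treat the exact syntax, and the naa.1234 device name, as placeholders to verify), on ESXi 5.5 and later the per-device outstanding-IO limit can be checked and raised with:

Code:
# show the current queue-depth related limits for one device
esxcli storage core device list -d naa.1234
# raise the scheduler's outstanding-IO limit for that device
esxcli storage core device set -d naa.1234 -O 256

Note that this scheduler limit is separate from the HBA driver's own LUN queue depth, which is set as a driver module parameter.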

_________________
The goal is to achieve the best results by following the client's wishes. If they want to have a house built upside down, standing on its chimney, it's up to you to figure out how to do it, while still making it usable.


 Post subject: Re: Queue Stats?
PostPosted: Fri Feb 27, 2015 5:59 pm 

Joined: Fri Feb 27, 2015 2:00 pm
Posts: 5
On the host deleting the snapshot I tried 256.


 Post subject: Re: Queue Stats?
PostPosted: Fri Feb 27, 2015 6:12 pm 

Joined: Thu Dec 06, 2012 1:25 pm
Posts: 138
Hmm, out of ideas for the moment (might be because of the IPA I've been drinking).

I'd say it is not the 3PAR.
You'll have to dig into the VMware settings and check VMware-side latency, CPU load, and whatnot.
Depending on the VMware version, it might be doing some dynamic throttling as well.

Maybe another forum member can guide you through the next steps, but you could start by checking the 3PAR host implementation guide for VMware, to double-check that you are running according to the advised best practices.
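
If you want to rule out the adaptive throttling mentioned above, the relevant advanced settings (names from memory, so verify them on your ESXi build) can be inspected with:

Code:
# adaptive queue-depth throttling is active when QFullSampleSize is non-zero
esxcli system settings advanced list -o /Disk/QFullSampleSize
esxcli system settings advanced list -o /Disk/QFullThreshold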

_________________
The goal is to achieve the best results by following the client's wishes. If they want to have a house built upside down, standing on its chimney, it's up to you to figure out how to do it, while still making it usable.

