HPE Storage Users Group
https://3parug.com/

add and remove tier 3 to AO
https://3parug.com/viewtopic.php?f=18&t=3223
Page 1 of 1

Author:  godfather007 [ Sun Jun 23, 2019 11:35 pm ]
Post subject:  add and remove tier 3 to AO

Hi,

we have a simple 8200-series setup with:
10 × 2 TB SSD
36 × 1.8 TB HDD


A balanced AO config has been pre-created with two CPGs:
Tier 0: SSD 8+2 RAID 6
Tier 1: HDD 6+2 RAID 6

Out of curiosity about space, I'm thinking of creating a slower 14+2 RAID 6 CPG and adding it as Tier 2.
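The space gain from a larger set size is easy to quantify. A minimal sketch using the disk counts from this setup ("usable" here means the RAID data/parity ratio only, ignoring sparing and metadata overhead):

```python
# Usable fraction of raw capacity for a RAID 6 set with `data` data
# chunklets and 2 parity chunklets per stripe.
def usable_fraction(data: int, parity: int = 2) -> float:
    return data / (data + parity)

raw_tb = 36 * 1.8  # 36 HDDs of 1.8 TB each
for data in (6, 14):
    frac = usable_fraction(data)
    print(f"{data}+2: {frac:.1%} usable -> ~{raw_tb * frac:.1f} of {raw_tb:.1f} TB raw")
```

So moving the HDD tier from 6+2 to 14+2 would raise the usable ratio from 75% to 87.5%, roughly 8 TB more out of this raw capacity.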

Question 1:
Is that smart? Can I afterwards remove Tier 2 from the AO config if it underperforms or doesn't satisfy my needs? Will AO move those regions back to Tiers 0 and 1?
Question 2:
Should I put a growth limit on Tier 1 to make the "estimated" space larger?
Question 3:
Does AO (given enough time) automatically move hot and cold blocks between those CPGs, or does it mainly keep them in the faster tiers?


Another Question:
Can I adjust a CPG's set size from 6+2 to 14+2, or change it back, on the fly?


Answer 1:
I found that setting the growth limit to 1 MB stops the Tier 2 CPG from growing. After AO movement(s) it should shrink and can then be removed?

Thanks a lot.

Martijn

Author:  MammaGutt [ Mon Jun 24, 2019 10:40 am ]
Post subject:  Re: add and remove tier 3 to AO

What makes you believe stripe size (14+2 vs 6+2) has any impact on performance?

And as a side note, it is a big no-no to have the same physical disks in different tiers in an AO config.

Author:  godfather007 [ Tue Jun 25, 2019 1:18 am ]
Post subject:  Re: add and remove tier 3 to AO

6+2 and 6+2 and 6+2 and 6+2 (36) can act in parallel as four RAID 0-striped RAID 6 sets, which is better for writing. But writing is done on the SSD CPG anyway. Effectively there are 24 simultaneously reading drives here.

14+2 and 14+2 (32) does indeed land on the same physical disks, which is not ideal. With 28 reading drives it actually reads faster, but writes slower than the config above.
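The drive counts in the two layouts can be tallied with a small sketch (pure spindle arithmetic, ignoring 3PAR's chunklet-level wide striping):

```python
# Count total disks, data drives, and parity drives for `sets`
# RAID 6 sets of `data`+2 each.
def raid6_layout(sets: int, data: int, parity: int = 2) -> dict:
    return {
        "disks": sets * (data + parity),
        "data_drives": sets * data,
        "parity_drives": sets * parity,
    }

print(raid6_layout(4, 6))    # four 6+2 sets
print(raid6_layout(2, 14))   # two 14+2 sets
```

Note that four 6+2 sets account for 32 drive positions, not all 36 HDDs; on a 3PAR the chunklets are wide-striped across every spindle in the CPG anyway.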

I think you made a nice point here. I'm not going to play with it.

Martijn

Author:  MammaGutt [ Tue Jun 25, 2019 4:50 am ]
Post subject:  Re: add and remove tier 3 to AO

godfather007 wrote:
6+2 and 6+2 and 6+2 and 6+2 (36) can act in parallel as four RAID 0-striped RAID 6 sets, which is better for writing. But writing is done on the SSD CPG anyway. Effectively there are 24 simultaneously reading drives here.

14+2 and 14+2 (32) does indeed land on the same physical disks, which is not ideal. With 28 reading drives it actually reads faster, but writes slower than the config above.

I think you made a nice point here. I'm not going to play with it.

Martijn


Yes, 6+2 (x4) aligns with the physical number of disks, so you write full sets; for purely sequential writes there might be a difference. But once you start doing random reads or overwriting existing blocks, it all changes, because you read/write from whichever physical drives contain the data. On a 3PAR it isn't as simple as saying disks 0 to 7 are always used in the same 6+2 set, because it randomizes the chunklets when wide-striping to prevent hot spots.

I can fully understand the wish to increase the set size to improve capacity efficiency, and you can do that simply by changing the set size on the existing CPG and running tunesys. Just monitor the progress so you don't run out of space while it is converting; if it starts getting full, cancel the tunesys task, run compactcpg, and restart tunesys. Just remember: bigger set size = bigger rebuild after a disk failure.
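The "bigger set size = bigger rebuild" point can be roughed out numerically. This is a simplified model assumed here, not taken from HPE documentation: it supposes each lost RAID 6 stripe unit is reconstructed by reading k surviving units, and it ignores the many-to-many chunklet rebuild a 3PAR actually performs.

```python
# Rough estimate of data read to rebuild one failed drive in a
# RAID 6 k+2 layout: each lost stripe unit is rebuilt from k
# surviving units, so about k x the failed drive's capacity is read.
def rebuild_read_tb(failed_tb: float, k: int) -> float:
    return k * failed_tb

for k in (6, 14):
    print(f"{k}+2: ~{rebuild_read_tb(1.8, k):.1f} TB read to rebuild a 1.8 TB drive")
```

Under this model, the same failed 1.8 TB drive costs roughly 10.8 TB of reads at 6+2 versus 25.2 TB at 14+2, which is the rebuild penalty being traded for the capacity gain.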
