HPE Storage Users Group

A Storage Administrator Community




 Post subject: CPG set size after adding enclosure
PostPosted: Thu Oct 19, 2017 11:24 am 

Joined: Wed Jul 06, 2016 11:31 pm
Posts: 17
My DR site has a 7400 2-node with 3 disk enclosures. All disk slots are occupied, for a total of 96 SAS disks. Currently my CPG is set to HA Cage with a set size of 3+1.

I'm going to add a new disk enclosure with an additional 12 disks. Do I need to change the set size to 4+1 and run tunesys?


 Post subject: Re: CPG set size after adding enclosure
PostPosted: Fri Oct 20, 2017 3:21 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
shang83 wrote:
My DR site has a 7400 2-node with 3 disk enclosures. All disk slots are occupied, for a total of 96 SAS disks. Currently my CPG is set to HA Cage with a set size of 3+1.

I'm going to add a new disk enclosure with an additional 12 disks. Do I need to change the set size to 4+1 and run tunesys?


You should not change your set size. If you change to 4+1 with HA Cage, you will only be able to create 12 stripes, as you only have 12 disks in the final cage. That would prevent you from accessing half of the capacity in the first 4 cages.

You should run tunesys to spread the existing data across the new drives and balance the load.
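
For reference, the rebalance itself is a one-liner from the CLI; roughly like this (the monitoring commands are standard CLI, but verify the output on your InForm OS version):

cli% tunesys              # rebalance chunklets across all PDs, including the new ones
cli% showtask -active     # the tune runs as a background task; watch its progress here
cli% showpd -space        # compare per-PD space usage before and after the tune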

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


 Post subject: Re: CPG set size after adding enclosure
PostPosted: Tue Oct 24, 2017 1:02 am 

Joined: Wed Jul 06, 2016 11:31 pm
Posts: 17
MammaGutt wrote:
shang83 wrote:
My DR site has a 7400 2-node with 3 disk enclosures. All disk slots are occupied, for a total of 96 SAS disks. Currently my CPG is set to HA Cage with a set size of 3+1.

I'm going to add a new disk enclosure with an additional 12 disks. Do I need to change the set size to 4+1 and run tunesys?


You should not change your set size. If you change to 4+1 with HA Cage, you will only be able to create 12 stripes, as you only have 12 disks in the final cage. That would prevent you from accessing half of the capacity in the first 4 cages.

You should run tunesys to spread the existing data across the new drives and balance the load.


If the CPG is configured with HA Cage, does that mean that with 1 controller enclosure and 4 disk enclosures I would need to configure the set size to 4+1?

Or does the set size setting apply only to the disk drives?


 Post subject: Re: CPG set size after adding enclosure
PostPosted: Tue Oct 24, 2017 3:09 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
Set size actually applies at the chunklet level.

If you change to 4+1 and only add half the number of disks in the 5th cage, the system will be unable to allocate more space once those 12 drives are full, because it can no longer find free chunklets in 5 different cages.
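
A rough illustration with the numbers from this thread:

cages 0-3: 24 disks each; cage 4: 12 disks
3+1 (HA Cage): each set needs chunklets in 4 different cages -> cage 4 not required
4+1 (HA Cage): each set needs chunklets in 5 different cages -> every set uses cage 4
cage 4 has half the disks of the others, so its free chunklets run out when
cages 0-3 are only about half used; after that no further 4+1 sets can be
formed and the remaining space in cages 0-3 is stranded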

The 3PAR Storage Concepts Guide has a somewhat decent explanation under Chunklets and Logical Disks.

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


 Post subject: Re: CPG set size after adding enclosure
PostPosted: Wed Oct 25, 2017 12:40 am 

Joined: Fri Jun 27, 2014 2:01 am
Posts: 390
It's a pity that 96 + 12 isn't evenly divisible by (5 * 2)...
Otherwise you could have moved some pairs of disks from the first 4 cages to the fifth one to get the same count of disks per cage...
14 new disks rather than 12 would have fit perfectly.

We usually proceed as follows:
- move a disk from cage 0, even slot (slot 22, for example), to cage 4, even slot (14)
- run showpd -degraded; only 1 drive should show as degraded
- wait 40 seconds to 1 minute
- run showpd -degraded again; if the drive is still degraded, wait a little longer; once no disk is degraded, go to the next step
- move a disk from cage 0, odd slot (slot 23, for example), to cage 4, odd slot (15)
- and so on...

Once every cage has the same even count of disks, we run tunesys -f -chunkpct 1 -nodepct 1 -maxtasks 2 (or more tasks if the array is otherwise idle).
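
For what it's worth, the wait-and-check loop above can be scripted. A minimal sketch, assuming password-less SSH to the array's CLI user (the hostname and the grep test on the showpd output are assumptions; adjust to your environment):

#!/bin/sh
# Sketch only: wait for the array to re-admit a relocated drive before
# pulling the next one, then rebalance. ARRAY is a hypothetical hostname;
# the grep pattern assumes degraded PDs print as numbered table rows.

ARRAY=3par-dr

wait_for_readmit() {
    while ssh "$ARRAY" showpd -degraded | grep -q '^ *[0-9]'; do
        echo "drive still degraded, waiting..."
        sleep 40
    done
}

# physically move cage 0 slot 22 -> cage 4 slot 14, then:
wait_for_readmit
# physically move cage 0 slot 23 -> cage 4 slot 15, then:
wait_for_readmit
# ...repeat for the remaining pairs...

# once every cage has the same even drive count, rebalance:
ssh "$ARRAY" tunesys -f -chunkpct 1 -nodepct 1 -maxtasks 2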

On a system with more than 2 nodes you have to be careful about the target cage: drives can only be moved between cages attached to the same node pair. On recent InForm OS versions (tested above 3.2.1), nothing bad will happen if you try to move disks between different node pairs: the disk stays degraded, and if you run admithw you get a message about a disk that can't be admitted.
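
If in doubt, showcage should tell you which node ports each cage is cabled to before you move anything (check the LoopA/LoopB columns; the exact output varies by version):

cli% showcage             # the LoopA/LoopB columns identify the owning node pair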

So far this is not really supported by HPE... they'd rather sell you a bunch of disks :D



