3PAR Users Group

A Storage Administrator Community

Post new topic Reply to topic  [ 14 posts ]  Go to page Previous  1, 2
Author Message
 Post subject: Re: HA (Cage) availability
PostPosted: Fri Apr 18, 2014 3:06 pm 
Site Admin
User avatar

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1166
Location: Dallas, Texas
Andy wrote:
So in IMC I click on the CPG and then the Settings tab and see:

Requested cage
Current cage

So this means I should be able to lose an entire cage and not have data loss?


Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.

 Post subject: Re: HA (Cage) availability
PostPosted: Tue Apr 22, 2014 2:16 pm 

Joined: Thu Apr 19, 2012 9:29 am
Posts: 41
I agree with Cleanur, there may be instances where you don't really care about cage availability. In some instances I couldn't care less about some snapshots as long as the parent is HA Cage. Some snaps, though, we use for syncing production data, so those are HA Cage.

So I guess it really depends on the criticality of the data vs. optimized space management. The good thing is you can tune the system without taking it offline or without much performance impact.


 Post subject: Re: HA (Cage) availability
PostPosted: Fri Dec 22, 2017 3:39 am 

Joined: Sat Jan 07, 2017 3:50 am
Posts: 17
Any good documentation available concerning cage HA and the 20850 system? Is a calculator available to check what hardware is needed in our systems for RAID 5 (7+1)? Since compaction on 3PAR is not that evident, I wouldn't like to lose more net capacity due to a change in RAID protection (e.g. a change to RAID 5 (3+1)). Is RAID 6 (best practice since 3.3.1) a better option?
Another remark on cage HA. After many years (30+) being a storage admin for Hitachi, IBM and EMC hardware, I never had to worry about disk/cage/shelf protection. Even for the IBM XIV storage subsystems, losing a module with 12 disks was never an issue (of course that is RAID 1 and one cannot compare it with RAID 5). What can go wrong with a 3PAR enclosure? Isn't this static hardware? And if all data is also remotely, synchronously protected, is cage HA still that important? Other vendors have real active/active solutions, which makes disk-enclosure availability even less important.

 Post subject: Re: HA (Cage) availability
PostPosted: Fri Dec 22, 2017 5:48 am 

Joined: Wed Nov 19, 2014 5:14 am
Posts: 494
I believe it's documented in the HPE 3PAR Concepts Guide; a calculator is available to HPE and VARs to assist you in sizing this.

Today the number of cages and disks required for cage HA depends on the number of nodes involved, the RAID type, and the stripe size.

2 nodes and 7+1 requires 8 cages per node pair, since you need to stripe vertically and can only lose one disk per stripe in RAID 5 (multiple stripes per cage).
2 nodes and 14+2 requires the same 8 cages per node pair, since you can lose two disks per stripe in RAID 6. Although if more cages are available, it will use those as well.
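If you want to sanity-check the cage counts, here's a quick sketch (my own illustration, not an HPE tool): cage HA must survive the loss of one whole cage, so a single cage can hold at most as many members of one RAID set as the set can lose (1 for RAID 5, 2 for RAID 6).

```python
import math

def min_cages(data: int, parity: int) -> int:
    """Minimum cages per node pair for cage HA (illustrative).

    A cage may hold at most `parity` members of any one RAID set,
    since losing the cage must not exceed the set's failure tolerance
    (1 disk for RAID 5, 2 disks for RAID 6).
    """
    return math.ceil((data + parity) / parity)

print(min_cages(7, 1))   # RAID 5 7+1  -> 8
print(min_cages(14, 2))  # RAID 6 14+2 -> 8
print(min_cages(3, 1))   # RAID 5 3+1  -> 4
```

This matches the numbers above: 14+2 needs no more cages than 7+1 because RAID 6 tolerates two failures per stripe.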

Note the below table is per node pair.

[Attachment: cages.png — table of cage counts per node pair]

Typically your disks would be added per node to match the stripe size, meaning 2 nodes x 7+1 = 16 drives. There are exceptions if you are starting fairly small, in which case you can make use of express layout, which allows both nodes to share the chunklets across fewer disks.
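The same upgrade-increment arithmetic as a one-liner (my own sketch; it ignores the express-layout exception mentioned above):

```python
def upgrade_increment_disks(nodes: int, data: int, parity: int) -> int:
    """Disks per upgrade to keep cage HA across all capacity:
    one full stripe (data + parity) per node. Illustrative only;
    express layout on small systems can reduce this.
    """
    return nodes * (data + parity)

print(upgrade_increment_disks(2, 7, 1))  # 2 nodes x 7+1 -> 16 drives
```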

For parity overhead calculations, you just need to divide 100 by the stripe size:

RAID 5 3+1 = 100/4 = 25%
RAID 6 6+2 = 100/8 = 12.5% x 2 parity = 25%
RAID 5 7+1 = 100/8 = 12.5%
RAID 6 14+2 = 100/16 = 6.25% x 2 parity = 12.5%
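The overhead figures above reduce to parity disks divided by stripe width, which is easy to check (my own sketch):

```python
def parity_overhead(data: int, parity: int) -> float:
    """Fraction of raw capacity consumed by parity: parity / stripe width."""
    return parity / (data + parity)

print(f"RAID 5 3+1:  {parity_overhead(3, 1):.1%}")   # 25.0%
print(f"RAID 6 6+2:  {parity_overhead(6, 2):.1%}")   # 25.0%
print(f"RAID 5 7+1:  {parity_overhead(7, 1):.1%}")   # 12.5%
print(f"RAID 6 14+2: {parity_overhead(14, 2):.1%}")  # 12.5%
```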

Note that although you have less parity overhead with the larger stripes, they mean more cages are required, and larger upgrade increments in terms of disks, in order to maintain cage HA across all capacity.

Port and cage level availability are above and beyond what most other systems can provide, but it's not mandatory, and you can have a mix of CPGs with different layouts depending on the value of the data.

Personally I've never seen a hard failure of a cage, but I have seen a case on a non-3PAR system where a firmware update left both I/O modules on an enclosure inoperable. I've also seen a single disk slot on an enclosure fail in another type of array, which ultimately required downtime to replace the entire enclosure. Similarly, I've seen customers want to remove cages (especially NL) or shuffle cages between racks; AFAIK that isn't possible even on enterprise systems like HDS and VMAX. But as you say, XIV is effectively RAID 1(x), so it's less of an issue there, although a large capacity overhead is required for mirroring, which doesn't tend to work well for flash. If everything is mirrored to another site and your DR plan is fairly slick and automated, then it's probably less of a worry.
