HPE Storage Users Group
https://3parug.com/

HA (Cage) availability
https://3parug.com/viewtopic.php?f=18&t=652
Page 2 of 2

Author:  Richard Siemers [ Fri Apr 18, 2014 3:06 pm ]
Post subject:  Re: HA (Cage) availability

Andy wrote:
So in IMC I click on the CPG and then the Settings tab and see:

Availability
Requested cage
Current cage

So this means I should be able to lose an entire cage and not have data loss?


Correct.

Author:  rotorhead1 [ Tue Apr 22, 2014 2:16 pm ]
Post subject:  Re: HA (Cage) availability

I agree with Cleanur, there may be instances where you don't really care about cage availability. In some instances I couldn't care less about some snapshots, as long as the parent is HA Cage. Some snaps, though, we use for syncing production data, so those are HA Cage.

So I guess it really depends on criticality of data vs. optimized space management. The good thing is you can tune the system without taking it offline or causing too much performance impact.

jd

Author:  Marc.mvh.vanhoof [ Fri Dec 22, 2017 3:39 am ]
Post subject:  Re: HA (Cage) availability

Is there any good documentation available concerning cage HA and the 20850 system? Is there a calculator available to check what hardware is needed in our systems for RAID 5 (7+1)? Since compaction on 3PAR is not that evident, I wouldn't like to lose more net capacity due to a change in RAID protection (for example, a change to RAID 5 (3+1)). Is RAID 6 (best practice since 3.3.1) a better option?

Another remark on cage HA: after 30+ years as a storage admin for Hitachi, IBM, and EMC hardware, I never had to worry about disk/cage/shelf protection. Even for the IBM XIV storage subsystems, losing a module with 12 disks was never an issue (of course that is RAID 1 and cannot be compared with RAID 5). What can go wrong with a 3PAR enclosure? Isn't this static hardware? And if all data is also remotely, synchronously protected, is cage HA still that important? Other vendors have real active/active solutions, which makes disk-enclosure availability even less important.

Author:  JohnMH [ Fri Dec 22, 2017 5:48 am ]
Post subject:  Re: HA (Cage) availability

I believe it's documented in the HPE 3PAR Concepts Guide; a calculator is available to HPE and VARs to assist you in sizing this.

Today, the number of cages and disks required for cage HA depends on the number of nodes involved, the RAID type, and the stripe size.

e.g.
2 nodes and 7+1 requires 8 cages per node pair, since you need to stripe vertically and can only lose one disk per stripe in RAID 5 (multiple stripes per cage).
2 nodes and 14+2 requires the same 8 cages per node pair, since you can lose two disks per stripe in RAID 6. If more cages are available, it will use those as well.

Note the below table is per node pair.

Attachment: cages.png (table of cages required, per node pair)
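The cage arithmetic above can be sketched in a few lines. This is my own simplified model, not HPE tooling: it assumes that for cage HA each cage may hold at most as many members of a stripe as the RAID level can lose, so the minimum cage count is the stripe width divided by the parity (failure-tolerance) count, rounded up.

```python
import math

def cages_for_cage_ha(data_disks: int, parity_disks: int) -> int:
    """Minimum cages per node pair under a simplified cage-HA model:
    at most `parity_disks` members of any stripe may share one cage,
    so losing a whole cage stays within the stripe's failure tolerance."""
    stripe_width = data_disks + parity_disks
    return math.ceil(stripe_width / parity_disks)

# RAID 5 7+1: tolerates 1 failure per stripe -> 8 cages
print(cages_for_cage_ha(7, 1))   # 8
# RAID 6 14+2: tolerates 2 failures per stripe -> also 8 cages
print(cages_for_cage_ha(14, 2))  # 8
```

This matches the two examples in the post: 7+1 and 14+2 both land on 8 cages per node pair, which is why moving to RAID 6 at the wider stripe costs no extra cages.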


Typically your disks would be added per node to match the stripe size, meaning 2 nodes x 7+1 = 16 drives. There are exceptions if you are starting fairly small, in which case you can make use of the express layout, which allows both nodes to share the chunklets across fewer disks.

For parity overhead calculations, divide 100 by the stripe width and multiply by the number of parity drives:

e.g.
RAID 5 3+1 = 100/4 = 25%
RAID 6 6+2 = 100/8 = 12.5% x 2 parity = 25%
RAID 5 7+1 = 100/8 = 12.5%
RAID 6 14+2 = 100/16 = 6.25% x 2 parity = 12.5%
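The overhead figures above reduce to one expression (the function name is mine, just for illustration): parity capacity as a percentage of the whole stripe.

```python
def parity_overhead_pct(data_disks: int, parity_disks: int) -> float:
    """Parity overhead as a percentage of raw stripe capacity:
    each drive is 100/width percent, times the number of parity drives."""
    width = data_disks + parity_disks
    return 100.0 * parity_disks / width

# Reproduces the four worked examples above:
print(parity_overhead_pct(3, 1))   # 25.0  (RAID 5 3+1)
print(parity_overhead_pct(6, 2))   # 25.0  (RAID 6 6+2)
print(parity_overhead_pct(7, 1))   # 12.5  (RAID 5 7+1)
print(parity_overhead_pct(14, 2))  # 12.5  (RAID 6 14+2)
```

Note how each RAID 6 stripe at double the width has the same overhead as its RAID 5 counterpart, which is the trade-off being weighed in this thread.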

Note that although the larger stripes have less parity overhead, they require more cages and larger upgrade increments (in terms of disks) in order to maintain cage HA across all capacity.

Port- and cage-level availability are above and beyond what most other systems can provide, but they're not mandatory, and you can have a mix of CPGs with different layouts depending on the data's value.

Personally I've never seen a hard failure of a cage, but I have seen a case on a non-3PAR system where a firmware update left both I/O modules on an enclosure inoperable. I've also seen a single disk slot on an enclosure fail in another type of array, which ultimately required downtime to replace the entire enclosure. Similarly, I've seen customers want to remove cages (especially NL) or shuffle cages between racks; AFAIK that isn't possible even on enterprise systems like HDS and VMAX. As you say, XIV is effectively RAID 1(x), so this is less of an issue, but mirroring requires a large capacity overhead, which doesn't tend to work for flash. If everything is mirrored to another site and your DR plan is fairly slick and automated, then it's probably less of a worry.
