SSD RAID 6 - 10+2 or 22+2 on a 3PAR 8440

msarro
Posts: 13
Joined: Mon Nov 06, 2017 12:53 pm

SSD RAID 6 - 10+2 or 22+2 on a 3PAR 8440

Post by msarro »

Hey everyone.

We are in the process of making some final config decisions for our 3PARs.

We are going to be using 24x 400GB multi-use 3 SSDs in a single shelf to start.

We are going to be using the CPG to host virtual infrastructure (management storage, VM data, snapshots, templates). All volumes will have remote copies stored on a second 3PAR at the site (synchronous for management and VM data, asynchronous for the rest).

According to the best practice document, we should use the default configured CPGs:

Solid-state drive CPGs
Best practice: Solid-state drive (SSD) CPGs should be of the RAID 6 type with the default set size. This will bring superior performance/capacity
ratio and 7+1 on systems that have enough SSDs to support 7+1. If maximum performance is required, use RAID 1.

The default CPG config is 10+2 RAID 6 or 4+2 RAID 6. Is there a good reason NOT to use 22+2 RAID 6 instead? That should give us more drives per stripe for better performance. The extra speed of the SSDs should limit our exposure to failure during rebuilds, and more disks for many-to-one rebuilds should be a good thing.

I know that on traditional SANs with RAID 6, a blast zone of 24 disks is way too big. But since 3PAR is "different", I know that relying on traditional SAN design rules often isn't the best idea.
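For reference, here is the back-of-the-envelope math I am using to compare the set sizes - a rough Python sketch that only counts parity overhead (no sparing or system overhead):

# Compare RAID 6 set sizes on 24 x 400 GB SSDs.
# Raw parity overhead only; sparing, logging and metadata are ignored,
# so real usable capacity will be lower.
DRIVES = 24
DRIVE_GB = 400

for data, parity in [(4, 2), (10, 2), (22, 2)]:
    set_size = data + parity
    efficiency = data / set_size
    usable_gb = DRIVES * DRIVE_GB * efficiency
    print(f"{data}+{parity}: {efficiency:.1%} usable, ~{usable_gb:,.0f} GB before overhead")

That works out to roughly 6,400 GB at 4+2, 8,000 GB at 10+2 and 8,800 GB at 22+2 before any overhead.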

Any tips would be very much appreciated.
MammaGutt
Posts: 1578
Joined: Mon Sep 21, 2015 2:11 pm
Location: Europe

Re: SSD RAID 6 - 10+2 or 22+2 on a 3PAR 8440

Post by MammaGutt »

I don't think you can go that high. I think R5 7+1 and R6 14+2 are the biggest.

If I were you I'd look at 10+2. Then each stripe will use the 12 SSDs "owned" by each node. You could go higher and utilize express layout, but then you wouldn't get 2 full stripes per 1 I/O to each SSD.
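If it helps, here is a quick sanity check of which set sizes stay within one node's 12 SSDs - a rough Python sketch, and treat the 14+2 maximum as my assumption from above rather than a spec:

# Assumes 24 SSDs split 12 per node and a RAID 6 maximum set size of
# 14+2 (both assumptions from this thread, not official limits).
drives_per_node = 12
max_set_size = 14 + 2

for data, parity in [(4, 2), (10, 2), (14, 2), (22, 2)]:
    set_size = data + parity
    if set_size > max_set_size:
        note = "bigger than the largest set size I believe is supported"
    elif set_size <= drives_per_node:
        note = "fits within the 12 SSDs one node owns"
    else:
        note = "needs express layout (stripe spans both nodes' SSDs)"
    print(f"{data}+{parity} (set size {set_size}): {note}")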
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
msarro
Posts: 13
Joined: Mon Nov 06, 2017 12:53 pm

Re: SSD RAID 6 - 10+2 or 22+2 on a 3PAR 8440

Post by msarro »

MammaGutt wrote:I don't think you can go that high. I think R5 7+1 and R6 14+2 are the biggest.

If I were you I'd look at 10+2. Then each stripe will use the 12 SSDs "owned" by each node. You could go higher and utilize express layout, but then you wouldn't get 2 full stripes per 1 I/O to each SSD.


Thank you for the suggestion!

I am still a 3PAR novice - if I have a CPG with a set size of 10+2, that means I will only have a total amount of space available to a VV equal to 10*N, where N is the size of the disks, right?
MammaGutt
Posts: 1578
Joined: Mon Sep 21, 2015 2:11 pm
Location: Europe

Re: SSD RAID 6 - 10+2 or 22+2 on a 3PAR 8440

Post by MammaGutt »

msarro wrote:Thank you for the suggestion!

I am still a 3PAR novice - if I have a CPG with a set size of 10+2, that means I will only have a total amount of space available to a VV equal to 10*N, where N is the size of the disks, right?


Well, almost. Some GBs go to logging, System Reporter and other stuff. And you lose space to sparing (to handle failed drives until they are replaced). I think the minimum sparing would be 10% with the 400GB drives.
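To put rough numbers on it for your 24 x 400GB drives at 10+2 - a quick sketch only, the ~10% sparing is just my estimate above and the logging/System Reporter space comes on top of this:

# Rough usable-capacity estimate: 24 x 400 GB SSDs, RAID 6 10+2.
# The ~10% sparing figure is an estimate; logging and System Reporter
# space is not included and varies per system.
raw_gb = 24 * 400                     # 9600 GB raw
after_sparing = raw_gb * 0.90         # reserve roughly 10% for spare chunklets
data_fraction = 10 / (10 + 2)         # RAID 6 10+2 efficiency
usable_gb = after_sparing * data_fraction
print(f"~{usable_gb:.0f} GB usable before logging/metadata")

So somewhere around 7,200 GB, give or take.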

Just curious, why not look at 1.92TB drives or something like that? Price per GB would be lower, but you also get slightly lower performance...
Btw, are you looking at a 2-node or 4-node 8440? If you're looking at a 4-node, my suggestion might not be an option, as you would need 6 or 8 SSDs per node pair. You also can't span a stripe across node pairs, so with 4 nodes and 24 SSDs, 10+2 would be the max with express layout.
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.
msarro
Posts: 13
Joined: Mon Nov 06, 2017 12:53 pm

Re: SSD RAID 6 - 10+2 or 22+2 on a 3PAR 8440

Post by msarro »

MammaGutt wrote:Just curious, why not look at 1.92TB drives or something like that? Price per GB would be lower, but you also get slightly lower performance...
Btw, are you looking at a 2-node or 4-node 8440?



Re: the 400GB disks vs. 1.92TB - that was a Layer 8 decision; we were overruled. Short-term savings at the expense of expansion costs down the road :cry:

We are looking at the 2-node 8440.
JohnMH
Posts: 505
Joined: Wed Nov 19, 2014 5:14 am

Re: SSD RAID 6 - 10+2 or 22+2 on a 3PAR 8440

Post by JohnMH »

The set size doesn't limit the number of drives you can address; it's just related to the protection overhead. With 10+2, each node will assemble chunklets and build LDs of that size at the back end, so as a minimum this will be 2 x 10+2. At the front end, however, this will appear as contiguous space. So you can create a volume of any supported size, and it will automatically be spread across the LDs and so touch all drives and capacity across both nodes.

From a capacity perspective, think of it like 20+4.
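To put rough numbers on that - a quick sketch of the geometry only, ignoring the sparing and logging overhead mentioned earlier:

# With a 10+2 set size on 24 x 400 GB SSDs split across two nodes, each
# node builds 10+2 LDs from its 12 drives and the front end sees the
# combined space - the same capacity ratio as a single 20+4.
drive_gb = 400
drives_per_node = 12
data, parity = 10, 2

per_node_gb = drives_per_node * drive_gb * data / (data + parity)
total_gb = per_node_gb * 2
equiv_20_4_gb = 24 * drive_gb * 20 / 24

print(f"Per node: {per_node_gb:.0f} GB of data capacity, front end sees {total_gb:.0f} GB")
print(f"Same ratio as 20+4 across all 24 drives: {equiv_20_4_gb:.0f} GB")

Both come out to 8,000 GB before sparing and system overhead.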