3PAR Users Group

A Storage Administrator Community




 Post subject: SSD Raid 6 - 10+1 or 22+2 on a 3PAR 8440
PostPosted: Mon Nov 06, 2017 1:09 pm 

Joined: Mon Nov 06, 2017 12:53 pm
Posts: 3
Hey everyone.

We are in the process of making some final config decisions for our 3PARs.

We are going to be using 24x 400GB multi-use 3 SSDs in a single shelf to start.

We are going to be using the CPG to host virtual infrastructure (management storage, VM data, snapshots, templates). All volumes will have remote copies stored on a second 3PAR at the site (synchronous for management and VM data, asynchronous for the rest).

According to the best practice document, we should use the default configured CPGs:

Quote:
Solid-state drive CPGs
Best practice: Solid-state drive (SSD) CPGs should be of the RAID 6 type with the default set size. This will bring superior performance/capacity
ratio and 7+1 on systems that have enough SSDs to support 7+1. If maximum performance is required, use RAID 1.

The default CPG config is RAID 6 10+2 or RAID 6 4+2. Is there a good reason NOT to use RAID 6 22+2 instead? That would give us more spindles for better performance. The extra speed of the SSDs should limit our exposure to failure during rebuilds, and more disks participating in many-to-one rebuilds should be a good thing.

I know that with traditional SANs running RAID 6, a blast radius of 24 disks is way too big. But since 3PAR is "different," I know that relying on traditional SAN designs often isn't the best idea.
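For what it's worth, the capacity side of the trade-off is easy to put numbers on. A quick sketch (set sizes taken from this discussion; sparing and system overhead ignored):

```python
# Usable-capacity fraction for the RAID 6 set sizes under discussion.
# Illustrative arithmetic only; ignores sparing and system overhead.
for data, parity in [(4, 2), (10, 2), (14, 2), (22, 2)]:
    efficiency = data / (data + parity)
    print(f"RAID 6 {data}+{parity}: {efficiency:.1%} of raw capacity usable")
```

So going from 10+2 to 22+2 only buys about 8 percentage points of capacity efficiency, which is worth weighing against the larger failure domain.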

Any tips would be very much appreciated.


 Post subject: Re: SSD Raid 6 - 10+1 or 22+2 on a 3PAR 8440
PostPosted: Mon Nov 06, 2017 1:24 pm 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 454
I don't think you can go that high. I think R5 7+1 and R6 14+2 are the largest.

If I were you I'd look at 10+2. Then each stripe will use the 12 SSDs "owned" by each node. You could go higher and utilize express layout, but then you wouldn't get 2 full stripes per IO to each SSD.


 Post subject: Re: SSD Raid 6 - 10+1 or 22+2 on a 3PAR 8440
PostPosted: Mon Nov 06, 2017 2:13 pm 

Joined: Mon Nov 06, 2017 12:53 pm
Posts: 3
MammaGutt wrote:
I don't think you can go that high. I think R5 7+1 and R6 14+2 are the largest.

If I were you I'd look at 10+2. Then each stripe will use the 12 SSDs "owned" by each node. You could go higher and utilize express layout, but then you wouldn't get 2 full stripes per IO to each SSD.


Thank you for the suggestion!

I am still a 3PAR novice - if I have a CPG with a set size of 10+2, does that mean the total space available to a VV will be 10*N, where N is the size of the disks?


 Post subject: Re: SSD Raid 6 - 10+1 or 22+2 on a 3PAR 8440
PostPosted: Mon Nov 06, 2017 2:50 pm 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 454
msarro wrote:
MammaGutt wrote:
I don't think you can go that high. I think R5 7+1 and R6 14+2 are the largest.

If I were you I'd look at 10+2. Then each stripe will use the 12 SSDs "owned" by each node. You could go higher and utilize express layout, but then you wouldn't get 2 full stripes per IO to each SSD.


Thank you for the suggestion!

I am still a 3PAR novice - if I have a CPG with a set size of 10+2, does that mean the total space available to a VV will be 10*N, where N is the size of the disks?


Well, almost. Some GBs go to logging, System Reporter and other things. And you lose space to sparing (to cover failed drives until they are replaced). I think the minimum sparing would be 10% with the 400GB drives.

Just curious: why not look at 1.92TB drives or something like that? The price per GB would be lower, but you'd also have slightly lower performance...
Btw, are you looking at a 2-node or 4-node 8440? If you're looking at 4-node, my option might not be an option, as you would need 6 or 8 SSDs per node pair. You also can't span a stripe across node pairs, so with 4 nodes and 24 SSDs, 10+2 would be the max with express layout.


 Post subject: Re: SSD Raid 6 - 10+1 or 22+2 on a 3PAR 8440
PostPosted: Mon Nov 06, 2017 3:29 pm 

Joined: Mon Nov 06, 2017 12:53 pm
Posts: 3
MammaGutt wrote:
msarro wrote:
MammaGutt wrote:
I don't think you can go that high. I think R5 7+1 and R6 14+2 are the largest.

If I were you I'd look at 10+2. Then each stripe will use the 12 SSDs "owned" by each node. You could go higher and utilize express layout, but then you wouldn't get 2 full stripes per IO to each SSD.


Thank you for the suggestion!

I am still a 3PAR novice - if I have a CPG with a set size of 10+2, does that mean the total space available to a VV will be 10*N, where N is the size of the disks?


Well, almost. Some GBs go to logging, System Reporter and other things. And you lose space to sparing (to cover failed drives until they are replaced). I think the minimum sparing would be 10% with the 400GB drives.

Just curious: why not look at 1.92TB drives or something like that? The price per GB would be lower, but you'd also have slightly lower performance...
Btw, are you looking at a 2-node or 4-node 8440? If you're looking at 4-node, my option might not be an option, as you would need 6 or 8 SSDs per node pair. You also can't span a stripe across node pairs, so with 4 nodes and 24 SSDs, 10+2 would be the max with express layout.



Re: the 400GB disks vs. 1.92TB - that was a Layer 8 decision, we were overruled. Short-term savings at the expense of expansion costs down the road :cry:

We are looking at the 2-node 8440.


 Post subject: Re: SSD Raid 6 - 10+1 or 22+2 on a 3PAR 8440
PostPosted: Tue Nov 07, 2017 4:42 am 

Joined: Wed Nov 19, 2014 5:14 am
Posts: 451
The set size doesn't limit the number of drives you can address; it just determines the protection overhead. With 10+2, each node will assemble chunklets and build LDs of that set size at the back end, so at a minimum this will be 2 x 10+2. At the front end, however, this appears as contiguous space, so you can create a volume of any supported size and it will automatically be spread across LDs, touching all drives and capacity across both nodes.

From a capacity perspective, think of it as 20+4.
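To put rough numbers on that, here's a back-of-the-envelope sketch assuming the 24 x 400GB shelf and the ~10% sparing figure mentioned earlier. Real overhead for logging, System Reporter, etc. will reduce this a bit further:

```python
# Rough usable-capacity estimate for 24 x 400GB SSDs with RAID 6 10+2
# per node, i.e. "20+4" across both nodes. Illustrative figures only.
DRIVES = 24
DRIVE_GB = 400            # GB per SSD
SPARING = 0.10            # assumed minimum sparing for 400GB drives
DATA_FRACTION = 20 / 24   # 20 data chunklets out of every 24

raw_gb = DRIVES * DRIVE_GB                  # 9600 GB raw
after_sparing_gb = raw_gb * (1 - SPARING)   # 8640 GB after sparing
usable_gb = after_sparing_gb * DATA_FRACTION
print(f"approx. usable: {usable_gb:.0f} GB")  # ~7200 GB before system overhead
```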

