HPE Storage Users Group

A Storage Administrator Community




 Post subject: CPG Best practice
PostPosted: Mon May 09, 2011 8:27 am 

Joined: Mon May 09, 2011 8:18 am
Posts: 2
I am very new to the world of 3PAR. I am configuring a new array for use as the back-end storage for Microsoft Exchange. This array will also be used for a digital imaging system. I will have a total of 86 physical drives in the array. My question is: when I create CPGs, should I limit them to certain cages? If so, do I use the filtering field when I create the CPGs?

Thanks
ECB


 Post subject: Re: CPG Best practice
PostPosted: Mon May 09, 2011 10:49 pm 
Site Admin

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1328
Location: Dallas, Texas
Hello Enrique,

Here is the 3PAR implementation guide for Exchange:
Attachment: 3PAR Exchange Best Practices.pdf [419.13 KiB, downloaded 3300 times]


I would not filter CPGs by drive cage; instead, filter them by magazine position. That way your CPGs will stripe across cages for better performance and symmetry. If you choose to filter your CPGs to adhere to Microsoft best practices, I would filter half your magazine positions to one CPG and the other half to another CPG, then flip-flop data/logs between them:

For example:

CPG1: filtered to drives in magazine positions 0-4
-LUN1- EXCH DB1 DATA
-LUN3- EXCH DB2 LOG
...and so on

CPG2: filtered to drives in magazine positions 5-9
-LUN2- EXCH DB1 LOG
-LUN4- EXCH DB2 DATA
...and so on

CPG3: not filtered, so it uses all drives, positions 0-9
-LUNx- Imaging Data
-LUNx- Imaging Data
...and so on
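
In the InForm CLI, that layout would be created roughly as follows. Treat this as a sketch from memory: the CPG names are placeholders, I've assumed RAID 5 (pick whatever RAID type you settle on), and the -p pattern flags (here -mg for magazine) should be verified against the createcpg man page for your InForm OS release.

    # CPG filtered to drives in magazine positions 0-4, striped across cages
    createcpg -t r5 -ha cage -p -mg 0-4 CPG1

    # CPG filtered to drives in magazine positions 5-9
    createcpg -t r5 -ha cage -p -mg 5-9 CPG2

    # Unfiltered CPG: draws from all available drives
    createcpg -t r5 -ha cage CPG3

    # Sanity-check the resulting CPGs
    showcpg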


Talk to your 3PAR SE about his/her recommendations for CPG creation. The advice I got from mine was unconventional by Microsoft standards, but it has worked very well for us. We don't filter any of our CPGs; we let the wide striping do its job across all available spindles, and performance is excellent. We have multiple Windows systems that bottleneck at the host HBA... but we also have a great wall of spindles, 960 in total. What works for us may not apply to you, so leverage your SE for help.

_________________
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.


 Post subject: Re: CPG Best practice
PostPosted: Mon May 09, 2011 10:54 pm 
Site Admin

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1328
Location: Dallas, Texas
Another note about CPG creation is set size.

We use the largest set size possible that still allows "cage availability". This means we can lose an entire cage of disks without data loss. You can see these options in the "advanced" options of CPG creation.

We have 12 cages, but also have 2 node pairs (4 nodes total). Our production CPGs are RAID 5 with a set size of 6 (5+1). We also use the default of "fastest" chunklets, which uses the outer tracks of the spindle first.

For DEV/TEST LUNs, we use a RAID 5 set size of 9 (8+1) and the "slow" chunklets (inner tracks).
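
For reference, here is roughly how those two CPGs look as CLI commands. Consider it a sketch: I'm recalling the -ssz (set size), -ha (availability) and -ch (chunklet preference) flags from memory, and the names are placeholders, so check them against the createcpg documentation for your release.

    # Production: RAID 5 set size 6 (5+1), cage-level availability, fastest (outer) chunklets
    createcpg -t r5 -ssz 6 -ha cage -ch first CPG_PROD

    # Dev/test: RAID 5 set size 9 (8+1), slowest (inner) chunklets
    createcpg -t r5 -ssz 9 -ha cage -ch last CPG_DEVTEST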

_________________
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.


 Post subject: Re: CPG Best practice
PostPosted: Thu May 10, 2012 10:29 pm 

Joined: Mon Apr 09, 2012 8:51 pm
Posts: 9
Hi Richard,

Do you have any CPG best practices for Linux, like RHEL and Ubuntu, on 3PAR? I am configuring an F200, and all the servers are virtual machines on an Ubuntu host.


 Post subject: Re: CPG Best practice
PostPosted: Mon Jun 11, 2012 8:05 am 

Joined: Mon Oct 10, 2011 5:46 am
Posts: 7
Hi Richard,
we are using a NetApp V-Series as a gateway to our clients, which connect via NFS.
My question is: should I build a separate CPG for the Oracle redo log volumes?
To the best of my knowledge, the best practice for Oracle is to put the redo logs on the best-performing RAID, i.e. RAID 1.
Our main CPG for provisioning storage to a NetApp aggregate is built from RAID 5 15K FC disks with set size 3+1 (default); we currently have 8 cages.
So I thought of creating a new RAID 1 CPG, carving VVs from it, and building a RAID 1 aggregate on the NetApp; all redo-log volumes would then be created on that aggregate.
I wonder whether this approach might improve performance.

thanks,
Igal


 Post subject: Re: CPG Best practice
PostPosted: Wed Jun 13, 2012 4:22 am 
Site Admin

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1328
Location: Dallas, Texas
3PARNOFEAR,

How much total redo capacity do you need? My first concern is that it's a relatively small capacity which, when isolated into its own CPG, may use fewer spindles than the bigger CPG that uses all the spindles. Assuming you're on an S/T/V400/800 and 8 cages = 320 spindles, it would take approximately 40 GB of RAID 1 to include every spindle. If you're talking about F200 cages, it's even smaller. I would NOT thin provision LUNs assigned to the NetApp. While they work just fine, the NetApp is not a thin-friendly client, and in the process of doing its own housekeeping it will fully allocate your thin provisioned LUNs in a matter of days... not a technical issue, but it is a licensing issue, since the 3PARs are licensed for TP by capacity... so save your licensed TP gigabytes for clients that can actually use them.
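
If you do map thin provisioned VVs to the NetApp anyway, you can watch the reservation creep with showvv; the exact columns vary by InForm OS version, but something like this shows virtual size versus space actually reserved:

    # Compare each VV's virtual size against the user space actually reserved;
    # on a NetApp-backed TPVV, expect the reserved space to climb toward full size within days
    showvv -s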

If you have the space, cutting RAID 1 CPGs would certainly help tame any DBA resistance and fear of change that may exist. Adding a RAID 1 aggregate on top of that multiplies it out to 4 actual mirrors, though; I'm not sure if that's your intention or an oversight.

When we deployed 3PAR, they told us to throw the old "best practices" from Oracle and Microsoft out the window with regard to disk and let the CPG manage everything for us (no isolation of spindles for data, index, or logs), and that advice has served US well. Whitepapers/practices written for legacy disk-based RAID systems do not translate well into 3PAR/Compellent/XIV terms. As for net performance gains, I would expect some slight improvement visible with benchmarking tools, but I don't think it would be noticeable with a properly tuned Oracle instance.

If you have Dynamic Optimizer (which I think is a must; can't live without it), test with the redo logs on the main R5 CPG. Evaluate the results; if they are unacceptable, you can "DO" those LUNs over to a new R1 CPG seamlessly and transparently, then re-evaluate and compare to the previous baseline.
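
As a sketch of that workflow (placeholder names; tunevv requires a Dynamic Optimization license, so verify the exact syntax on your system):

    # Create the RAID 1 target CPG first
    createcpg -t r1 -ha cage CPG_REDO_R1

    # Move the redo-log VV's user space to the new CPG, online and non-disruptive
    tunevv usr_cpg CPG_REDO_R1 redo_vv_01

    # Watch the background tune task
    showtask -active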

We use RAID 0 aggregates in our V-Series and do all the RAID at the CPG level. However, we only serve up file storage from the NetApp; all block storage goes direct to 3PAR. All our Oracle instances use ASM and raw block storage over FC.

_________________
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.

