HPE Storage Users Group

A Storage Administrator Community




Post new topic Reply to topic  [ 9 posts ] 
 Post subject: Volume conventions
PostPosted: Mon Oct 14, 2013 4:04 am 

Joined: Mon Oct 14, 2013 3:40 am
Posts: 17
Hi team,

I was hoping to get some input on your naming conventions for volumes. The solution needs to support a number of separate clients running all sorts of Windows servers (Exchange, Remote Desktop, SQL, etc.). The environment is all VMware ESXi.

I've got the CPGs set up and split by disk type, and initially we've set up a volume for each virtual machine. The reasoning is that this provides granular control over which CPG each virtual machine sits in, and therefore maximum flexibility when it comes to Dynamic Optimisation. For example, if Client A's SQL server needed to sit on faster disks permanently, we could easily move it to a higher-tier CPG without impacting other clients. We are also using AO for dynamic workloads. This would also give us good reporting on how each volume/client/server is performing.

Current configuration is as follows:

CPGs -
Adaptive Optimisation running
AORUN_NL_R6_4PLUS2_MAG
AORUN_FC_R5_3PLUS1_MAG

Adaptive Optimisation analysing
AORUN_NL_R6_4PLUS2_MAG
AORUN_FC_R5_3PLUS1_MAG

Standard CPGs, no AO
NL_R6_4PLUS2_MAG
FC_R5_3PLUS1_MAG

Volumes -
CLIENT1_DC_01, contains 1 vm with 1 vmdk
CLIENT2_SQL_01, contains 1 vm with 3 vmdks
etc


The problem we are going to face is that once we hit 250-odd virtual machines we'll run out of LUN IDs, and ESXi will no longer recognise new volumes.

I considered having one volume per client, but that would be extremely wasteful if their SQL server was getting hit hard and we DO'd their whole volume to SAS/SSD. Alternatively, we considered one volume per server type, but again, if only one client's VM is getting flogged, then we'd be moving the entire server role to a different storage tier.

What are your suggestions around volumes?


Thanks team.


 Post subject: Re: Volume conventions
PostPosted: Mon Oct 14, 2013 6:51 am 

Joined: Sun Jul 29, 2012 9:30 am
Posts: 576
We name CPGs as follows:

DB_FC_R1_S2 (RAID 1 FC drives 2 drive set)
DB_FC_R5_S4 (RAID 5 FC Drives 4 drive set 3+1)
DB_NL_R5_S4 (RAID 5 NL Drives 4 drive set 3+1)
SNAP_DB_NL_R5_S4

We use similar conventions with a VM_ prefix for our VMs and AIX_ for our AIX LPARs.


We then set up AO policies for each group so we can tune each group to its specific needs.

I will post a volume convention shortly; I have a meeting to run to.


 Post subject: Re: Volume conventions
PostPosted: Mon Oct 14, 2013 9:40 am 
Site Admin
User avatar

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1328
Location: Dallas, Texas
For volume names, we have used HOSTNAME_LUN#_Description and it has served us well.

Some examples:

NTFWSQLD1_1_F (a Windows F: drive)
NTFWSQLD1_20_TEMP_DB (DB files)
NTFWSQLD1_21_TEMP_LG (DB Log files)

R6KORAD1_2_OraVG
R6KORAD1_5_ASM
R6KORAD1_10_FlashRecovery

VMWARE_1_DEV1 (datastore name)
VMWARE_2_PRD1 (datastore name)

VMWARE_29_VMFWFSP1_RDM_G

_________________
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.


 Post subject: Re: Volume conventions
PostPosted: Mon Oct 14, 2013 10:25 am 

Joined: Sun Jul 29, 2012 9:30 am
Posts: 576
Some thoughts on volume names. First, there is a 31-character limit, and certain snapshot operations will auto-generate a prefix of 6-7 characters (often invisible to you), so the best advice is to keep volume names such that the first 24 characters are unique system-wide. We got burned early on and had to rename numerous volumes en masse. I have been asking HP to remove these artificial character limits for a year; I suggest everyone ask HP to increase them.
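To keep ourselves honest about those limits, a quick check along these lines could help (a minimal sketch, not a 3PAR tool; the 31-character cap and 24-character uniqueness window are the ones described above):

```python
# Sanity-check proposed 3PAR volume names: max 31 characters, and the
# first 24 characters should be unique system-wide, because snapshot
# operations can auto-generate a 6-7 character prefix.

MAX_LEN = 31
UNIQUE_PREFIX = 24

def check_volume_names(names):
    """Return a list of (name, problem) tuples for offending names."""
    problems = []
    seen = {}  # maps 24-char prefix -> first name that used it
    for name in names:
        if len(name) > MAX_LEN:
            problems.append((name, "longer than %d characters" % MAX_LEN))
        prefix = name[:UNIQUE_PREFIX]
        if prefix in seen and seen[prefix] != name:
            problems.append(
                (name, "first %d chars clash with %s" % (UNIQUE_PREFIX, seen[prefix])))
        else:
            seen.setdefault(prefix, name)
    return problems
```

Run over an exported volume list, this flags both over-length names and pairs that only differ after character 24.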

We basically use the following naming conventions:

x_cc_ttt_nnnnnnn.y

x = platform (v = VM, p = pSeries)
cc = cluster (db = database, prd = production, dev = dev, inf = infrastructure, etc.)
ttt = type of volume (vg = volume group (AIX), ds = datastore (VMware), rdm = VMware RDM)
nnnnnnn = name (for AIX LPARs the LPAR name, for RDMs the system name the RDM is presented to, and for datastores the name of the datastore)
.y = sequence number or unique identifier (for RDMs and LPARs we typically just use 0-9, but for some servers we are experimenting with more descriptive sequence names such as .bu for a backup LUN, .log for a database log LUN, etc. The problem is staying within 31 characters with the first 24 being unique.)

For replication LUNs on the target side we use the exact same name with an R_ prefix, so we can tell just by looking at a volume name whether it is a replication target.

This gets tricky with snapshots:
- for snapshots we take for backups, I have a script that generates a unique name of the form VC-vvvv-MMDD_HHMMSSms, where vvvv is the volume ID of the base volume and the rest is a month/day/time stamp for uniqueness

- for snapshots we present out to a host, we use SS-P-nnnnnnn.y (the P- is for pSeries; we don't present snaps to VMs yet, but if we did they would be SS-V-nnnnnnn.y, and in both cases nnnnnnn.y follows the base volume naming convention). Because we have some volumes where we need to present out more than one snap, the .y is a description; since these are all AIX for now, the .y equals the VG name in AIX.
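To illustrate the backup-snapshot scheme, a name generator along those lines might look like this (a hedged sketch; my actual script, and the exact sub-second field, may differ):

```python
from datetime import datetime

def backup_snap_name(vv_id, when=None):
    """Build a VC-vvvv-MMDD_HHMMSSms snapshot name as described above.

    vv_id: numeric volume ID of the base volume (zero-padded to 4 digits)
    when:  datetime to stamp; defaults to now
    """
    when = when or datetime.now()
    # MMDD_HHMMSS plus two digits of sub-second time for uniqueness
    # (the exact 'ms' width here is an assumption, not the original script)
    stamp = when.strftime("%m%d_%H%M%S") + "%02d" % (when.microsecond // 10000)
    return "VC-%04d-%s" % (vv_id, stamp)
```

Embedding the base volume's ID (rather than its name) keeps the snapshot name short and trivially traceable back to the parent.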

I am sure this is more than you wanted, and a lot to digest, but if you will have many volumes (we have over 650 base volumes right now and expect to top 1,000), having a naming convention that can be deciphered without looking anything up is vital.

A logical, consistent naming convention also makes exporting to Excel and manipulating the data easy, as well as scripting, since many operations will take wildcards. It also helps when doing Remote Copy, as we have similar conventions that tie volumes to VV sets.


Some examples:

v_prd_ds_intranet01 (vm datastore named intranet01 exported to production vmware cluster)
R_v_prd_ds_intranet01 (replicated target of same datastore exported to production vmware cluster)

v_db_ds_sqlsrv01 (vm datastore for sqlsrv01's vmdks exported to vm database cluster)
v_db_rdm_sqlsrv01.0 (vm rdm for sqlsrv01's tempdb exported to vm database cluster)
v_db_rdm_sqlsrv01.1 (vm rdm for sqlsrv01's data exported to vm database cluster)
v_db_rdm_sqlsrv01.2 (vm rdm for sqlsrv01's logs exported to vm database cluster)

vvs_v_db_sqlsrv01 (VV set containing all the volumes in the above example for sqlsrv01)
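The composition rule behind the examples above is mechanical enough to script; a sketch (illustrative only, with field values as defined earlier in the thread):

```python
def make_volume_name(platform, cluster, vtype, name, seq=None):
    """Compose x_cc_ttt_nnnnnnn.y per the convention described above.

    platform: 'v' (VMware) or 'p' (pSeries)
    cluster:  e.g. 'db', 'prd', 'dev', 'inf'
    vtype:    'vg', 'ds', or 'rdm'
    seq:      optional sequence/identifier, e.g. 0 or 'log'
    """
    vol = "_".join([platform, cluster, vtype, name])
    if seq is not None:
        vol += "." + str(seq)
    if len(vol) > 31:  # the 31-character volume name limit
        raise ValueError("volume name exceeds 31 characters: " + vol)
    return vol
```

Prefixing the result with R_ for replication targets, or SS-P- for presented snaps, then falls out naturally.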


 Post subject: Re: Volume conventions
PostPosted: Fri Oct 18, 2013 4:20 am 

Joined: Mon Oct 14, 2013 3:40 am
Posts: 17
Hi all,

There are some great responses here - thanks for taking the time to reply.

My understanding was that ESXi had a limit of 255 LUNs - how are you getting 600+ LUNs presented, then?


Cheers


 Post subject: Re: Volume conventions
PostPosted: Fri Oct 18, 2013 6:02 am 

Joined: Sun Jul 29, 2012 9:30 am
Posts: 576
I have multiple clusters. We have about 60 ESXi hosts split into roughly 6 clusters, so you can present 255 LUNs to any one cluster. We also have 140 AIX LPARs that reside on pSeries hardware; my AIX LUNs are about half of my volumes and growing. I have over 800 VMs, but they only occupy about 390 volumes - the rest is AIX. Of the 390 ESXi volumes, 225 are SQL and go to one cluster. That is my biggest concern, as we are quickly coming up on the 255 limit, but we have a plan for volume consolidation: since we are moving SQL to VMware SRM, we can collapse volumes that are vRDMs today into VMDKs on datastores, so a typical SQL server that uses 4 LUNs today will use 2 going forward, which should cut my LUN count down.

Our estimate, once we are done moving the remaining AIX systems to the 3PAR, is a total of around 900+ volumes. Add to that our plan to take daily snapshots on most volumes, retained for 2-4 weeks, and we will quickly have 30K+ volumes on our system. So naming conventions are extremely important to us.


 Post subject: Re: Volume conventions
PostPosted: Mon Oct 21, 2013 2:00 pm 
Site Admin
User avatar

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1328
Location: Dallas, Texas
Speaking of AIX, I like their NPIV implementation with VIO... I am not a fan of ESX's NPIV stack. The only thing that bugs me about IBM/VIO is the extra set of WWNs for partition mobility.

_________________
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.


 Post subject: Re: Volume conventions
PostPosted: Wed Oct 23, 2013 6:24 am 

Joined: Sun Jul 29, 2012 9:30 am
Posts: 576
I am not an AIX guy, but I do not like the extra WWNs either; it has stopped us from doing in-LPAR snapshots, since we would have to virtualize the HBAs and create a few hundred extra WWNs and zones. I am not an NPIV expert, but I am fine with the way ESX does theirs, since it has made it easy to use Commvault's SnapProtect in a VM.

