HPE Storage Users Group

A Storage Administrator Community




 Post subject: Optimal way to present VV's to ESX? / CPG selection
PostPosted: Mon Feb 17, 2014 2:36 pm 

Joined: Tue Feb 11, 2014 11:33 am
Posts: 35
We have a nice new 7400 2-node with three tiers and pretty standard CPGs: R5, R5, and R6 (SSD, FC, and NL).

The general idea is that we're going to let AO do its thing and move data around as it sees fit (it's what we paid for, right?).


As far as creating the VVs goes, though, do you just put all of them in a single CPG (the middle tier?) and then let AO move data around?


Once AO starts moving data, the VV doesn't really live in any single CPG anymore (or may not, more accurately).

For bulk data migration into the system from our old storage, should I just stick it all in FC (or nearline?) and let AO move data up as needed?

I've got about 10TB of FC and 174TB of NL.


My EqualLogic brain is still trying to wrap itself around this.

Thanks!

Spencer


 Post subject: Re: Optimal way to present VV's to ESX? / CPG selection
PostPosted: Tue Feb 18, 2014 2:15 pm 

Joined: Tue May 07, 2013 1:45 pm
Posts: 216
We designed our CPGs to land everything on FC and let AO move things around as it sees fit; however, your system doesn't seem to have enough FC to do that unless your existing array is quite small. Perhaps you'll want separate prod and non-prod CPGs, land prod on FC and non-prod on NL, and then only move the hottest blocks up? We also don't let non-prod VVs tier up to tier 0, except for a CPG we built specifically for database servers.
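
From memory, the policy split looks roughly like this on the 3PAR CLI. The CPG and policy names here are made up and the flags are from memory, so check them against your InForm OS release:

    # Prod: lands on FC, can tier up into SSD or down into NL
    createaocfg -t0cpg SSD_prod -t1cpg FC_prod -t2cpg NL_prod -mode Balanced AO_prod

    # Non-prod: lands on NL, capped at FC (no tier 0)
    createaocfg -t1cpg FC_nonprod -t2cpg NL_nonprod -mode Cost AO_nonprod

(The database CPG is the exception; it sits in its own three-tier policy so it can reach SSD.)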


 Post subject: Re: Optimal way to present VV's to ESX? / CPG selection
PostPosted: Wed Feb 19, 2014 10:19 pm 

Joined: Tue Feb 11, 2014 11:33 am
Posts: 35
You're right, I don't have enough FC. However, what if I over-provision the FC (export all LUNs from the FC CPG), move in only a few TB at a time, let AO redistribute the data for a day or two, and then import more?



If I create VVs in the NL CPG, then new writes to those will always land on NL first, right?

What about creating some NL VVs, bulk-loading the data, and then switching the VVs to the FC CPG without actually moving the data with DO?

That would allow AO to move data around, and it would also send new writes to those VVs into FC, right?
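
Something like the following is what I'm picturing; the VV/CPG names are just placeholders and the syntax is from memory, so correct me if I have it wrong. My understanding is that tunevv is itself the DO operation and will relocate the existing regions, which may defeat the point:

    # Bulk-load target: thin VV created in the NL CPG, exported to an ESX host
    createvv -tpvv NL_r6 migr_vv01 4T
    createvlun migr_vv01 10 esx-host01

    # After the bulk copy, re-home the VV so new writes land on FC
    # (if I understand it right, this is DO and moves the existing regions too)
    tunevv usr_cpg FC_r5 migr_vv01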


We may lock a few specific things to a specific tier with a dedicated CPG, but ideally I'd like everything to default to tier 1 and let the system figure it out from there.


 Post subject: Re: Optimal way to present VV's to ESX? / CPG selection
PostPosted: Mon Feb 24, 2014 6:38 pm 
Site Admin

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1328
Location: Dallas, Texas
Correct, all NEW writes will go into the CPG the VV is created in.

If you started everything in SATA and let AO promote data up to fill your SSD and FC, your read speeds would be great as long as you're reading data older than your AO cycles, but your writes, as well as reads of new data, would suffer until AO came around and ran again.
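
One way to keep that window short (a rough sketch; the policy name, schedule, and the relative -btsecs value are just examples, so verify against your CLI version) is to schedule AO nightly against the previous day's region statistics:

    # Run the AO policy every night at 01:00 using the last 24 hours of stats
    createsched "startao -btsecs -86400 AO_all" "0 1 * * *" ao_nightly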

_________________
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.


 Post subject: Re: Optimal way to present VV's to ESX? / CPG selection
PostPosted: Tue Feb 25, 2014 7:15 pm 

Joined: Tue Feb 11, 2014 11:33 am
Posts: 35
Thanks, Richard.

I've seen your AO config elsewhere on the board, but we're significantly smaller than you (112 disks total).


Here's what we ended up doing:

A majority of the data got loaded into nearline.
"Mid-level" performance things, such as SQL and Exchange got loaded into FC
The only thing we put into SSD was a small VDI LUN.

We set AO across all three tiers in Balanced mode and let it rip.

We're gonna let it run for a while and see how it performs. The FC alone can run circles around the SAS we had in our Equallogics.

We may end up changing AO to Performance; I want tier 0 and tier 1 as full as possible. We paid for it, right?

At the moment we only have 3 CPGs (SSD R5, FC R5 and NL R6). We don't have any applications we want to lock into a tier, so everything is just part of the AO policy.
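
If we do flip it, my understanding is that it's just a mode change on the existing AO configuration rather than a rebuild; something like the following, where AO_all stands in for whatever our policy is actually called (check the setaocfg flags on your release):

    # Switch the existing AO policy from Balanced to Performance
    setaocfg -mode Performance AO_all

    # Then watch how full the SSD and FC CPGs actually get
    showcpg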


 Post subject: Re: Optimal way to present VV's to ESX? / CPG selection
PostPosted: Thu Mar 06, 2014 3:11 pm 
Site Admin

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1328
Location: Dallas, Texas
Thanks for sharing.

Where do you stand on thin provisioning? I believe the best practice is thin on the 3PAR with zero_detect enabled on the VVs, and thick eager-zeroed in ESX. When eager zeroing occurs, the zeroes should be detected and the space reclaimed back to the 3PAR.
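
As a rough illustration of how the two sides fit together (the VV, VM, and datastore names are placeholders):

    # 3PAR side: thin VV with zero detect, so runs of zeroes written by ESX
    # are recognized and not allocated
    setvv -pol zero_detect vmfs_ds01

    # ESXi side: create the VMDK eager-zeroed thick; the zero fill at creation
    # is what the array's zero detect catches
    vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/datastore1/vm01/vm01.vmdk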

Since all your storage starts in the middle CPG and AO does the rest, do you bother with any VMware Storage DRS? I can imagine how DRS and AO might counteract each other on a bad day. I am just learning about VASA and how it works between 3PAR and ESX DRS, so I'm no expert yet, but I believe it's a key piece in ensuring DRS doesn't move a VMDK from one datastore to another datastore that shares the same CPG/storage tier on the back end.

_________________
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.


 Post subject: Re: Optimal way to present VV's to ESX? / CPG selection
PostPosted: Tue Mar 11, 2014 9:50 pm 

Joined: Tue Feb 11, 2014 11:33 am
Posts: 35
We're pretty indifferent about the zeroing method in VMware. We thick provision everything in VMware and thin everything on the back end.


We have SDRS enabled for both pools in VMware (one for FC and one for NL), but automation is disabled. What we do use SDRS for is placement and movement of VMs to accommodate new machines/disks.

With SDRS on, if I try to add a 1TB VMDK it might say, "Okay, stick it on NL-0, it has the space," or it might say, "Hey, we don't have a 1TB chunk available, but we can move these 5 VMs around and get you that 1TB."

At that point you just click Accept and let it do its thing.


After a lot of moving stuff around, I've run the zero reclaim on the VMFS volumes themselves (vmkfstools -y).
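
For reference, this is roughly how I've been running it from the ESXi shell (NL-0 is one of our NL datastores); I believe the esxcli form is the 5.5 replacement for -y:

    # Older style: run from inside the datastore; the argument is the
    # percentage of free space to process
    cd /vmfs/volumes/NL-0
    vmkfstools -y 60

    # ESXi 5.5 equivalent
    esxcli storage vmfs unmap -l NL-0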


Realistically, though, what I'd like to see is for VMware to understand how a virtualized SAN works: instead of having to manage individual LUNs and VMFS datastores, the storage could say, "Hey, here is 200TB of nearline and another 50TB of FC, do with it what you will," and we wouldn't have to worry about 2TB datastores and shuffling VMs around all the time.

ESXi 5.5 adds a lot of great features for bigger RDMs and VMFS volumes, but they are still fundamentally broken. For example, you can have VMDKs larger than 2TB, but you can't expand them online, which kind of defeats the whole purpose.

