HPE Storage Users Group

A Storage Administrator Community




 Post subject: Re: Mixed drive sizes in a CPG?
PostPosted: Wed Jun 19, 2019 2:21 pm 

Joined: Wed Jan 16, 2019 4:26 pm
Posts: 37
Location: Florida
Hi all,

I started this thread two months ago. We're creating new LUNs with VMFS6 datastores and migrating all the data off VMFS5, so this is a good time to finally address our mixed CPGs.

In the interim the 450GB drives reached 100% full. I've created a new CPG for the 1.2TB drives and another for the 900GB drives. New vvols were created in these and I've been migrating VMs into them.

As I remove the old vvols from the mixed CPG, the fill level of the 450GB drives has slowly backed down. They're now in the 75-85% full range. At some point I'd like to edit the old CPG to kick out the larger drives.

What happens if my math is wrong and, after I edit the CPG and run a tunesys, all the data won't fit within the CPG? On our other 3PAR we've simply gotten a warning about CPG growth violating its constraints (and the CPG grabbing space from other disks), so I would guess this unit will do the same thing. Is this behavior configurable?

I just want to make sure I don't bring down a bunch of VMFS datastores if something goes wrong. This SAN has about 40% of its raw capacity free, so I'm not worried about running out of space overall...


 Post subject: Re: Mixed drive sizes in a CPG?
PostPosted: Wed Jun 19, 2019 3:16 pm 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1571
Location: Europe
Tunesys is stupid. It takes a "chunk" (an LD or a chunklet, depending on some factors), reads it, writes it back as new writes, and then deletes the old "chunk". If you run out of space it can't write, so it won't delete.

As long as the CPG can grow (with capacity from the same or a lower tier) it will grow with degraded parameters (i.e., not following the CPG settings) and move on.

It should be fairly easy to do the maths and see whether there is space.
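
For illustration only, a back-of-envelope check along those lines might look like the sketch below. Every number in it (drive count, per-drive usable size, RAID 6 6+2 efficiency, sparing reserve, used space) is a hypothetical placeholder, not a value from this array -- substitute figures from your own showpd/showcpg output.

Code:
# Back-of-envelope check: will the data left in the mixed CPG fit on the
# 450GB drives alone once the larger drives are removed from the CPG?
# All numbers below are hypothetical placeholders -- substitute your own.

def usable_tib(drive_count, drive_gib, raid_efficiency, spare_reserve):
    """Rough usable capacity in TiB after RAID overhead and sparing."""
    raw_gib = drive_count * drive_gib
    return raw_gib * raid_efficiency * (1 - spare_reserve) / 1024

# Hypothetical example: 48 x 450GB drives (~419 GiB each), RAID 6 (6+2)
# -> 75% efficiency, ~10% held back for spare chunklets / growth headroom.
capacity_tib = usable_tib(drive_count=48, drive_gib=419,
                          raid_efficiency=6 / 8, spare_reserve=0.10)

used_tib = 11.5   # space currently consumed in the mixed CPG (made-up figure)

print(f"Estimated usable on 450GB tier: {capacity_tib:.1f} TiB")
print(f"Currently used in mixed CPG:    {used_tib:.1f} TiB")
print("Fits" if used_tib < capacity_tib else "Does not fit -- migrate more first")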

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


 Post subject: Re: Mixed drive sizes in a CPG?
PostPosted: Wed Jun 19, 2019 5:19 pm 

Joined: Wed Jan 16, 2019 4:26 pm
Posts: 37
Location: Florida
Thanks! One more question if I may... The final goal is to have three CPGs, one for each drive size. How should I handle snapshots? I've read other threads that discuss the virtues of having separate copy CPGs. Should I plan for six CPGs -- one data and one copy for each drive size?

Edit: I just had another idea... Dedicating a subset of the smallest drives to a RAID 1 copy CPG. This would provide a high performance "landing zone" for incoming writes (we have 1 or 2 daily snapshots of every volume present at all times) without incurring the usual RAID5/6 write penalty. Thoughts?
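
For context on the penalty being weighed here, a tiny illustrative calculation using the standard textbook backend-I/O costs per small random overwrite (2 for RAID 1, 4 for RAID 5, 6 for RAID 6). These are generic figures, not measurements from this array, and the overwrite rate is a made-up number; real 3PAR behaviour (full-stripe writes, cache coalescing) can reduce the cost considerably.

Code:
# Rough backend I/O cost of one small random overwrite (textbook figures).
# Illustrative only -- not measured on this array.
backend_ios_per_write = {
    "RAID 1": 2,  # write data to both mirrors
    "RAID 5": 4,  # read data + read parity + write data + write parity
    "RAID 6": 6,  # read data + read 2x parity + write data + write 2x parity
}

cow_copies_per_sec = 500  # hypothetical rate of first-overwrite snapshot copies
for level, cost in backend_ios_per_write.items():
    print(f"{level}: ~{cow_copies_per_sec * cost} backend IOPS")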


 Post subject: Re: Mixed drive sizes in a CPG?
PostPosted: Thu Jun 20, 2019 2:09 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1571
Location: Europe
richard612 wrote:
Thanks! One more question if I may... The final goal is to have three CPGs, one for each drive size. How should I handle snapshots? I've read other threads that discuss the virtues of having separate copy CPGs. Should I plan for six CPGs -- one data and one copy for each drive size?

Edit: I just had another idea... Dedicating a subset of the smallest drives to a RAID 1 copy CPG. This would provide a high performance "landing zone" for incoming writes (we have 1 or 2 daily snapshots of every volume present at all times) without incurring the usual RAID5/6 write penalty. Thoughts?


I would create one copy CPG for everything and put it on the same config as your biggest FC CPG.

RAID 1 seems like a waste... Remember that 3PAR does COW (copy-on-write): when you overwrite a block on a volume that has a snapshot, it first moves the old data into the snapshot and then writes the new block. If you overwrite the same block again, it just writes. So you would essentially be putting RAID 1 on something that will never be read and is only written once (per snapshot). And from a host performance perspective, the write ack is given as soon as the new write is in cache, so the host doesn't have to wait for the COW to complete. The only penalty is on the backend.
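
To make the "written once per snapshot" point concrete, here is a minimal, purely illustrative model of COW snapshot behaviour -- not 3PAR internals; chunklet granularity and metadata handling are simplified away:

Code:
# Minimal copy-on-write snapshot model (illustrative, not 3PAR code).
# The point: only the FIRST overwrite of a block after a snapshot pays the
# extra copy into snapshot (copy CPG) space; later overwrites do not.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block id -> data (user/data CPG)
        self.snapshot = None         # block id -> preserved data (copy CPG)
        self.cow_copies = 0          # how many blocks were copied out

    def take_snapshot(self):
        self.snapshot = {}           # empty: snapshot shares blocks until overwritten

    def write(self, block_id, data):
        if self.snapshot is not None and block_id not in self.snapshot:
            # First overwrite since the snapshot: preserve old data (the COW copy).
            self.snapshot[block_id] = self.blocks[block_id]
            self.cow_copies += 1
        self.blocks[block_id] = data  # later overwrites of the same block: plain write

vol = Volume({0: "a", 1: "b"})
vol.take_snapshot()
vol.write(0, "x")      # pays the COW copy
vol.write(0, "y")      # does not
print(vol.cow_copies)  # -> 1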

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.

