HPE Storage Users Group

A Storage Administrator Community




 Post subject: Disk upgrade
PostPosted: Wed Apr 18, 2018 11:19 pm 

Joined: Tue Jun 16, 2015 10:54 am
Posts: 28
We have recently upgraded our 3PAR with new 1.2TB disks, where the existing disks are 800GB. Now in our 3PAR we have both 800GB and 1.2TB drives under the same CPG. I can see data has started landing on the new disks because of tunesys. HPE did not create a new CPG when allocating the disks.

1. Is it a good approach to keep different-sized disks in the same CPG (all other specs, like device RPM, are the same)?

2. If the new disks need to be added to a new CPG, how can we do that? Do we need to remove those disks from the existing CPG and add them to a new CPG? Since this is a production environment, is there any risk of data loss in doing this?

3. We have the Dynamic Optimization feature available.


 Post subject: Re: Disk upgrade
PostPosted: Wed Apr 18, 2018 11:49 pm 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
I'm guessing the old drives are 900GB (which is no longer available).

According to best practice it is generally okay to mix different-sized disks in a CPG as long as the smallest drive is equal to or more than 50% of the capacity of the biggest drive. The only issue is that the IOPS per GB is lower on bigger drives, so if your performance utilization on the old drives was around 90%, you will get a problem down the line once the 1.2TB drives are more than 900GB full. I've not seen it, but I'm pretty sure that corner case exists on some array somewhere in the world :).
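If you want to sanity-check this on your own array, the standard CLI shows both the capacity mix and the per-drive load (the -devtype filter below assumes FC-class drives; adjust to match your system):

Code:
showpd -p -devtype FC    # drive sizes, types and states
showpd -c                # chunklet usage (used/free) per drive
statpd -iter 1           # one sample of per-PD IOPS and service times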

So if I were you I would just keep calm and keep only the existing CPG.

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


 Post subject: Re: Disk upgrade
PostPosted: Thu Apr 19, 2018 12:34 am 

Joined: Tue Jun 16, 2015 10:54 am
Posts: 28
Thanks for your reply.

As you mentioned, once utilization reaches 90% on the 900GB disks, we still have more than 250GB left on each 1.2TB disk.

At that point, how is it going to behave? Will the 900GB disks keep filling gradually to 100%, and only then will the remaining space on the 1.2TB disks start being used? I expect that scenario would create performance issues.

Do you still recommend putting it all in the same CPG?


 Post subject: Re: Disk upgrade
PostPosted: Thu Apr 19, 2018 3:14 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
I was giving an example of 90% performance utilization, while you seem to be talking about capacity utilization.

When you manage a storage array you have two drivers for doing an upgrade: you either need more performance or you need more capacity.

If your driver for upgrading was capacity then it will most likely not become an issue, and it will most definitely not be an issue in the near future. The corner case here is if you were reaching 75% performance utilization of your drives as you were reaching 100% of your capacity utilization.

If your driver for upgrading was performance, you should have bought smaller and/or faster drives, and keeping everything in one CPG might become an issue.

As an extreme simplification:
assume 10k SAS drives deliver 150 random IOPS each. That gives you an average of about 0.167 IOPS per GB for 900GB drives and 0.125 IOPS per GB for 1.2TB drives. So a 1.2TB drive is slower "per GB" than a 900GB drive with the same specs. Okay?

If the backend "IO density" of your system was 0.15 IOPS per GB, you would never have any problems with 900GB drives, and when there are no issues you don't go looking for them. However, if your backend "IO density" stays the same as your data grows, you will find that the 1.2TB drives don't deliver enough "performance per capacity" to meet your requirement. So when the system grows and your 900GB drives are full and your 1.2TB drives are at 900GB... everything is good. But with further growth your 1.2TB drives aren't able to keep up. First of all, the 1.2TB drives are the only drives with additional capacity, so all new writes will hit these drives. That creates an imbalance in the load on the drives, where the 1.2TB drives need to deliver more IOPS than your 900GB drives. In addition, they will have more data stored on them, which will attract more IOPS...
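To put rough numbers on that corner case, using the illustrative figures above (150 IOPS per drive, a workload density of 0.15 IOPS per GB):

\[ 0.15\ \tfrac{\text{IOPS}}{\text{GB}} \times 900\ \text{GB} = 135\ \text{IOPS} \le 150\ \text{IOPS available} \]
\[ 0.15\ \tfrac{\text{IOPS}}{\text{GB}} \times 1200\ \text{GB} = 180\ \text{IOPS} > 150\ \text{IOPS available} \]

So a full 900GB drive still keeps up, while a full 1.2TB drive would need about 20% more IOPS than it can deliver.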

So as long as the 900GB drives don't fill up (and you stay below 900GB of capacity usage on the 1.2TB drives) you will have the exact same "per drive" performance on both new and old drives. And as long as you weren't close to any backend performance issues prior to your upgrade, you shouldn't see any difference between the 900GB and 1.2TB drives.

And if you were seeing performance issues on the 900GB drives before the upgrade, then you can do whatever you want (multiple CPGs, DO, etc.) but it will not change the fact that you will run out of performance before you run out of capacity if the IO density stays the same for future growth. I would actually say that keeping everything in one CPG is a benefit, as you have all the performance available in the one CPG where you have all your volumes, rather than X amount of IOPS available for one set of VVs and Y amount of IOPS available for another set of VVs. The drawback is that when you reach the maximum performance of the drives you will impact all VVs at the same time, but it will take longer to get there compared to multiple CPGs.

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


 Post subject: Re: Disk upgrade
PostPosted: Thu Apr 26, 2018 2:25 pm 

Joined: Sat May 21, 2016 7:55 am
Posts: 5
Hi all,

I'm sorry to hijack this thread, but I'm interested in the same problem, except that in my case the smaller disks are well under 50% of the capacity of the bigger ones:

480GB MLC vs 1.92TB cMLC

Mammagutt, your explanation covers that case very thoroughly, but what approach would you suggest in a case where:

- a CPG already exists for the 480GB disks, with devtype == SSD and RPM == 100
- the new 1.92TB disks also have RPM == 100
- you want to keep the new disks segregated from the old ones at the CPG level

My first idea is to:

1) create a new CPG for the old disks, using a pattern (-devid == the SSD model)
2) tunevv all VVs to the new CPG
3) remove the old CPG
4) create a new CPG for the new disks, again using a -devid pattern, and start using it for new VVs (sketched below)
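
In CLI terms I imagine something like this (RAID levels, CPG names and the model strings are just placeholders):

Code:
createcpg -t r5 -p -devtype SSD -devid "<480GB-model>" CPG_SSD_480
tunevv usr_cpg CPG_SSD_480 -f <vvname>      # repeat for every existing VV
removecpg <old-CPG-name>
createcpg -t r6 -p -devtype SSD -devid "<1.92TB-model>" CPG_SSD_1920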

Is this approach correct?

Thanks in advance.

Regards,

Antonio


 Post subject: Re: Disk upgrade
PostPosted: Thu Apr 26, 2018 3:24 pm 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
I would use the tc_lt and tc_gt parameters instead, as a replacement drive might have a different devid, while the size generally will not.

For SSDs I might take a different approach. Depending on the number of 480GB and 1.92TB drives and the system/number of nodes, it might not really matter. With spinning media the backend (disks) will (almost) always be the limiting factor, while with SSDs the nodes will very quickly be the limiting factor.

If you have a non-technical reason for wanting to separate the disks, I would just change the existing CPG, create a new one, tune the volumes you want to move, and then run tunesys to clean up whatever chunklets have ended up in the wrong CPG.
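As a sketch of that (names are placeholders, and if I remember correctly the tc_* thresholds count 1GB chunklets, so roughly 447 for a 480GB drive and 1788 for a 1.92TB one):

Code:
setcpg -p -devtype SSD -tc_lt 500 <existing-CPG>     # restrict the existing CPG to the small drives
createcpg -t r6 -p -devtype SSD -tc_gt 1500 CPG_SSD_1920
tunevv usr_cpg CPG_SSD_1920 -f <vv-to-move>          # repeat per volume you want on the new drives
tunesys                                              # sweep misplaced chunklets into the right CPG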

Your way isn't wrong, but it means more manual tasks and is more time-consuming.

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


 Post subject: Re: Disk upgrade
PostPosted: Fri Apr 27, 2018 8:49 am 

Joined: Sat May 21, 2016 7:55 am
Posts: 5
>I would use the tc_lt and tc_gt parameters instead, as a replacement drive might have a different devid, while the size generally will not.
Good point and suggestion... I had totally overlooked the potential different-model issue with replacement parts!

>For SSDs I might take a different approach. Depending on the number of 480GB and 1.92TB drives and the system/number of nodes, it might not really matter. With spinning media the backend (disks) will (almost) always be the limiting factor, while with SSDs the nodes will very quickly be the limiting factor.

I get the point, and you are of course correct, considering that we have 2x 2-node/4-cage 8200s, each with these SSDs:
16 old 480GB MLC and 8 new 1.92TB cMLC

>If you have a non-technical reason for wanting to separate the disks, I would just change the existing CPG, create a new one, tune the volumes you want to move, and then run tunesys to clean up whatever chunklets have ended up in the wrong CPG.

Well, the only "technical reason" is the supposedly better write endurance of MLC vs cMLC:
we are slowly but steadily moving away from ppers+rcopy and refactoring all HA/DR into higher application stacks (a combination of SIOS DataKeeper, load balancers, SQL DAGs, etc.) to get more granular control and visibility and, last but not least, more streamlined servicing (and emergency) procedures/workflows.
My actual consideration was to use an R5 3+1 CPG on the MLC SSDs for "write intensive" but "ephemeral" objects like TempDBs, SIOS volumes or Windows log/journal volumes, and an R6 6+2 CPG on the new cMLC for data.
Since those volumes are write-oriented but need little total space, and because MLC is supposed to have more write endurance, I was thinking that a separate CPG would be a good idea.
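Something like this is what I had in mind (CPG names made up, tc_* thresholds approximate chunklet counts):

Code:
createcpg -t r5 -ssz 4 -p -devtype SSD -tc_lt 500  CPG_MLC_R5    # 3+1 on the 480GB MLC
createcpg -t r6 -ssz 8 -p -devtype SSD -tc_gt 1500 CPG_CMLC_R6   # 6+2 on the 1.92TB cMLC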
However, thinking about it a bit more, and considering that the MLC disks are two years old and that the new cMLC NAND/controllers have definitely better specifications, I can probably consider them equivalent and not worry about having two CPGs =)

Thanks again for your useful answers.

Regards,

Antonio


 Post subject: Re: Disk upgrade
PostPosted: Fri Apr 27, 2018 10:17 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
If I'm not mistaken, the SSD warranty includes wear-out, so unless you're planning to keep the array for more than 7 years, wear-out shouldn't even be a discussion :)

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


 Post subject: Re: Disk upgrade
PostPosted: Wed May 02, 2018 12:47 am 
Site Admin

Joined: Tue Aug 18, 2009 10:35 pm
Posts: 1328
Location: Dallas, Texas
Quote:
we have 2x 2-node/4-cage 8200s, each with these SSDs:
16 old 480GB MLC and 8 new 1.92TB cMLC


You have two separate 8200s, or a single 8200 shelf of disks (with 2 nodes) plus 4 cages?
When you say each one has 16 old and 8 new, does that mean you have 5 sets of 24 SSDs? One tray for the node shelf, plus 4 additional dumb shelves?

I am trying to visualize the number of SAS loops and how your shelves are attached, and more importantly how many of the new SSDs there are per SAS loop. If you have enough new SSDs per SAS loop that write performance will be bottlenecked by the loop and not the PDs, then it might be okay to leave them all in one CPG.
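On the array itself you can get that picture with something like this (standard CLI; on these boxes counting drives per cage is a fair proxy for drives per loop):

Code:
showcage                       # each cage and the node ports (SAS loops) it attaches to
showpd -p -devtype SSD -cg 0   # SSDs in cage 0; repeat per cage to count new vs old drives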

After rebalancing, once all your PDs grow to the point that the 480GB drives are full, your new writes will be concentrated on the new drives. On a 4-node system, 32 drives was the sweet spot where the SAS loops limited performance, and adding more SSD was for capacity only. Making an educated guess, 16 drives is enough to push a 2-node system at full speed. So if your baseline is writing to 120 SSDs (bottlenecked by the SAS controllers) and you reach a capacity point where you are only writing to 48 drives (also bottlenecked by the SAS controllers)... I think you are safe.

_________________
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.


 Post subject: Re: Disk upgrade
PostPosted: Wed May 02, 2018 10:49 am 

Joined: Sat May 21, 2016 7:55 am
Posts: 5
Hello Richard,

Thanks for the additional note, and I'm sorry for being too terse about the system setups...

actually there are 2 separate 8200s, each one configured as follows:

- 1 node shelf plus 3 additional cages
- 4x 480GB MLC disks plus 2x 1.92TB cMLC per cage (node shelf included), for a total of 16+8 = 24 SSDs
- 48x 1.2TB 10K FC disks

Regards,

Antonio

