HPE Storage Users Group

A Storage Administrator Community




 Post subject: Data Distribution Across PDs - PDs in Cage 1 Filling Up
PostPosted: Thu Jun 16, 2022 2:51 pm 

Joined: Mon Mar 21, 2022 4:24 pm
Posts: 6
Hello,
I’ve been asked to take over our 4-cage iSCSI 3PAR 7400c, which hasn’t been touched in many years. The cages hold a mix of two sizes of FC drives plus some SSDs. An incident occurred where the smaller FC drives (500GB, versus the 1TB drives) in Cage 1 filled up completely, causing an outage. We worked with HPE and were able to free up some space, and they also had us temporarily change the set size of the offending RAID 5 CPG until we could procure additional disks. While we still had support with HPE I was able to update the Service Processor and the OS; however, our support expired before the new disks arrived.

I was able to add the new disks, 8 in each of cages 2 and 3, and the system discovered them automatically, though I ran admitpd just to be sure. I then changed the set size back to 6 from 3, as HPE had advised. Since then I have run tunesys multiple times and can see data being distributed to the new disks, which is good. However, data does not seem to be moving off the smaller 500GB drives in Cage 1 no matter how many times I run tunesys, even with chunkpct and dskpct set to 2% and a nodepct of 10%. The drives will not get below 90% full.
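For reference, this is roughly what I ran. The flag spellings are from memory, so treat them as approximate for your 3PAR OS version, and the CPG name is just a placeholder:

Code:
    admitpd                          # admit the newly installed PDs
    setcpg -ssz 6 FC_r5_cpg          # put the CPG set size back to 6
    tunesys -chunkpct 2 -dskpct 2    # rebalance with 2% imbalance thresholds
    tunesys -nodepct 10              # retried with a 10% inter-node threshold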

My question is: how do I get data distributed off these almost-full disks, and how do I ensure that future CPG growth is spread evenly across all disks?

Any advice would be greatly appreciated; this user group has already helped me with both general understanding and a few specific issues I encountered. I’m new to administering a SAN, this system was dumped in my lap, and sadly the previous administrator refuses to give me any assistance. Given the state of the system when I was asked to take it over, I don’t know that I would care much about his opinion anyway, so there's that. :roll:

Thanks in advance to you 3PAR experts!


 Post subject: Re: Data Distribution Across PDs - PDs in Cage 1 Filling Up
PostPosted: Thu Jun 16, 2022 3:27 pm 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
What does your configuration look like (drives, cages, nodes)?

3PAR is pretty simple. If the hardware configuration is balanced, the data will be balanced. If it isn't, the data will grow unevenly, and depending on the configuration you may end up with a lot of capacity you can't use.
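If you want to check, the standard CLI views are enough. A minimal sketch (column layouts vary a little between 3PAR OS versions):

Code:
    showcage     # cages and the drives in each
    showpd -c    # chunklet usage per PD (normal/spare, used/free)
    showcpg      # CPGs and the space each has allocated

If the used-chunklet counts in showpd -c are roughly even across PDs of the same type and size, the data is balanced.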

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


 Post subject: Re: Data Distribution Across PDs - PDs in Cage 1 Filling Up
PostPosted: Thu Jun 16, 2022 5:05 pm 

Joined: Mon Mar 21, 2022 4:24 pm
Posts: 6
Hello, thanks for your response!

We have 4 cages and 4 controller nodes, with one RAID 5 FC CPG and one RAID 5 SSD CPG.

Cage 0 has a mix of 16x 500GB FC and 8x 400GB SSD drives; the average allocated percentage is ~66%.

Cage 1 has a mix of 16x 500GB FC and 8x 400GB SSD drives; the average allocated percentage is ~90%.

Cage 2 is all 1TB FC, including 8 newly installed drives; the average allocated percentage is ~40%.

Cage 3 is all 1TB FC, including 8 newly installed drives; the average allocated percentage is ~30%.

Cheers!


 Post subject: Re: Data Distribution Across PDs - PDs in Cage 1 Filling Up
PostPosted: Fri Jun 17, 2022 12:46 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
Yeah.... that will make a mess, assuming everything is in one CPG :) 500GB and 1TB FC drives don't actually exist, so I assume they are really 600GB and 1.2TB.

But let's stick with your numbers of 500GB and 1TB.

You have 8TB of FC capacity in each of cages 0 and 1 (16x 500GB).
You have 24TB of FC capacity in each of cages 2 and 3 (24x 1TB).

If you throw 20TB of back-end data at that configuration, it gets spread ~5TB per cage. That means in cages 0 and 1 each FC drive gets ~312.5GB, while in cages 2 and 3 each FC drive gets only ~208GB. That works out to 62.5% full on the 500GB drives versus 20.8% on the 1TB drives.

A perfect configuration of your system would have been 4x SSD, 8x 500GB and 12x 1TB drives per cage (your 16 SSDs, 32x 500GB and 48x 1TB spread evenly over the four cages). But that is probably a big hassle to sort out now :D

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.


 Post subject: Re: Data Distribution Across PDs - PDs in Cage 1 Filling Up
PostPosted: Fri Jun 17, 2022 10:30 am 

Joined: Mon Mar 21, 2022 4:24 pm
Posts: 6
Hi,

Thanks for the response. I suspected our drive placement was sub-optimal, and it sounds like tunesys will not balance the data across all disks the way I had hoped.

Is there a way to manually distribute data to underutilized disks? I’m just worried that the disks in Cage 1 will fill up again.

I appreciate your advice.

Thanks!


 Post subject: Re: Data Distribution Across PDs - PDs in Cage 1 Filling Up
PostPosted: Fri Jun 17, 2022 11:00 am 

Joined: Mon Sep 21, 2015 2:11 pm
Posts: 1570
Location: Europe
You can manually move chunklets to fix the current issue. But the 3PAR will keep growing as described above, so over time it will drift back to how it is now.
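Something along these lines would do it. This is only a sketch, the PD IDs are made up, and you should check the exact movech syntax in the CLI reference for your OS version:

Code:
    showpdch 12              # list the chunklets sitting on (full) PD 12
    movech -perm 12:0-40:0   # move the chunklet at PD 12 pos 0 to PD 40 pos 0

movepd can drain a whole PD at once, but in your case you only want to skim some chunklets off each of the full drives.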

The big question now is: does it really matter? You have 24 drives in each of cages 2 and 3. Assuming you are running 3.2.2 or older you do not have express layout, but each node still has 12 FC drives. So unless you have a set size higher than 12 you should be able to access all the capacity on the drives. However, the larger drives will have to deliver a higher number of IOPS once the smallest ones are full.
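You can confirm where you stand with something like this (a sketch; if I remember right, -sdg shows the LD growth settings, including RAID type and set size, on most versions):

Code:
    showversion     # 3PAR OS level
    showcpg -sdg    # per-CPG growth settings: RAID type, set size, availability

With a set size of 6, each RAID 5 set needs chunklets from 6 different PDs behind one node, and 12 FC drives per node leaves room for that.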

There are options (supported and unsupported) to move disks around in the system to balance it out. But with all cage slots filled, you've probably lost the easiest (unsupported) way of doing it.

_________________
The views and opinions expressed are my own and do not necessarily reflect those of my current or previous employers.

