dabravokid wrote:
Hi, a bit of background first: my team has inherited a badly managed 3PAR. It is a 4-node 7400 split between two DCs with three tiers: 8TB SSD, 150TB FC and 120TB NL. We have no documentation about the current setup and a full mix of AO and pinned volumes in all tiers.

We've found that AO on one of the arrays is almost filling the SSD and NL tiers; the NL tier is currently at 97% and increasing day by day. The previous admin kindly suggested that if NL filled to 100% the array would cease writing, and everything I've read suggests that if a tier runs out of space it will write to the next tier down at slower speed, so his comment makes sense given that NL is the lowest tier with nothing below it. I've tried moving some pinned data up to the FC tier, but AO then uses the freed space on NL again.

I've looked at the Maximum Space Usage setting in the AO config and considered switching one of the configs from Balanced to Performance, but I'm unsure what the best option is. If I set it to Performance and we have some pinned volumes in SSD, might this fill the SSD tier? If I set the Maximum Space setting in AO to less than the current space used, will the next run of AO then move data up to FC?
Thanks for taking the time to read.
Hi.
Nothing you're stating here tells me that this is a badly managed 3PAR. If this 3PAR has all volumes and all snapshots configured at the FC tier, I would actually go as far as saying it's a perfectly managed 3PAR.
A few facts.
If a tier runs out of capacity, the 3PAR will allocate capacity from a lower tier.
When you've run out of capacity on the lowest available tier, the CPG can't grow, so the VVs can't grow, and eventually the hosts will get "disk full" errors when writing new data, which will most likely cause them to crash. As their disks appear full, they could also have issues booting back up again.
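To keep an eye on how close you are to that point, the CLI will show you per-system and per-CPG usage. A quick sketch (the CPG name is just an example; double-check options with cli help on your OS level):

    showsys -space     # overall system capacity, allocated vs. free
    showcpg            # per-CPG usage, so you can see which tier is filling
    showvv -cpg NL_r6  # which VVs have space allocated from the NL CPG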
The usual configuration of a 3PAR with AO, at the time the 7400 was new and shiny, was to put all tiers in one AO config, set the User CPG in the AO config to an FC CPG, and set the Copy CPG to a non-AO FC CPG. In that configuration all new writes hit FC and you can safely allow AO to fill both SSD and NL to 100%.

If you have as little as one volume misconfigured to write directly to SSD (or even worse, NL), the above statement is void and it is no longer safe to allow AO to fill the SSD and NL tiers to 100%. In a clean AO system (not running thin or deduped volumes directly on SSD), you are always aiming to keep the SSD tier as close to 100% full as possible.
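If you do find volumes pinned directly to SSD or NL, you can move their user space back to an FC CPG online with tunevv. A rough sketch from memory (CPG and VV names are examples; verify the exact syntax with cli help tunevv):

    tunevv usr_cpg FC_r5 myvol01   # relocate the volume's user space to the FC CPG, non-disruptively
    showtask                       # tunevv runs as a background task; monitor it here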
You're not saying anything about the 3PAR OS version or what you're using to manage the system. With 3.2.2 (and later) HPE introduced the t0/t1/t2 min/max settings in the AO config, which allow administrators to configure both a minimum and a maximum amount per tier/CPG that AO will follow. Prior to 3.2.2 the only option was to use the "growth warning" parameter per CPG to cap the amount of data AO would put on that CPG/tier (basically the same as the t0/t1/t2 max setting). In SSMC all of these settings are available; in IMC you can only configure the growth warning.
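On the CLI that would look roughly like the below. Option names and unit suffixes are from memory, and the sizes are made-up examples, so check cli help setaocfg and cli help setcpg on your array before running anything:

    # 3.2.2 and later: cap how much AO may place in the NL tier (tier 2) of this AO config
    setaocfg -t2max 100t my_ao_cfg

    # pre-3.2.2: the CPG growth warning acts as AO's effective ceiling for that tier
    setcpg -sdgw 100t NL_r6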
I'm also taking a stab at this one, as this is an old system, probably in an old configuration, with little to no documentation. If you are using VMware, I'm pretty sure that you're running VMFS5. VMFS5 does not do automatic space reclamation, and I would very much assume that you have a lot of allocated capacity that AO has moved down to NL which doesn't really contain any data. VMware is "stupid" in that configuration: when you delete data from a VMFS datastore (deleting a snapshot, deleting a VM, or using Storage vMotion), VMware only deletes the pointer in its file system and does not clean up the actual data on the datastore for the 3PAR to reclaim. See this KB from VMware on the subject:
https://kb.vmware.com/s/article/2057513
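In short, on VMFS5 you have to trigger the reclaim manually from an ESXi host, per datastore. Something like this (the datastore name is an example; the -n block count per pass is optional and tunable):

    esxcli storage vmfs unmap -l MyDatastore -n 200   # issue SCSI UNMAP for dead space on the datastore

Once the UNMAPs have gone through, the freed pages should be returned to the CPGs, and you may also need a compactcpg afterwards to hand unused chunklets back from the CPGs to the system. That alone could buy you a lot of breathing room on the NL tier.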