Remote Copy between 3PARs with AO - a few questions
Production 3PAR has AO fully implemented and working (data distributed across SSD/FC/NL). Now setting up Remote Copy replication. From what I can tell, the AO distribution is not replicated to the DR 3PAR - everything gets dumped into the USR CPG defined for the remote VV. A couple of observations: first, I will quickly run out of Tier 1 (FC) disk space on the DR 3PAR. Second, even if I did have that much space in Tier 1, it would be stone cold upon fail-over and could take days to heat up. If I crank up AO on the DR 3PAR and run it using IO samples from the DR 3PAR, I am not sure that would reflect the IO patterns on the Production 3PAR - I would expect the IO in DR to reflect only the ongoing write activity from the Remote Copy syncs. Have any of you tried to do this, and if so, how did you implement it so that the tiered data was warm?
Ken
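For what it's worth, here is a quick way to watch it happen (the CPG names are hypothetical; this is the standard InForm CLI, verify against clihelp on your array):

    # Everything replicated lands in the FC USR CPG on the DR array; watch it grow
    showcpg CPG_FC_DR CPG_NL_DR
    showspace -cpg CPG_FC_DR    # estimated space left before the FC CPG fills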
Re: Remote Copy between 3PARs with AO - a few questions
We fight this issue all the time and have been requesting that the product be AO-aware for replication, such that data would land in the corresponding tier. What you can do is set up AO on the other side and just use the mode (Performance/Balanced/Cost) to skew where the blocks land.
Big warning here:
If you pick COST then most of the DR side will fall to the lowest tier, but be warned: if you run out of space in the lowest tier and the top tiers fill, the array will deny writes. If you only have a single AO policy you will probably be fine (just don't let the USR CPG be the lowest tier), but we run many and have to constantly fight how the data gets laid out.
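For what it's worth, a minimal sketch of that setup (the CPG and config names are hypothetical, and this assumes the AO config commands from InForm OS 3.1.2 or later - check clihelp createaocfg on your release):

    # Tier 0 = SSD, tier 1 = FC, tier 2 = NL; Cost mode skews blocks toward NL
    createaocfg -t0cpg CPG_SSD_DR -t1cpg CPG_FC_DR -t2cpg CPG_NL_DR -mode Cost aocfg_dr
    showaocfg aocfg_dr    # confirm the tiers and mode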
Re: Remote Copy between 3PARs with AO - a few questions
Seems to me your warning applies whether or not AO is running on the target 3PAR. Either way, since RC copies are happening all the time and AO is not, the USR CPG could fill up. HP does not think that running AO on the DR 3PAR will do much, since the only writes are from the RC. Has it been your experience that, with it set to COST, at least some of it gets shuffled out of FC to NL?
We could not be the only ones who have run into this issue. Anyone with tiers of storage who is using AO and trying to replicate with RC should be hitting this. Providing the remote 3PAR with the IO stats from the production 3PAR needed to generate the region moves does not seem like a huge feature to implement.
Re: Remote Copy between 3PARs with AO - a few questions
If you don't run AO on the target array, then as you fill the higher-tier CPG all new writes will go to NL. We still run AO to let the array balance the data out; it just does not have real IO patterns to make that determination. I do not know whether the actual writes from RC influence AO.
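If you go that route, something like this keeps AO running nightly on the target (the schedule name and times are hypothetical; verify syntax with clihelp createsched and clihelp startao):

    # Run AO at 01:30 every night against the last 24 hours of region IO stats
    createsched "startao -btsecs -86400 aocfg_dr" "30 1 * * *" ao_dr_nightly
    showsched    # confirm the scheduled task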
Re: Remote Copy between 3PARs with AO - a few questions
If the target VV's defined USR space is the FC CPG, how would it fill the NL tier when it ran out of space in FC? Why would it not just fill up the FC CPG and stall? I will run AO in DR as you suggest, in Cost mode, and see what happens. Thanks for the advice.
Ken
Re: Remote Copy between 3PARs with AO - a few questions
Because it is a safety net the 3PAR designers built into the product: they assumed NL would be plentiful and FC (and especially SSD) would be scarcer, and you want to do everything possible to avoid a denied-write condition.
Re: Remote Copy between 3PARs with AO - a few questions
If it were me, I would feel better if the DR-side policy was set to Cost mode, pushing everything down to NL and freeing up the FC for new replication... then, in the event of a failover, adjust the policy for the desired performance. Yeah, that would take a day or two to "warm up".
Richard Siemers
The views and opinions expressed are my own and do not necessarily reflect those of my employer.
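The failover-time flip could be as small as two commands, assuming the hypothetical aocfg_dr config sketched above (verify with clihelp setaocfg):

    # Bias AO back toward the upper tiers, then kick off a run immediately
    setaocfg -mode Performance aocfg_dr
    startao -btsecs -86400 aocfg_dr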
Re: Remote Copy between 3PARs with AO - a few questions
Here is the concern with Cost mode: you wind up with most of your data in NL, since there is little or no hot data. The issue you then have in a DR scenario is high latency as the array warms up, and then potentially days of delay while AO brings the hot blocks back to life. We feel this pain every DR test we do. Also, if you run Cost mode, make sure to set CPG warnings on your lowest tier so you never fill your NL drives - AO uses the CPG growth warning as a ceiling and will not move data into that CPG past that number.
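A minimal example of setting that warning (the CPG name and size are hypothetical; check clihelp setcpg for the units your release accepts):

    # Warn, and cap AO move-ins, once the NL CPG's user space passes 20 TiB
    setcpg -sdgw 20t CPG_NL_DR
    showcpg CPG_NL_DR    # review the CPG's warning setting and current usage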
Side note: if you set the warning to a number lower than the current CPG size, AO will NOT actively move data out to meet the new, lower threshold. The CPG may still shrink as AO naturally moves hot blocks back up, but it will not aggressively try to vacate the excess.
Also, keep in mind that if a higher tier fills up and there is no NL space, you can hit a deny-write situation. If you fill an FC tier and NL space is still available (regardless of whether it is in the AO configuration for the volume), the array will not deny the FC write, but will instead defer it to NL. We are still waiting on 3PAR to confirm what happens in an array that only has SSD and FC drives: if the SSD fills, will the write be deferred to FC if available?
For us, we set all our DR array AO configs to Performance to keep as much data up top as possible, even if some of it is un-accessed. With AO set to Cost on the DR side, our NL was running over 95% utilized and our FC under 40% utilized - again, very painful when testing, not to mention if we had a real DR. Now our FC runs at 90% utilized and NL at less than 85%.
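If you want to keep an eye on that balance, a quick per-tier check (CPG names hypothetical):

    # Compare consumed space across the DR tiers
    showcpg CPG_SSD_DR CPG_FC_DR CPG_NL_DR
    showspace -cpg CPG_NL_DR    # estimated space remaining in the NL CPG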