HPE Storage Users Group

A Storage Administrator Community




Post new topic Reply to topic  [ 12 posts ]  Go to page 1, 2  Next
 Post subject: AO, virtual volumes and user CPGs
PostPosted: Mon Feb 02, 2015 5:21 am 

Joined: Mon Oct 14, 2013 3:40 am
Posts: 17
Hi all,

We have AO configured to run across two CPGs; one for NL and one for FC. The AO rule is configured to use "Performance" mode, and runs once daily during the evening.

Historically, and based on advice from HP, when we have created a new VV we have set the "User CPG" field to the NL CPG. This allows AO to move things up from NL to FC as required at the end of the day. One concern here is that any new data written during the day, or any temporary data created and removed immediately, is subject to the slower performance characteristics of our NL storage.
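For context, this is the field we've been setting: on the CLI it's the user CPG given to createvv, and it can be re-pointed later with tunevv (syntax from memory; the CPG and VV names here are just examples of ours):

```text
# New thin VV landing on the NL CPG (our current practice):
createvv -tpvv CPG_NL_R6 vol_general01 500g

# Re-pointing an existing VV's user CPG to FC via Dynamic Optimization:
tunevv usr_cpg CPG_FC_R5 vol_general01
```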

So, my question is this:

1. Should we be selecting the FC CPG as the "User CPG", effectively swapping the AO operations to moving things down to NL rather than up to FC?

2. Would this mean that we would then need to use Cost mode instead of Performance mode because of the reversed direction? I'm assuming the answer is no, as the documentation suggests Cost mode is a little more brutal about how much it moves down, since it's normally used for non-critical systems.

3. If we are swapping the AO direction (FC -> NL, instead of NL -> FC), is it fair to assume that the overall disk consumption across CPGs would be the same, except with the added benefit of new writes going to FC?


Cheers.


 Post subject: Re: AO, virtual volumes and user CPGs
PostPosted: Mon Feb 02, 2015 6:36 am 

Joined: Sun Jul 29, 2012 9:30 am
Posts: 576
First, unless the apps are extremely low IOPS I would never write to NL first. You do not mention what RAID sets you are using for these CPGs; I will assume RAID 5 for the FC and RAID 5 or 6 for the NL. You also do not mention the amount of disk you have in each, or what kind of apps/usage you expect. I will tell you that AO needs some work: every conversation I have had with HP engineers and the product team says that the modes of AO (Performance, Balanced and Cost) only throttle how aggressively AO moves data down, not up. AO by nature wants to push data down to the lowest-cost tier regardless of how much faster storage you have. We never use Cost mode any more because we find we can't keep enough data in the faster tiers.

What HP is missing here is that by pushing so much data down you crush NL drive performance even when you have large amounts of unused faster tiers. My two faster tiers run 50-60% utilized even with Performance mode, and my NL drives get crushed with IOs. This is one reason why, when we bought another pair of arrays, we skipped NL drives and just did SSD and 10K: we don't like the way AO moves data.

Some of this you will need to work out by trial and error, but I would err on the side of landing new writes on FC, leave the mode set to Performance, and go from there. The good news with 3PAR is that even if your FC fills up, as long as there is NL available it will still accept writes and just send them to NL. That's less than ideal, but it means you can gauge your array as you go without taking a penalty on the NLs.


 Post subject: Re: AO, virtual volumes and user CPGs
PostPosted: Mon Feb 02, 2015 8:13 am 

Joined: Mon Oct 14, 2013 3:40 am
Posts: 17
Thanks hdtvguy, good response. The AO volumes' use is fairly general; we tend to lock heavy database/Exchange volumes to FC, and use the AO CPGs for RDS, basic database and file. Drive configuration is as follows:

FC - 48 drives - RAID5 3+1
NL - 24 drives - RAID6 4+2
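For reference, roughly how those two CPGs would be defined on the CLI (flags from memory; the names are just ours):

```text
# FC tier, RAID 5 3+1 (set size 4), restricted to FC drives:
createcpg -t r5 -ssz 4 -p -devtype FC CPG_FC_R5

# NL tier, RAID 6 4+2 (set size 6), restricted to NL drives:
createcpg -t r6 -ssz 6 -p -devtype NL CPG_NL_R6
```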

What you're describing is more or less what I'm experiencing. The NL drives get flogged constantly, which is an issue given there's so much capacity still available. I need to learn to get some better info out of System Reporter so I can understand which VVs need tuning, but I thought I would start with the user CPG, as I'm fairly sure all new writes landing on NL is part of the problem.

Cheers.


 Post subject: Re: AO, virtual volumes and user CPGs
PostPosted: Mon Feb 02, 2015 3:51 pm 

Joined: Sun Jul 29, 2012 9:30 am
Posts: 576
NL RAID 6 will get crushed. Another thing I argue for is to let the array do what it can on its own. For our DBs we let them participate in AO, but note we only have SSD and 10K drives in that array. Depending on the app, even databases tend to have a lot of stale data in them that will sink to lower tiers.


 Post subject: Re: AO, virtual volumes and user CPGs
PostPosted: Tue Feb 03, 2015 4:01 am 

Joined: Tue Feb 03, 2015 3:35 am
Posts: 18
Hello all,

Sorry to hijack, but rather than create another AO thread I thought I'd post here! We need an 'AO thread' for all AO talk!

We will shortly be using AO, and it's great that so many here have real-world experience of what it does with the data. We are currently deciding how to configure our setup, and whether to use two or three AO configs to give us more control over where data will end up. Our idea is to use allocation warnings to allow us to DO a VV if we know a system requires best performance for a period of time. So, for example, we want 25% free space in SSD. What I am unsure about is this: if we have two AO configs that both have CPGs using the SSD tier, do we set both allocation warnings to 75%, or do we split the allocation between the two? Are they 'aware' of each other, so to speak?

I'm reading through the guides again to see what I can find.

Thanks
R


 Post subject: Re: AO, virtual volumes and user CPGs
PostPosted: Tue Feb 03, 2015 6:57 am 

Joined: Sun Jul 29, 2012 9:30 am
Posts: 576
rickyy7

Multiple AO configs are tricky; you will want to be careful with how you set them up, and use growth warnings to govern how much AO puts in a given tier. The problem with growth warnings is that you are basically capping AO for a given config even if there is plenty of space in that physical tier. With 3.2.1 you are allowed to run AO configs against VV sets, and I have had talks with 3PAR to understand how all these competing AO configs are prioritized. My biggest complaint with 3PAR is they often do not think through how these features will be used in the real world and how many problems they can introduce.
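To make the growth-warning point concrete: as far as I understand it, AO treats a CPG's growth warning as that config's cap within the tier, and the configs do not coordinate with each other, so you have to split the budget yourself. Something like this (sizes and CPG names made up for illustration):

```text
# SSD tier 8T usable; want 25% headroom => 6T total budget for AO,
# split manually between the two configs' SSD CPGs:
setcpg -sdgw 3T CPG_SSD_AO1
setcpg -sdgw 3T CPG_SSD_AO2
```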

We ran 3 different AO configs on our V400, and while it works great when you have plenty of free space, it becomes very problematic if your array gets space constrained. Also, if you look at some of my other posts about AO, AO needs some improvement, as it will try and push data down even in Performance mode; thus you can potentially wind up with very constrained lower-tier disk space and free faster tiers. One thing you never want to do is run out of your lowest tier of disk. Even if you fill your fastest tier the array will always try to write to a lower tier, but it cannot do so if the lower tier is full.

On a scale of 1-10 I give 3PAR AO a 6, maybe a 7: it is decent, but has a long way to go. Also, with multiple AO configs you need to be careful about all the AO schedules and the amount of overhead and time it will take for all those schedules to get any work done, especially if they are competing for the same physical tiers. And remember, it is easy to crush NL drive performance.

In our experience, if you have plenty of free space in all your tiers, multiple AO configs are OK; if you have any space limitations, then managing multiple AO configs requires constant supervision. We have taken all our arrays to basically single AO configs and, as we grew, purchased new arrays for specific workloads. We have one array for general-purpose use with 3 tiers of disk (FC, 10K and NL) and a single AO config. We then purchased another array for our IO-intense workload, and it only has SSD and 10K with a single AO config. I now spend less time babysitting AO.


 Post subject: Re: AO, virtual volumes and user CPGs
PostPosted: Tue Feb 03, 2015 8:43 am 

Joined: Mon Aug 20, 2012 1:54 pm
Posts: 33
Location: Atlanta, GA
My experience with AO has been very positive, though I do run multiple AO configs. When I first implemented it I used a single config, but found that AO'ing a 400TB frame would take days to complete, which made me very nervous. I basically have 6-8 configs (depending on the frame), and split them so they run on alternate days.
I also found that on days where configs run together, they should be scheduled to start at the same time, because then the system seems to do a better job of treating the multiple configs as a single job. This was more of an issue in 3.1.1, but I still do it today with good results.
I also found out (the hard way) that going into AO, it should be an all-or-nothing thing with respect to the NL tier. In other words, make sure AO has full control of all your NL. Don't have some VVs that are not part of an AO config also writing to NL, because you'll kill your performance. I would extend that to say AO should have access to all your tiers. The benefit then is you only really have to manage space on your T1 tier (assuming all your usrCPGs point to that).
The only other thing is timing. I have found AO does a good job of not impacting performance on other loads, and as our frames are all 24/7/365 I don't have good down windows in which to run AO at a time when users aren't also on the system, but that hasn't been a problem for us.
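As a sketch, that alternate-day, same-start-time arrangement looks something like this (createsched/startao syntax from memory; the config and schedule names are examples):

```text
# Odd-day group and even-day group, both kicking off at 22:00:
createsched "startao -btsecs -86400 ao_cfg_a" "0 22 * * 1,3,5" ao_a_sched
createsched "startao -btsecs -86400 ao_cfg_b" "0 22 * * 2,4,6" ao_b_sched
```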


 Post subject: Re: AO, virtual volumes and user CPGs
PostPosted: Tue Feb 03, 2015 9:45 am 

Joined: Tue Feb 03, 2015 3:35 am
Posts: 18
Thanks for the responses. It's really good to get some discussion going on this subject area. You are certainly right about 3PAR not considering 'real-world' use when it comes to AO.

hdtvguy wrote:
rickyy7

Multiple AO configs are tricky; you will want to be careful with how you set them up, and use growth warnings to govern how much AO puts in a given tier. The problem with growth warnings is that you are basically capping AO for a given config even if there is plenty of space in that physical tier. With 3.2.1 you are allowed to run AO configs against VV sets, and I have had talks with 3PAR to understand how all these competing AO configs are prioritized. My biggest complaint with 3PAR is they often do not think through how these features will be used in the real world and how many problems they can introduce.

We ran 3 different AO configs on our V400, and while it works great when you have plenty of free space, it becomes very problematic if your array gets space constrained. Also, if you look at some of my other posts about AO, AO needs some improvement, as it will try and push data down even in Performance mode; thus you can potentially wind up with very constrained lower-tier disk space and free faster tiers. One thing you never want to do is run out of your lowest tier of disk. Even if you fill your fastest tier the array will always try to write to a lower tier, but it cannot do so if the lower tier is full.

On a scale of 1-10 I give 3PAR AO a 6, maybe a 7: it is decent, but has a long way to go. Also, with multiple AO configs you need to be careful about all the AO schedules and the amount of overhead and time it will take for all those schedules to get any work done, especially if they are competing for the same physical tiers. And remember, it is easy to crush NL drive performance.

In our experience, if you have plenty of free space in all your tiers, multiple AO configs are OK; if you have any space limitations, then managing multiple AO configs requires constant supervision. We have taken all our arrays to basically single AO configs and, as we grew, purchased new arrays for specific workloads. We have one array for general-purpose use with 3 tiers of disk (FC, 10K and NL) and a single AO config. We then purchased another array for our IO-intense workload, and it only has SSD and 10K with a single AO config. I now spend less time babysitting AO.


Thanks for taking the time to post. We are fortunate to have plenty of space to play with, so we may get away with multiple AO configs. The thinking was to have 3 AO groups: AO1 across SSD/FC to ensure performance; AO2 spanning all three tiers, SSD/FC/NL; and AO3 spanning just FC/NL. We would have CPGs for each tier for non-AO VVs, and to allow DO if required.
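Roughly what I have in mind, if I've got the createaocfg syntax right (CPG names are placeholders, and the modes are just illustrative):

```text
# AO1: SSD/FC only, for systems that must stay fast:
createaocfg -t0cpg CPG_SSD_AO1 -t1cpg CPG_FC_AO1 -mode perf AO1

# AO2: all three tiers:
createaocfg -t0cpg CPG_SSD_AO2 -t1cpg CPG_FC_AO2 -t2cpg CPG_NL_AO2 -mode perf AO2

# AO3: FC/NL only:
createaocfg -t1cpg CPG_FC_AO3 -t2cpg CPG_NL_AO3 -mode perf AO3
```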

The scheduling and data allocation for multiple AOs is where I seem to find a lack of information.

zQUEz wrote:

My experience with AO has been very positive, though I do run multiple AO configs. When I first implemented it I used a single config, but found that AO'ing a 400TB frame would take days to complete, which made me very nervous. I basically have 6-8 configs (depending on the frame), and split them so they run on alternate days.
I also found that on days where configs run together, they should be scheduled to start at the same time, because then the system seems to do a better job of treating the multiple configs as a single job. This was more of an issue in 3.1.1, but I still do it today with good results.
I also found out (the hard way) that going into AO, it should be an all-or-nothing thing with respect to the NL tier. In other words, make sure AO has full control of all your NL. Don't have some VVs that are not part of an AO config also writing to NL, because you'll kill your performance. I would extend that to say AO should have access to all your tiers. The benefit then is you only really have to manage space on your T1 tier (assuming all your usrCPGs point to that).
The only other thing is timing. I have found AO does a good job of not impacting performance on other loads, and as our frames are all 24/7/365 I don't have good down windows in which to run AO at a time when users aren't also on the system, but that hasn't been a problem for us.



Interesting about your findings with scheduling; my thinking was to stagger them. Thanks for sharing.


 Post subject: Re: AO, virtual volumes and user CPGs
PostPosted: Wed Feb 04, 2015 4:45 am 

Joined: Mon Oct 14, 2013 3:40 am
Posts: 17
zQUEz wrote:
I also found out (the hard way) that going into AO, it should be an all or nothing thing with respect to NL tier. In other words, make sure AO has full control of all your NL. Don't have some VV's that are not part of an AO config also writing to NL because you kill your performance. I would extend that to say AO should have access to all your tiers. The benefit then is you only really have to manage space on your T1 tier (assuming all your usrCPGs point to that).


That's an interesting point. It was my intention to do exactly this; however, during installation the HP tech advised it was a bad idea, and to hard-lock things to NL wherever possible. Given that we only have two tiers (NL and 15K FC), would you still advise moving all the hard-locked NL stuff to AO?

Perhaps I can use AO across the board: userCPG set to FC for critical things, and userCPG set to NL for non-critical things, with a single AO config in Performance mode. Thoughts?
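In other words, something like this, if I have the syntax right (names are placeholders):

```text
# Single two-tier AO config in Performance mode:
createaocfg -t1cpg CPG_FC_R5 -t2cpg CPG_NL_R6 -mode perf AO_MAIN

# Critical VVs land on FC, non-critical on NL; AO then moves regions between them:
createvv -tpvv CPG_FC_R5 vol_critical01 500g
createvv -tpvv CPG_NL_R6 vol_archive01 2t
```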


Good advice guys - thanks a lot.


 Post subject: Re: AO, virtual volumes and user CPGs
PostPosted: Wed Feb 04, 2015 7:05 am 

Joined: Mon Aug 20, 2012 1:54 pm
Posts: 33
Location: Atlanta, GA
I would recommend all VVs have their usrCPGs point to an FC tier. The only CPGs pointing to an NL tier should be those used by AO.







Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group | DVGFX2 by: Matt