HPE Storage Users Group
https://3parug.com/

ESX Cluster zoning
https://3parug.com/viewtopic.php?f=18&t=222

Author:  tiger_woods [ Thu Nov 08, 2012 7:42 pm ]
Post subject:  ESX Cluster zoning

Hi,

I have at least 4 ESX hosts (dual HBA) as part of a cluster. What's the best practice to zone these with the 3PAR to have maximum redundancy and optimal performance?

I connected my 4-node V400 (4 host ports per node) to the switches as follows:

Node0, port1 - Switch A
Node0, port2 - Switch B
Node0, port3 - Switch A
Node0, port4 - Switch B

The same cabling is repeated on the other 3 nodes.

I also need some guidance for the standalone hosts :). Thanks.

Mel

Author:  Christian [ Fri Nov 09, 2012 3:52 am ]
Post subject:  Re: ESX Cluster zoning

I would just give each host 1 port on each node and make sure that you have 1 port in Switch A and 1 port in Switch B per node pair.

Also, a tip for when you export the volumes: create a volume set and a host set.
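
Rough sketch of what the set approach looks like in the 3PAR CLI, in case it helps; the host, set, and datastore names below are made up, so check the exact createvlun syntax on your InForm OS release:

    # group the ESX cluster members into a host set (names are hypothetical)
    createhostset ESX-CLUSTER1 esx01 esx02 esx03 esx04
    # group that cluster's datastores into a volume set
    createvvset ESX-CLUSTER1-VVS datastore01 datastore02
    # export the whole volume set to the whole host set in one step;
    # volumes or hosts added to the sets later pick up the exports automatically
    createvlun set:ESX-CLUSTER1-VVS 1 set:ESX-CLUSTER1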

Author:  tiger_woods [ Fri Nov 09, 2012 3:51 pm ]
Post subject:  Re: ESX Cluster zoning

Thanks for your reply, Christian.

Author:  Richard Siemers [ Fri Nov 16, 2012 1:23 am ]
Post subject:  Re: ESX Cluster zoning

I strongly recommend one path to each node, with a 1:2 fan-out ratio:

Host port 1 to SANSWITCH-A to Node0,Node1
Host port 2 to SANSWITCH-B to Node2,Node3

When I create the aliases for a new InServ, I divvy them up into "sets" and rotate hosts around the sets, which makes it easier to keep all the front-end ports balanced. Example:

Node0, port1 - Switch A Alias: V400_0-3-1_SET1
Node0, port2 - Switch B Alias: V400_0-3-2_SET2
Node0, port3 - Switch A Alias: V400_0-3-3_SET3
Node0, port4 - Switch B Alias: V400_0-3-4_SET4

Node1, port1 - Switch A Alias: V400_1-3-1_SET2
Node1, port2 - Switch B Alias: V400_1-3-2_SET1
Node1, port3 - Switch A Alias: V400_1-3-3_SET4
Node1, port4 - Switch B Alias: V400_1-3-4_SET3

Node2, port1 - Switch A Alias: V400_2-3-1_SET3
Node2, port2 - Switch B Alias: V400_2-3-2_SET4
Node2, port3 - Switch A Alias: V400_2-3-3_SET1
Node2, port4 - Switch B Alias: V400_2-3-4_SET2

Node3, port1 - Switch A Alias: V400_3-3-1_SET4
Node3, port2 - Switch B Alias: V400_3-3-2_SET3
Node3, port3 - Switch A Alias: V400_3-3-3_SET2
Node3, port4 - Switch B Alias: V400_3-3-4_SET1

So Host1 would get zoned to SET1, Host2 to SET2, and so on.
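
On a Brocade fabric, zoning Host1 to SET1 works out to something like this; the host aliases, WWPNs, and zone/config names are made up for the example, and each half is run on the switch in that fabric:

    # Fabric A switch: alias for Host1's first HBA port (WWPN is made up)
    alicreate "HOST1_p1", "10:00:00:00:c9:aa:bb:01"
    # zone it to the two SET1 storage ports that sit in fabric A
    zonecreate "z_HOST1_p1_SET1", "HOST1_p1; V400_0-3-1_SET1; V400_2-3-3_SET1"
    cfgadd "PROD_CFG_A", "z_HOST1_p1_SET1"
    cfgsave
    cfgenable "PROD_CFG_A"

    # Fabric B switch: Host1's second HBA port to the SET1 ports in fabric B
    alicreate "HOST1_p2", "10:00:00:00:c9:aa:bb:02"
    zonecreate "z_HOST1_p2_SET1", "HOST1_p2; V400_1-3-2_SET1; V400_3-3-4_SET1"
    cfgadd "PROD_CFG_B", "z_HOST1_p2_SET1"
    cfgsave
    cfgenable "PROD_CFG_B"

That gives each host port a 1:2 fan-out, and the host as a whole one path to every node.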

The benefit of this is that the SAN-A or SAN-B fabric can go down and your hosts will still be making balanced use of all 4 nodes. If a node goes offline for maintenance or an unplanned outage, it's a 25% hit to all of your hosts, not a 50% hit to some of them.

Assuming you're using 8 Gbit ports on the storage and 8 Gbit ports on the host, this also waters down the impact a single host can have on your system's front-end ports. With 1:1 zoning of host port to InServ port, a poorly written app could blast 8 Gbit of data and saturate a storage port that also runs at 8 Gbit, causing high wait times for the other hosts that share that port. With 1:2 zoning, each 8 Gbit host HBA gets round-robined across multiple 8 Gbit InServ storage HBAs, so the monster query that pushes the host HBAs to 100% only brings the storage ports in the set to 50%, leaving breathing room for the other systems that share those ports.

Author:  Architect [ Thu Dec 06, 2012 2:33 pm ]
Post subject:  Re: ESX Cluster zoning

We do the same with our 3PARs.
@Richard, do you also put all the even ports of each 3PAR host HBA in the 2nd fabric and the odd ports in the first fabric (thus having all four nodes in both fabrics)?

With the (VMware) clusters, though, we decided to keep each cluster on the same 2 sets. We do this because when we need to troubleshoot (e.g. performance) issues with VMs on the cluster, it would be quite hard to work out which port is causing the problem, since VMs tend to hop from server to server based on cluster load balancing. Eventually, by spreading all the clusters and systems over the ports, we get a decent mix of systems across the ports anyway.

So my only added advice would be to keep clusters together on the sets. I don't believe a normal VMware cluster would generate enough throughput to saturate four 8 Gbit ports anyway.
And if it did, just do a 1:4 fan-out and give each host port access to all 4 nodes (see the sketch below).
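
A minimal sketch of that 1:4 variant on the fabric A switch, reusing Richard's port aliases (the host alias and zone/config names are made up; fabric B is the mirror image with the even-numbered storage ports):

    # one host port zoned to one storage port on every node (1:4 fan-out)
    zonecreate "z_HOST1_p1_ALLNODES", "HOST1_p1; V400_0-3-1_SET1; V400_1-3-1_SET2; V400_2-3-1_SET3; V400_3-3-1_SET4"
    cfgadd "PROD_CFG_A", "z_HOST1_p1_ALLNODES"
    cfgsave
    cfgenable "PROD_CFG_A"

With dual HBAs that's 8 paths per LUN; just keep an eye on the total number of paths per host.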
