I’ve often seen users asking in forums how to properly configure a network to deploy an iSCSI storage system, especially in very small environments.
So, I created this post to explain how I usually configure a system with an HP StorageWorks P2000 G3 iSCSI storage array and a small cluster of 3 vSphere servers. If you are using a different storage array, you will probably have to change some settings, but the principles are always the same.
3 VMware vSphere 5.x servers (usually with an Essentials Plus VMware license). In these situations, I usually dedicate two network cards to iSCSI. That’s because many modern servers already have 4 onboard NICs, so I get two dedicated connections for iSCSI and the other two for all other connection types, like management, vMotion and virtual machines, without having to buy additional network cards.
2 Gigabit Ethernet switches with at least 7 ports each. If you need to add the iSCSI network to an existing infrastructure and you want to keep it super simple, avoid VLANs and have total physical separation, you will need at least 14 network ports. This is also useful if your low-end switches have problems managing VLANs, or if you are not an expert with them; the iSCSI network will be separated and not connected to anything else. You will connect 8 ports from the storage array and 2 from each server. The total is 14, distributed on two switches for redundancy. If instead the switches are going to manage iSCSI and other networks at the same time (by using VLANs), then you will need more ports in order to manage all the vSphere networks, plus uplinks to the other parts of the network.
1 HP P2000 iSCSI storage array, with two storage processors having 4 iSCSI connections each, as in this picture:
The goal of an iSCSI storage network, as in any storage fabric, is to offer redundant connections from source to destination (the ESXi servers and the P2000 in this scenario), so that no failure in any element of the chain can stop the communication between them. This goal can be reached by having two completely separated networks, each managed by a different switch.
A completely redundant iSCSI network has several IP addresses; that’s because each path is made of a source-destination IP combination. To simplify the configuration of all the IP addresses, I usually follow this scheme:
All IP addresses of Network 1, marked in green, use a base address like 10.0.10.xxx, where the 10 in the third octet identifies Network 1. By the same scheme, Network 2 has the base address 10.0.20.xxx. All the ESXi port groups follow this numbering scheme too. This way it’s easy to assign addresses and be sure each component sits in the correct network with the right IP address.
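As a small illustration of this scheme, here is a sketch in Python; the host numbers used for the example addresses are hypothetical, chosen only to show the pattern:

```python
# Sketch of the addressing scheme described above: the third octet
# identifies the network (Network 1 -> 10, Network 2 -> 20).
def iscsi_ip(network: int, host: int) -> str:
    """Return the iSCSI address for a given network (1 or 2) and host number."""
    return f"10.0.{network * 10}.{host}"

# One vmkernel port per network on the first ESXi server (host 11 is
# a hypothetical choice):
esxi1_net1 = iscsi_ip(1, 11)  # -> "10.0.10.11"
esxi1_net2 = iscsi_ip(2, 11)  # -> "10.0.20.11"
```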
Inside each ESXi server, you first of all need to enable the software iSCSI initiator; then you create two vmkernel ports and bind each of them to only one of the two network cards, one per vmkernel port:
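The same steps can be sketched from the command line with esxcli; the adapter name vmhba33 and the vmkernel port names vmk1/vmk2 are assumptions, so check the actual names on your host:

```shell
# Enable the software iSCSI initiator.
esxcli iscsi software set --enabled=true

# Bind each vmkernel port (one per physical NIC) to the software adapter.
# vmhba33, vmk1 and vmk2 are placeholders for your actual names.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```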
Then you enter into Dynamic Discovery all the IP addresses of the iSCSI storage:
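With esxcli this is a Send Targets entry per storage port; the addresses below follow the 10.0.10.x / 10.0.20.x scheme above and are examples only, as is the adapter name:

```shell
# Add each storage port IP as a dynamic discovery (Send Targets) address.
# Repeat for every iSCSI port of both storage processors.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.10.101:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.20.101:3260
```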
After a rescan, you configure the Path Selection Policy of all datastores to be Round Robin, and the final result is going to be like this:
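The path policy can also be set per device with esxcli; the naa identifier below is a placeholder, so list your devices first to find the real one:

```shell
# List the devices and their current path selection policy.
esxcli storage nmp device list

# Set Round Robin on a datastore device (naa ID is a placeholder).
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
```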
32 thoughts on “Howto configure a small redundant iSCSI infrastructure for VMware”
The VMware Essentials Plus package offers an excellent price/value ratio, with an easy to configure and manage infrastructure.
In addition, since vSphere 5.0 the iSCSI configuration UI makes the work even easier. Nice and clean :-).
Possibly we could squeeze a bit more throughput by configuring Jumbo Frames (if supported end to end), but the performance gain isn’t that obvious on 1GbE.
Nice write up Luca.
Your subnet configuration differs from the HP best practices guide as shown below. Port 0 & 2 of controller A & B must be in the same subnet and Port 1 & 3 of controller A & B should be in a second subnet.
IP Address scheme for the controller pair
The P2000 FC/iSCSI combo G3 MSA (for iSCSI traffic) and P2000 G3 10GbE iSCSI MSA use port 0 of each controller as one failover pair, and port 1 of each controller as a second failover pair. Therefore, port 0 of each controller must be in the same subnet, and port 1 of each controller should be in a second subnet.
In the case of the P2000 G3 iSCSI MSA, set up the scheme similar to the following example (with a netmask of 255.255.255.0):
• Controller A port 0: 10.10.10.100
• Controller A port 1: 10.11.10.120
• Controller A port 2: 10.10.10.110
• Controller A port 3: 10.11.10.130
• Controller B port 0: 10.10.10.140
• Controller B port 1: 10.11.10.150
• Controller B port 2: 10.10.10.160
• Controller B port 3: 10.11.10.170
What if I connected one port of MSA controller A directly to the server’s physical network port 4, and one port of MSA controller B directly to the server’s physical network port 3, with the same IP addresses you are presenting? Is this considered good practice, or is it not recommended?
(in my case NO network switch is involved, and the management port is connected to an internal LAN switch)
Hi, I’m not sure the iSCSI version of P2000 can be connected in Direct Attach, it’s better to check HP for this. I would say it could be done in theory, but needs testing.
Great writeup and diagrams! Pretty much what I’ve been looking for for our small environment.
The only difference is we have a P2000 FC/iSCSI dual cards that was rec’d by a consultant (P2000 currently used to backup a physical server with 4TB of RAW/TIFF image data via FC on A1,B1 and HBA card in server). So I’m wondering, would I be able to run 3 hosts, two HP 1910-24G switches, only connected to 4 iSCSI ports (SP A: A3, A4, SP B: B3,B4)? Our nine VMs are light servers.
The A2,B2 FC is also available, but not sure if we could add HBA to one of the hosts directly to SAN or even mix FC and iSCSI hosts.
Yes you can; you have at least two paths in each storage processor, so you can connect each of them to the two different switches.
Luca, figured that was the case and appreciate your confirmation. Looking forward to building out this setup and learning more about vSphere and SAN storage in the process!
How do I know how many virtual servers I can connect to my HP MSA 2012i iSCSI storage?
Ray, this depends on the expected IOPS of the virtual servers you are going to run, so you can configure the P2000 with the proper amount and types of disks. The storage also has FC connectivity up to 16Gb, so the network bandwidth is not a limit; you will first hit the limit of the storage processors or of the disks you are using.
Great walkthrough. We have configured our environment the same way, but going through the VMware iSCSI best practices, it looks like this is not a recommended configuration. There is also a KB that I have been puzzling over: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2038869.
I’d love to hear your thoughts on this.
Which part of that KB are you referring to? I do port binding in my configurations too.
As I said, some would disagree with my configuration, but I’m fine with it. It has served me well for years, so I don’t feel like changing it.
The main question I have is whether port binding should ever be used when your iSCSI initiators and targets are on multiple VLANs. As I mentioned, we have our environment configured similarly to how you described above, but we are researching whether we should turn off port binding to eliminate any potential issues.
It seems like all of the iSCSI configuration examples I have come across utilize port bindings, regardless of the number of VLANs used.
Ah, ok. Yes, I’ve always used port binding even with separated physical links, otherwise multipath does not work correctly.
“In this sample scenario, there are multiple VMkernel ports on different IP subnets and broadcast domains, and the target ports also reside in a different IP subnet and broadcast domain. In this case, you should not use port binding. If you configure port binding in this configuration, you may experience these issues:
• Rescan times take longer than usual.
• Incorrect number of paths are seen per device.
• Unable to see any storage from the storage device.”
Not a good idea to use port binding with this configuration, see also kb.vmware.com/kb/1009524
Any recommendation on an inexpensive iSCSI switch for this config? Completely isolated.
There are for sure others, but I had good results with HP 1910 or Netgear SG-300.
Dear friends, I need advice from you. Our case is that we recently purchased a P2000 G3 10GbE iSCSI storage with 2 controllers, and we need to connect it to 2 Netgear GS728TS switches. To do this, we installed 2 SFP AGM731F (1Gbps) modules in each switch and 2 SFP+ modules (HP 10Gb SR SFP+, HP part 455855-001) in each P2000 controller, and it turns out we have no connectivity between the two devices; in fact there is no link at all (everything is off, as if nothing were connected). Before buying the P2000 we consulted HP to see whether it was possible to run the P2000 at 1Gbps instead of 10Gbps (its default speed), and they answered that it was possible, but we were always left wondering whether an SFP+ (10Gbps) module can connect to an SFP (1Gbps) module. Thank you very much in advance for your cooperation.
I’ve never used the 10G version of the P2000, so I’m not sure how the connection into 1G networks will work. If HP has stated that it’s possible, your best option is to engage them and have them look into the issue you are facing.
If I read this correctly, I have a stack with 2 switches, like in the picture above, so for the iSCSI VLANs, the RED ports on the top switch all use one VLAN and the bottom GREEN ones use their own second VLAN? If I have them mixed, it means I am crossing paths, right!?
Since you are going to use two physical switches there’s actually no need for VLAN, traffic is already separated at the physical level between the two networks.
What are the steps needed to set the IP Addresses for the MSA 2040 iSCSI Ports?
Hi, you need to go into the storage array’s configuration web page via the management port and configure the IPs on all the different iSCSI ports. You can find details on how to do this in the HP documentation.
I have just found your website and like what I have read. Our company adopted virtualisation based on ESXi about 4 years ago.
I am about to add a 3rd host to our VMware Essentials Plus installation. The original configuration with 2 hosts and iSCSI based shared storage was done by a reseller. The configuration has two gigabit switches and 2 dedicated interfaces per host for iSCSI. In their design, though, all the iSCSI interfaces are on the same subnet. Could you tell me why the best practice is to split them?
Splitting the networks makes it easier to bind the vmkernel ports, together with the explicit failover of the network uplinks in each vmkernel. Each network becomes a path for the iSCSI multipath protocol.
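As a sketch, the explicit failover order can be set per port group with esxcli; the port group and vmnic names below are assumptions:

```shell
# Pin each iSCSI port group to a single active uplink, so that each
# vmkernel port uses exactly one physical NIC (no standby uplink).
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-2 --active-uplinks=vmnic3
```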
What are you doing with the Host tab on the P2000? Does each vSphere host have its own “Host” in the P2000 with all ports? Or are you using just 1 “Host” on the P2000, with all vSphere hosts connecting to it?
I’ve created one host entry for each ESXi server and mapped the P2000 LUNs to each of them.
Thank you for this. I am using this for a new EMC; we moved from a DAS to a SAN, and this helped set up the infrastructure. I have a question about moving from 4 IPs in each SPA and SPB to 2 IPs in each SPA and SPB. I would like to set up a NAS server, but the NAS server needs 2 IPs in each SPA and SPB (for redundancy). The setup also asks for the switches to be connected and in the same network as my workstations. Do you see any issues connecting the switches and using only 2 IPs in each SPA and SPB?
I’m sure the same design can be used even when each SP has only two connections or two IPs, just remove the redundant paths from my design.
I’ve got one question which is a bit off the main subject.
What software did you use to create this network diagram, including the details of the ports?
Omnigraffle for Mac OS X.
Do you see any issues in connecting each Switch together, and to my main network?
No, as long as you can guarantee the proper bandwidth for the iSCSI traffic. The reason to use two physical switches is that in small environments they are usually unmanaged, so there are, for example, no VLANs. And a VLAN is still only a logical segregation of traffic; there’s usually no real guaranteed bandwidth unless you have high-end switches that can do that.