Cross-site vMotion, Default Gateway and TCP/IP stacks


Cross-vCenter vMotion is quite popular among customers. The thing is, most of the time the vCenters, or the environments they manage, sit in different sites or locations. They simply do not share the same vMotion network, which means vMotion traffic needs to be routed, and the vMotion-enabled vmkernel interface has to be able to reach a gateway.

With the latest trend of hybrid and private-on-public clouds, I started to get more and more questions about how to solve this situation. Should customers run vMotion on the management network, so that it can reach the default gateway? Or is a static route required on each host to send vMotion traffic to a certain gateway?

Although both of the above-mentioned methods may work, the answer to these questions is: there is a better way to do it.
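
For reference, the static-route workaround mentioned above would look roughly like this, run on every host. The remote vMotion subnet and the local gateway address below are hypothetical, so adjust them to your environment:

# Hypothetical example: send traffic for the remote site's vMotion subnet
# to the local vMotion VLAN gateway, on the default TCP/IP stack
esxcli network ip route ipv4 add --network 10.20.30.0/24 --gateway 10.10.10.1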

Surprisingly, not a lot of people know that starting with vSphere 5.5 there is a feature called TCP/IP stacks. You can configure several different TCP/IP stacks on your ESXi host, and each of them has its own gateway and DNS configuration. During vmkernel interface creation you can then choose which TCP/IP stack to use. This is quite well documented in VMware’s official documentation, but I will still show a brief how-to.

In vSphere 6 there are several TCP/IP stacks created by default, and you can create custom ones using the command line.

Default ones are:

  • Default – the default TCP/IP stack, used by the management network
  • Provisioning – used for provisioning traffic
  • vMotion – used for vMotion traffic

If you are an NSX user, you will notice that a custom stack named vxlan is created during VTEP creation.
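
If you prefer the command line, the stacks present on a host can also be listed with esxcli (the exact list depends on your version and configuration):

# List the TCP/IP stacks (netstacks) configured on this host
esxcli network ip netstack list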

Here is where you can find and edit them in the Web Client:

Click on your host and navigate to Manage > Network > TCP/IP configuration. There you will see the list of TCP/IP stacks. Select the one you need to edit and click the small pencil icon.

Edit TCP/IP stack for vMotion

In edit mode you can configure the gateway and DNS servers for the given TCP/IP stack.
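
The same settings can also be applied from the command line. This is only a sketch: it assumes the vMotion stack’s system name is vmotion and uses hypothetical gateway and DNS addresses, so check the esxcli reference for your ESXi version before relying on it:

# Set the default gateway for the vMotion TCP/IP stack (hypothetical address)
esxcli network ip route ipv4 add --netstack vmotion --network default --gateway 10.10.10.1
# Add a DNS server to the vMotion TCP/IP stack (hypothetical address)
esxcli network ip dns server add --netstack vmotion --server 10.10.10.53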

Once you have it ready, select the proper TCP/IP stack during vmkernel port creation, as shown in the screenshot.

Note: Creating a vmkernel interface on the vMotion TCP/IP stack will disable vMotion on all other vmkernel ports that use the default TCP/IP stack.


Create vmkernel interface for vMotion
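
The same step can be done from the command line as well. This is just a sketch: the vmk number, port group name and IP address below are hypothetical, and the port group must already exist on the host:

# Create a vmkernel interface directly on the vMotion TCP/IP stack
esxcli network ip interface add --interface-name vmk2 --portgroup-name vMotion-PG --netstack vmotion
# Give it a static IPv4 address (hypothetical address)
esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 10.10.10.11 --netmask 255.255.255.0 --type static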

Nice and elegant solution, isn’t it?

Now, the first time I saw this, my first question was: “Can I create my own TCP/IP stack, for example for iSCSI traffic, or for a secondary management interface?” The answer is yes and no. Yes, because you can create a custom TCP/IP stack, but no, you cannot create a TCP/IP stack for the management network. In fact, you cannot assign any specific service when creating a TCP/IP stack; you cannot even create a second TCP/IP stack for vMotion.

But for iSCSI, yes, you can. This can be especially useful for SoftLayer users who don’t want their iSCSI traffic to share a VLAN with the management network. UPDATE 23.02.2017: There are limitations, though, and based on the comments from readers below, it looks like custom TCP/IP stacks for iSCSI are not supported by VMware. So you may try it, but I would not recommend using it in production.

To create a custom TCP/IP stack, get to the command line interface of your ESXi host, either over SSH or from the direct console, and execute the following command:

esxcli network ip netstack add -N "NameOfStack"

Then return to your Web Client and edit the settings of the new TCP/IP stack as shown above.
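
If you would rather stay on the command line, you can also give the new stack its own gateway and test connectivity through it. The stack name, vmk number and addresses below are hypothetical:

# Set a default gateway for the custom TCP/IP stack (hypothetical address)
esxcli network ip route ipv4 add --netstack NameOfStack --network default --gateway 10.30.30.1
# Test reachability through a vmkernel interface that lives in the custom stack
vmkping -I vmk3 -S NameOfStack 10.40.40.50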

That’s it, as simple as that. Hope it helps. Let me know if you have any questions or comments.


About Aram Avetisyan

Aram Avetisyan is an IT specialist with more than 18 years of experience. He has a rich background in various IT-related fields such as cloud, virtualization and SDN. He holds several industry-level certifications, including but not limited to VCIX-DCV and VCIX-NV. He has also been a vExpert in the years 2014-2021.

14 Comments

  1. Did you really try mounting an iSCSI (or NFS) datastore with the newly created custom TCP/IP stack? It never works for me.

    • Yes, I have iSCSI running on a custom TCP/IP stack. The only downside is that if the vmk is on a custom TCP/IP stack you cannot use it for port binding, hence no multipathing on the ESXi side. But this looks to me more like a bug than a feature, so hopefully VMware will fix it. Haven’t tried NFS though.

      • Based on my understanding, if you are not using port binding then iSCSI would still use the default netstack, because there’s no way to force the iSCSI traffic to go through the vmknic created in the custom netstack. Right?

        • Not quite correct. iSCSI without port binding will use the first connection that has access to the storage. That means if you have two vmk interfaces, one in the default stack and one in the custom stack, and both can reach the storage, the default one will be used. To avoid this, you can use IP-based ACLs on the storage device and allow only the IPs of the vmk interfaces in the custom stack to access the storage. That way you bind the traffic to the vmk in the custom stack.

          Hope this makes sense.

          • Don’t know why it does not work for me. In my environment there’s no vmknic in the default netstack which can reach the iSCSI server. The vmknic in the custom netstack can vmkping (with -I and -S) to the iSCSI server but the iSCSI datastores cannot be found with storage adapter rescanning.

          • It has to be something with access at the storage device level. I would check the access control mechanism: is it IQN-based authentication or IP-based ACLs, or both? What about the CHAP settings? Does it connect fine as part of the default TCP/IP stack with static routes?

          • Yes, it works fine with the default netstack. I’m using IQN-based auth. No magic things.

          • Hard to say remotely. This may sound basic, but you need to check the logs on both the storage and the ESXi host. It has to be something blocking the communication, especially considering you can ping the storage device using the custom stack.

          • Vmkping works fine just because I specified the “-I vmk -S stack” option which forces the ping to go through the specific vmknic/netstack.

          • I would suggest trying static routes instead of custom stacks.

          • Confirmed with developers @vmw. Custom netstacks don’t support iSCSI and NFS.

          • VMware not supporting a certain config and the config not working are different things. It may not be supported, but it works for me. Unless it’s magic. Anyway, I’d better add a note to the article so as not to confuse people.

