Default ESXi 5.5 values which need to be changed

The default values are improved compared with vSphere 5.1, but there are still defaults you need to change right after host installation.

Network
MTU is set to 1500 by default and has to be changed to 1600 for VXLAN

VMware recommends setting the physical and virtual switch MTU to 1600 for VXLAN, rather than the default of 1500. This accounts for a standard guest Ethernet frame of 1514 bytes (a 1500-byte MTU plus a 14-byte Ethernet header) plus an optional 802.1Q tag of four bytes, giving a guest frame of 1518 bytes. A port group provisioned on a standard or distributed switch that uses a VLAN adds another four bytes, bringing the Ethernet frame to 1522 bytes.

IPv4 example:

Outer Ethernet header (SA, DA, eth_type 0x0800): 14 bytes
Outer IPv4 header: 20 bytes
Outer UDP header: 8 bytes
VDL2 header: 8 bytes
VDL2 IPv4 encapsulated frame: 1522 + 14 + 20 + 8 + 8 = 1572 bytes
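To apply the change on the host side, here is a minimal sketch from the ESXi shell; vSwitch0 and vmk1 are placeholder names, and the MTU of a distributed switch is edited in the vSphere Web Client under the dvSwitch properties:

# Set the MTU on a standard vSwitch (vSwitch0 is a placeholder name)
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=1600

# Set the MTU on the VMkernel interface carrying VXLAN traffic (vmk1 is a placeholder)
esxcli network ip interface set --interface-name=vmk1 --mtu=1600

# Verify the new values
esxcli network vswitch standard list
esxcli network ip interface list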

VMware KB2005900

Network
Increasing the maximum number of ports designated for an ESXi 5.5 host

Standard switch default is 120 ports
Imagine you have 3 hosts and 250 VMs. This configuration will run fine, but what happens if a host fails?
You are left with 2 hosts and 250 VMs; after an HA failover only 240 ports are available, so 10 VMs will be left without network.

Distributed switch default is 256 ports – this could be OK, but do the math to be sure.
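You can read the current counts from the ESXi shell as shown below; per KB2008095, raising the standard switch port count itself is done in the vSphere Client (vSwitch Properties > General > Number of Ports) and requires a host reboot:

# "Configured Ports" is the limit, "Used Ports" shows current consumption
esxcli network vswitch standard list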

VMware KB2008095

Storage
Default configuration allows for only eight NFS mounts per ESXi host.

NFS.MaxVolumes limits the number of NFS datastores that can be mounted by the vSphere ESXi/ESX host concurrently. The default value needs to be increased along with several related parameters; see the KB article for details.
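The change itself is one line per option from the ESXi shell; a sketch assuming the ESXi 5.5 limits from KB2239 (verify them against your build):

# Allow up to 64 NFS datastores (default 8, maximum 256 on ESXi 5.5)
esxcli system settings advanced set -o /NFS/MaxVolumes -i 64

# KB2239 also calls for raising the TCP/IP heap used by the NFS client;
# these two changes take effect only after a host reboot
esxcli system settings advanced set -o /Net/TcpipHeapSize -i 32
esxcli system settings advanced set -o /Net/TcpipHeapMax -i 512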

VMware KB2239

About Roman Macek

Roman is a VMware & virtualization SME. He grew up with Microsoft server administration; in 2006 he switched to VMware. He focuses on designing solutions for hybrid cloud, business continuity, high availability, disaster recovery, backup and VDI. He has a background and mindset in hardware and VMware operating systems administration. He holds certifications from Microsoft, VMware and other industry vendors: VCAP5-DCD, VCAP5-DCA, VCP5, VCP5-DT, MCITP, MCTS, vExpert 2015/16.

Comments

  1. Hello,

    I have also seen many times that Disk.DiskMaxIOSize is not changed from its default value; on several customer systems this caused latency problems.
    The default is 32 MB; with most storage arrays 128 KB works a lot better. The 3PAR optimum is 1 MB, as it handles data a bit differently.

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2036863

    Also, if I remember correctly, the default Fibre Channel MPIO policy is MRU (Most Recently Used), meaning only one path is used all the time. Check your array manufacturer's documentation for this setting. Round Robin is the most common MPIO policy, but you still need the correct array type (is it ALUA-aware, etc.).
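    For reference, a minimal sketch of both changes from the ESXi shell, assuming the advanced option named in KB2036863 and a placeholder device ID; confirm the right values with your storage vendor first:

    # Cap the largest I/O ESXi issues to the array at 128 KB
    # (the value is in KB; the default of 32767 KB is the 32 MB mentioned above)
    esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 128

    # Switch a single device from MRU to Round Robin (naa.600... is a placeholder)
    esxcli storage nmp device set --device naa.600... --psp VMW_PSP_RR

    # Or make Round Robin the default PSP for an ALUA-aware array type
    esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR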
