Hoping the hive-mind can help me out here. I have two ESXi hosts connected to NFS datastores. I created a virtual switch with two portgroups attached to it: one for management and one for storage. Both portgroups are on the default TCP/IP stack, and each has its own VMkernel NIC. For some reason, ONE of the two hosts has decided it wants to talk to the NFS shares through the management NIC, even though the NFS share is on the same subnet as the storage NIC. The management NIC is the one with the default gateway. I can't find a settings difference between the two hosts, and the routing table looks fine to me.
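For reference, these are the kinds of commands I've been using to compare the VMkernel config on the two hosts (vmk names as in the routing table below):

# List VMkernel interfaces with their portgroups and TCP/IP netstacks
esxcli network ip interface list

# Show the IPv4 address and netmask assigned to each vmk
esxcli network ip interface ipv4 get

Routing table (same on both hosts):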
Network     Netmask        Gateway     Interface  Source
----------  -------------  ----------  ---------  ------
default     0.0.0.0        10.55.55.1  vmk2       DHCP
10.55.55.0  255.255.255.0  0.0.0.0     vmk2       MANUAL
10.44.1.0   255.255.255.0  0.0.0.0     vmk0       MANUAL
The NFS server is at 10.44.1.123, so I'd expect that traffic to go out the vmk0 interface, but on one host it doesn't. Any ideas why not?
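In case it helps anyone narrow this down, these are the checks I'd expect to show which vmk the NFS traffic is actually using (10.44.1.123 is the NFS server; interface names as above):

# Confirm the route lookup table (this is where the output above came from)
esxcli network ip route ipv4 list

# Force a ping out of the storage VMkernel NIC to confirm reachability on that path
vmkping -I vmk0 10.44.1.123

# List the mounted NFS datastores and the remote hosts they point at
esxcli storage nfs list

# Watch live per-vmknic traffic to see which interface is really carrying
# the NFS packets (press 'n' in esxtop for the network view)
esxtop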