We have a host that has multiple IPv4 addresses aliased to eth0. The primary address is 216.185.71.x and the alias is 192.168.6.x.
This host connects to devices on both netblocks without problems. Only default routing is used and it looks like this:
192.168.6.0/24 dev eth0 proto kernel scope link src 192.168.6.x
216.185.71.0/24 dev eth0 proto kernel scope link src 216.185.71.x
169.254.0.0/16 dev eth0 scope link metric 1002
default via 192.168.6.1 dev eth0 src 192.168.6.x
default via 216.185.71.1 dev eth0
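To see which route and source address the kernel actually selects for a given destination, `ip route get` can be queried directly. The destination 8.8.8.8 below is just a stand-in for any off-site address:

```shell
# Ask the kernel which route and source address it would pick for an
# off-site destination (8.8.8.8 is a placeholder for any external host).
ip route get 8.8.8.8

# On this host, given the table above, the output should have the shape:
#   8.8.8.8 via 192.168.6.1 dev eth0 src 192.168.6.x
# i.e. the first matching default route wins, and its "src" hint is what
# selects the private address for off-site traffic.
```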
When the system connects to internal systems via SSH, it uses source address
216.185.71.x for devices on that netblock and 192.168.6.x for devices on the other.
The problem is that when we establish an SSH connection off-site, to another netblock altogether, the host uses 192.168.6.x as the source address; because that traffic is masqueraded, the destination then sees the public-side IP address of our gateway router as the point of origin.
I have solved this by explicitly binding SSH to the public IPv4 address when connecting, using the -b 216.185.71.x parameter. But I have two questions I would like to find answers for:
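For reference, the workaround invocation looks like this (the remote host name is a placeholder):

```shell
# Force ssh to bind its outgoing connection to the public address,
# overriding the kernel's default source-address selection.
ssh -b 216.185.71.x user@offsite-host.example.com
```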
1. Why is SSH using the private IP in preference to the public IP when connecting to off-site addresses?
2. How does one configure the routing table on network startup to specifically detail the route particular addresses are supposed to take?
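To make question 2 concrete: what I imagine is something like a static route file, assuming a RHEL-style network-scripts setup. The file name and the gateway placeholder below are my guesses at the mechanism, not a tested configuration:

```
# /etc/sysconfig/network-scripts/route-eth0 (hypothetical)
# Pin the preferred source address per destination at network startup:
default via <public-gateway> dev eth0 src 216.185.71.x
192.168.6.0/24 dev eth0 src 192.168.6.x
```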
For diagnosis, here are the ifcfg scripts used for both interfaces: