Hosted VMs, VLANs, And Firewalld


I’m looking for some information about the interaction of KVM, VLANs, firewalld, and the kernel’s forwarding configuration. I would especially appreciate input from anyone already running a similar configuration in production. In short, I’m trying to figure out whether my current configuration is inadvertently opening up traffic across network segments.

On earlier versions of CentOS I’ve run HA clusters with and without VMs (Xen-based, in those cases). On those clusters, both the host machines’ IPs and the VM IPs were in the same subnet (call it the DMZ).

In a CentOS 7 test HA cluster I’m building, I want both traditional services running on the cluster and VMs running on both nodes (not necessarily under control of the cluster). In the new setup, I’d like to keep *some* VMs on the same subnet as the host machines’ IPs, but put other VMs on different VLANs. So the physical topology looks like this:

—————– DMZ —————- [the rest of the ASCII topology diagram did not survive archiving; per the discussion below, each host has a DMZ-facing interface plus a trunk interface carrying VLANs 2 and 3]
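
For concreteness, the VLAN legs are subinterfaces of a single trunk port on each host. CentOS actually sets this up via ifcfg files; the equivalent sketch in ip(8) terms is:

    ip link add link enp1s0 name enp1s0.2 type vlan id 2   # VLAN 2 leg
    ip link add link enp1s0 name enp1s0.3 type vlan id 3   # VLAN 3 leg
    ip link set dev enp1s0.2 up
    ip link set dev enp1s0.3 up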

4 thoughts on - Hosted VMs, VLANs, And Firewalld

  • On a purely subjective note: I think that’s a bad design. One of the primary benefits of virtualization and containers is isolating the applications you run from the base OS. Putting services other than virtualization onto the system that runs virtualization just makes upgrades more difficult later.

    What do you mean by “essentially”?

    That doesn’t make any sense at all. In what way are enp1s0.2 and enp1s0.3 layered on top of the bridge device?

    Look at the output of “brctl show”. Are those two devices slaves of br2, like enp1s0 is? If so, you’re bridging the network segments.

    You should have individual bridges for enp1s0, enp1s0.2 and enp1s0.3.
    If there were any IP addresses needed by the KVM hosts, those would be on the bridge devices, just like on br0.
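
    With that layout, “brctl show” should report each interface under its own bridge, roughly like this (bridge IDs elided):

    bridge name  bridge id          STP enabled  interfaces
    br0          8000.xxxxxxxxxxxx  no           enp1s0
    br2          8000.xxxxxxxxxxxx  no           enp1s0.2
    br3          8000.xxxxxxxxxxxx  no           enp1s0.3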

    How? Are you using macvtap for those? I’d suggest sticking with either bridged networking or macvtap, not a mix of both.
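
    For reference, the two attachment types look like this in a libvirt domain definition (the bridge and device names here are illustrative):

    <interface type='bridge'>
      <source bridge='br2'/>                  <!-- bridged: the VM plugs into br2 -->
      <model type='virtio'/>
    </interface>

    <interface type='direct'>
      <source dev='enp1s0.2' mode='bridge'/>  <!-- macvtap on the VLAN subinterface -->
      <model type='virtio'/>
    </interface>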

    Correct:
    /usr/lib/sysctl.d/00-system.conf:# Disable netfilter on bridges.
    /usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-ip6tables = 0
    /usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-iptables = 0
    /usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-arptables = 0
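
    You can confirm the running values with sysctl; unless something has overridden those shipped defaults, each should report 0:

    sysctl net.bridge.bridge-nf-call-iptables
    net.bridge.bridge-nf-call-iptables = 0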

    Interfaces are part of some zone, whether an address is assigned or not. In terms of implementation, that means that filtering is set up before addresses. If you set up addresses and then filtering, there’s a *very* brief window where traffic isn’t filtered, and that is bad.
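
    Zone membership is bound to the interface, not to an address, e.g. (assuming the predefined “dmz” zone is the one in use):

    firewall-cmd --permanent --zone=dmz --change-interface=enp1s0
    firewall-cmd --reload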

    Not unless you change the net.bridge.bridge-nf-call-* settings.

    No, you don’t. It’s active because libvirtd defines a NAT network by default, and that one requires IP forwarding.
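
    You can see that definition with “virsh net-dumpxml default”; it looks roughly like this (UUID and MAC elided):

    <network>
      <name>default</name>
      <forward mode='nat'/>
      <bridge name='virbr0' stp='on' delay='0'/>
      <ip address='192.168.122.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.122.2' end='192.168.122.254'/>
        </dhcp>
      </ip>
    </network>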

    Not in the default firewalld rule set.

    Examine the output of “iptables -L -nv” and check all of the ACCEPT rules.
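
    For example, to focus on the forwarding path (the chain policy line will match too, which is worth checking anyway):

    iptables -L FORWARD -nv | grep ACCEPT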

  • As a side note, it is actually possible now to have one bridge manage multiple independent VLANs. Unfortunately this is basically undocumented
    (at least I can’t find any decent documentation about it). One user of this is Cumulus Linux:
    https://support.cumulusnetworks.com/hc/en-us/articles/204909397-Comparing-Traditional-Bridge-Mode-to-VLAN-aware-Bridge-Mode

    Apparently you can manage this with the “bridge” command. Here is what I get on my Fedora 22 system:

    0 dennis@nexus ~ $ bridge fdb
    01:00:5e:00:00:01 dev enp4s0 self permanent
    33:33:00:00:00:01 dev enp4s0 self permanent
    33:33:ff:ef:69:e6 dev enp4s0 self permanent
    01:00:5e:00:00:fb dev enp4s0 self permanent
    01:00:5e:00:00:01 dev virbr0 self permanent
    01:00:5e:00:00:fb dev virbr0 self permanent
    52:54:00:d3:ca:6b dev virbr0-nic master virbr0 permanent
    52:54:00:d3:ca:6b dev virbr0-nic vlan 1 master virbr0 permanent
    01:00:5e:00:00:01 dev virbr1 self permanent
    52:54:00:a6:af:5d dev virbr1-nic vlan 1 master virbr1 permanent
    52:54:00:a6:af:5d dev virbr1-nic master virbr1 permanent
    0 dennis@nexus ~ $ bridge vlan
    port         vlan ids
    virbr0        1 PVID Egress Untagged
    virbr0-nic    1 PVID Egress Untagged
    virbr1        1 PVID Egress Untagged
    virbr1-nic    1 PVID Egress Untagged
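
    For anyone who wants to experiment, a VLAN-aware bridge can be built along these lines (a sketch; “brv” and the port names are illustrative):

    ip link add name brv type bridge
    echo 1 > /sys/class/net/brv/bridge/vlan_filtering   # enable VLAN filtering
    ip link set dev enp4s0 master brv
    bridge vlan add dev enp4s0 vid 2                # trunk port carries VLAN 2 tagged
    bridge vlan add dev enp4s0 vid 3                # ... and VLAN 3
    ip link set dev vnet0 master brv
    bridge vlan add dev vnet0 vid 2 pvid untagged   # VM port: untagged member of VLAN 2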

    I’m not sure if the CentOS 7 kernel is recent enough to support this, but I thought I’d mention it anyway to make people aware that the “one bridge per VLAN” model is no longer the only one in existence.

    Regards,
    Dennis

  • I understand. In this case the primary role of these machines is a non-virtualized HA cluster. Where the VMs enter the picture is for a small number of services that I’d prefer to be isolated from the DMZ, and in this case there is sensitivity to the physical machine count. I’m aware of how this affects upgrades, having been through the cycle a few times. It is what it is. (But thanks.)

    The default routes for the DMZ, vlan2, and vlan3 point to different interfaces of the same (OpenBSD) firewall cluster, but from the perspective of both the physical nodes and the VMs, they are different default routes. The firewall cluster itself is multihomed on the upstream side, but again, that is not visible to the nodes and VMs.

    The fact that both the cluster and the VMs are protected by the OpenBSD firewalls is the reason that I’m primarily concerned with vectors coming from the non-DMZ VMs onto the DMZ via the hosts.

    No, it doesn’t. Brain fart on my part, too tired, too many noisy distractions from kids, too many cosmic rays, or something :)

    br0 is layered on eno1.
    br2 is layered on enp1s0.2.
    br3 is layered on enp1s0.3.

    The non-DMZ VMs get connected to br2 and br3.
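
    In CentOS 7 network-scripts terms, each VLAN leg looks roughly like this (abridged):

    # ifcfg-enp1s0.2
    DEVICE=enp1s0.2
    VLAN=yes
    ONBOOT=yes
    BRIDGE=br2

    # ifcfg-br2
    DEVICE=br2
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none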

    However, in this case the host won’t have addresses (per my correction above) on either br2 or br3. It does sound, though, like having enp1s0, enp1s0.2, and enp1s0.3 in the ‘DMZ’ zone means that filtering rules on the host will affect inbound traffic to the VMs on br2 and br3.

    At least that question is easy to verify empirically, and if it holds, it would argue that the three enp1s0* interfaces should be in their own zone, presumably with a lenient rule set.
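
    If so, something along these lines should do it (the zone name is arbitrary):

    firewall-cmd --permanent --new-zone=trunk
    firewall-cmd --permanent --zone=trunk --set-target=ACCEPT
    firewall-cmd --permanent --zone=trunk --change-interface=enp1s0
    firewall-cmd --permanent --zone=trunk --change-interface=enp1s0.2
    firewall-cmd --permanent --zone=trunk --change-interface=enp1s0.3
    firewall-cmd --reload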

    Ah. That makes sense. So in this case, where I don’t need a NAT network in the libvirtd config, I should be able to eliminate it and thus eliminate the forwarding sysctls.
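
    Presumably along these lines:

    virsh net-destroy default               # stop the NAT network
    virsh net-autostart default --disable   # don't bring it back at boot
    virsh net-undefine default              # remove the definition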

    Thanks for all of your feedback.

    Devin

  • No, because:

    /usr/lib/sysctl.d/00-system.conf:# Disable netfilter on bridges.
    /usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-ip6tables = 0
    /usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-iptables = 0
    /usr/lib/sysctl.d/00-system.conf:net.bridge.bridge-nf-call-arptables = 0

    (Unless you change the defaults)