KVM Virtualization Network VLAN CentOS7


I'm configuring my KVM host, and in my network configuration I have 4 network cards:

2 NICs = teaming0, used for management, with access ports configured on the switch side

2 NICs = teaming1, used for guest VM data ports; on the switch side it is configured for LACP, with a trunk allowing VLANs 10, 20, 30, 40, 50

In CentOS 7 I configured VLANs 10, 20, 30, 40, 50, and I'm sure they are already working, because I tested one VLAN and pings were successful.

My question is: can I assign team1_vlan10, team1_vlan20, and so on, directly to my guest VMs instead of bridging them first in the VM's XML network config? I tried to google this and found only how-tos that bridge first; I could not find anything about connecting guests directly to the teaming1 data ports.

Thank you in advance

12 thoughts on - KVM Virtualization Network VLAN CentOS7

  • Teaming1

    What you’ll need to do is create the tagged (VLAN) interfaces on top of your team interface, then create a bridge for each VLAN and attach the tagged interface to its bridge.

    When a guest is connected to that bridge, its traffic is tagged by the VM host on the way out, so you don’t need any VLAN configuration inside the guest.

  • Note that this is pretty much the last use case where you cannot use NetworkManager and instead need the legacy network service … just to save you some pain when trying to configure it ;)
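
    With the legacy network service, the VLAN-on-team plus bridge-per-VLAN stack described above can be sketched as ifcfg files roughly like these (device names are illustrative; adjust to your team interface and VLAN IDs):

    ```
    # /etc/sysconfig/network-scripts/ifcfg-team1.10  -- VLAN 10 on top of team1
    DEVICE=team1.10
    VLAN=yes
    ONBOOT=yes
    BRIDGE=br10

    # /etc/sysconfig/network-scripts/ifcfg-br10  -- bridge the guests attach to
    DEVICE=br10
    TYPE=Bridge
    ONBOOT=yes
    BOOTPROTO=none
    STP=no
    ```

    The bridge carries no IP of its own here; the guest attached to br10 sees untagged traffic, and the host tags it on team1.10.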

  • Yes, you can use macvtap. Note that in the default mode, the guest will not be able to communicate with the host. AFAIK, when using macvtap, you will not have the option of controlling guest network using iptables on the host, as you do when using a bridged network device.
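
    For reference, a macvtap attachment in a guest's libvirt XML looks roughly like this (the device name is an example, and even in 'bridge' mode guest-to-host traffic is still blocked, as noted above):

    ```xml
    <interface type='direct'>
      <source dev='team1.10' mode='bridge'/>
      <model type='virtio'/>
    </interface>
    ```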

  • Hello James,

    Wednesday, April 6, 2016, 5:34:26 PM, you wrote:

    I disagree… NetworkManager works perfectly.

  • Ah, I recall slightly incorrectly from my testing last May … to be fair it’s been nearly a year since I last looked at this ;)

    https://www.hogarthuk.com/?q=node/8

    It’s nmcli specifically that wouldn’t work: the bridge slave had an ethernet type requirement, so it wouldn’t work if the slave type was bond or vlan.

    NM was able to create the connection from the file configuration, however
    …. my apologies

    Here’s the bug with the limitation due to be fixed in the upcoming 7.3

    https://bugzilla.redhat.com/show_bug.cgi?id=83420

    That’s one less reason to use the legacy network service then!

  • I tried last night, and it seems the VLANs that I created are failing to come up. This is what I'm doing:

    On the data trunk port in my home lab I'm doing this:

    2 ports configured as teaming1 (set as link-only ports) -> create a bridge
    (br0) on top of teaming1 -> create slaves vlan10,20,30,40 with br0 as the parent. Am I doing this correctly? Thanks

  • Hello FrancisM,

    Thursday, April 7, 2016, 8:03:38 AM, you wrote:

    Not exactly:

    nmcli connection add con-name br10 ifname br10 type bridge stp no
    nmcli connection down br10
    nmcli connection edit br10
      set ipv4.method disabled
      set ipv6.method ignore
      save
      quit
    nmcli connection up br10

    nmcli connection add con-name vlan10 ifname vlan10 type vlan dev teaming1 id 10
    nmcli connection down vlan10
    nmcli connection edit vlan10
      set connection.master br10
      set connection.slave-type bridge
      verify fix
      save
      quit
    nmcli connection up vlan10

    similarly for br20+vlan20, br30+vlan30, br40+vlan40

  • Hi,

    After trying the ‘nmcli’ method I get the same error as I encountered using ‘nmtui’. By the way, after configuring with nmcli and then checking in nmtui, it shows the same configuration I had done earlier.

    [root@server network-scripts]# systemctl status network.service -l
    ● network.service – LSB: Bring up/down networking
    Loaded: loaded (/etc/rc.d/init.d/network)
    Active: failed (Result: exit-code) since Tue 2015-04-07 17:19:00 SGT;
    1min 13s ago
    Docs: man:systemd-sysv-generator(8)
    Process: 48732 ExecStop=/etc/rc.d/init.d/network stop (code=exited, status=0/SUCCESS)
    Process: 49302 ExecStart=/etc/rc.d/init.d/network start (code=exited, status=1/FAILURE)

    Apr 07 17:17:28 server network[49302]: [ OK ]
    Apr 07 17:17:29 server network[49302]: Bringing up interface team1_slave_enp5s0f1: Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/69)
    Apr 07 17:17:29 server network[49302]: [ OK ]
    Apr 07 17:19:00 server network[49302]: Bringing up interface vlan247:  Error: Timeout 90 sec expired.
    Apr 07 17:19:00 server network[49302]: [FAILED]
    Apr 07 17:19:00 server network[49302]: Bringing up interface br247: [ OK ]
    Apr 07 17:19:00 server systemd[1]: network.service: control process exited, code=exited status=1
    Apr 07 17:19:00 server systemd[1]: Failed to start LSB: Bring up/down networking.
    Apr 07 17:19:00 server systemd[1]: Unit network.service entered failed state.
    Apr 07 17:19:00 server systemd[1]: network.service failed.
    [root@server network-scripts]#

    I will try another configuration for this bridge. Does it matter that I use TEAMING and not BONDING in this bridging configuration? The two trunked NICs are configured as a TEAM in my CentOS 7.

  • Hello FrancisM,

    Thursday, April 7, 2016, 11:20:20 AM, you wrote:

    You must use either NetworkManager.service or network.service, not both:

    systemctl disable network.service
    systemctl enable NetworkManager.service

    nmcli connection show

    I am using teaming + bridge + vlan. All works as expected.
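
    For completeness, a minimal sketch of that teaming + bridge + vlan stack done entirely with one-line nmcli commands (NIC names and the LACP runner are assumptions; on CentOS 7 releases before the bugzilla fix mentioned earlier in the thread, the vlan-as-bridge-slave step may have to go through the interactive editor or ifcfg files instead):

    ```
    # Team of two NICs with an LACP runner (device names are examples)
    nmcli connection add type team con-name team1 ifname team1 config '{"runner": {"name": "lacp"}}'
    nmcli connection add type team-slave con-name team1-port1 ifname enp5s0f0 master team1
    nmcli connection add type team-slave con-name team1-port2 ifname enp5s0f1 master team1

    # One bridge per VLAN; no IP on the bridge itself
    nmcli connection add type bridge con-name br10 ifname br10 bridge.stp no ipv4.method disabled ipv6.method ignore

    # VLAN 10 on top of the team, enslaved to the bridge
    nmcli connection add type vlan con-name vlan10 ifname vlan10 dev team1 id 10 master br10 slave-type bridge
    ```

    Repeat the last two commands for each VLAN (br20+vlan20, etc.), then point each guest's interface at the matching bridge.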